TUGAS TIK (ICT Assignment). Chair: Wandi Mulyana. Members: Lina Rahayu, Taufik Wahyudin

ICT assignment in class XI IPS 3

May 13, 2023



Transcript
Page 1: ICT assignment in class XI IPS 3

TUGAS TIK

Chair: Wandi Mulyana
Members: Lina Rahayu, Taufik Wahyudin

Luki Lukmanul Hakim
Abdul Hamid
Fauzi
Yandi Mulyadi

Cryptography

In cryptography, an adversary (rarely opponent or enemy) is a malicious entity whose aim is to prevent the users of a cryptosystem from achieving their goal (primarily privacy, integrity, and availability of data). An adversary's efforts might take the form of attempting to discover secret data, corrupting some of the data in the system, spoofing the identity of a message sender or receiver, or forcing system downtime. Actual adversaries, as opposed to idealized ones, are referred to as attackers. Not surprisingly, the former term predominates in the cryptographic literature and the latter in the computer security literature. Eve, Mallory, Oscar and Trudy are all adversarial characters widely used in both types of texts. This notion of an adversary helps both intuitive and formal reasoning about cryptosystems by casting security analysis of cryptosystems as a 'game' between the users and a centrally co-ordinated enemy. The notion of security of a cryptosystem is meaningful only with respect to particular attacks (usually presumed to be carried out by particular sorts of adversaries). There are several types of adversaries, depending on what capabilities or intentions they are presumed to have. Adversaries may be[1] computationally bounded or unbounded (i.e. in terms of time and storage resources), eavesdropping or Byzantine (i.e. passively listening on or actively corrupting data in the channel), static or adaptive (i.e. having fixed or changing behavior), mobile or non-mobile (e.g. in the context of network security), and so on. In actual security practice, the attacks assigned to such adversaries are often seen, so such notional analysis is not merely theoretical.

How successful an adversary is at breaking a system is measured by its advantage: the difference between the adversary's probability of breaking the system and the probability that the system can be broken by simply guessing. The advantage is specified as a function of the security parameter.
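The notion of advantage can be illustrated with a small Monte-Carlo sketch. The "cryptosystem" below is a deliberately weak hypothetical one that leaks the secret bit 75% of the time; the function names and the leak probability are invented for illustration only.

```python
import random

def estimate_advantage(adversary, trials=20000, seed=1):
    """Monte-Carlo estimate of |Pr[adversary guesses the secret bit] - 1/2|."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        secret = rng.randint(0, 1)
        # A deliberately weak toy cipher: it leaks the true bit with
        # probability 0.75, otherwise a uniformly random bit.
        leak = secret if rng.random() < 0.75 else rng.randint(0, 1)
        if adversary(leak) == secret:
            wins += 1
    return abs(wins / trials - 0.5)

# An adversary that ignores the leak gains (almost) no advantage, while one
# that reads the leak is right 87.5% of the time: advantage near 0.375.
print(estimate_advantage(lambda leak: 0))
print(estimate_advantage(lambda leak: leak))
```

Against an ideal cipher both estimates would hover near zero; a measurably positive advantage is exactly what "breaking the system" means in this game formulation.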

In telecommunications, a communications protocol is a system of rules that allows two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. These are the rules or standards that define the syntax, semantics and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.[1] Communicating systems use well-defined formats (protocols) for exchanging messages. Each message has an exact meaning intended to elicit a response from a range of possible responses pre-determined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communications protocols have to be agreed upon by the parties involved.[2] To reach agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communications as programming languages are to computations.[3]

Information security, sometimes shortened to InfoSec, is the practice of defending information from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction. It is a general term that can be used regardless of the form the data may take (e.g. electronic, physical).[1]

Confidentiality is a set of rules or a promise that limits access or places restrictions on certain types of information.
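The distinction between syntax and semantics in a protocol can be sketched with a toy framing format (a hypothetical format invented for illustration, not a real standard): the length header fixes the syntax, while what the payload means is the semantics both sides must agree on.

```python
import struct

def encode(message: str) -> bytes:
    """Frame a message: a 4-byte big-endian length header (the agreed
    syntax) followed by that many bytes of UTF-8 payload."""
    payload = message.encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def decode(frame: bytes) -> str:
    """Parse a frame, with a minimal error-recovery check for truncation."""
    (length,) = struct.unpack(">I", frame[:4])
    payload = frame[4:4 + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    return payload.decode("utf-8")

print(decode(encode("hello")))  # round-trips: prints hello
```

Real protocols add version fields, checksums and retransmission rules on top of this skeleton, but the division of labor between agreed format and agreed meaning is the same.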


Legal confidentiality

Lawyers are often required by law to keep confidential anything pertaining to the representation of a client. The duty of confidentiality is much broader than the attorney–client evidentiary privilege, which only covers communications between the attorney and the client. Both the privilege and the duty serve the purpose of encouraging clients to speak frankly about their cases, so that lawyers can carry out their duty to provide clients with zealous representation. Otherwise, the opposing side may be able to surprise the lawyer in court with something he did not know about his client, which may weaken the client's position. Also, a distrustful client might hide a relevant fact which he thinks is incriminating, but which a skilled lawyer could turn to the client's advantage (for example, by raising affirmative defenses like self-defense). However, most jurisdictions have exceptions for situations where the lawyer has reason to believe that the client may kill or seriously injure someone, may cause substantial injury to the financial interest or property of another, or is using (or seeking to use) the lawyer's services to perpetrate a crime or fraud.

In such situations the lawyer has the discretion, but not the obligation, to disclose information designed to prevent the planned action. Most states have a version of this discretionary disclosure rule under Rules of Professional Conduct, Rule 1.6 (or its equivalent). A few jurisdictions have made this traditionally discretionary duty mandatory; for example, see the New Jersey and Virginia Rules of Professional Conduct, Rule 1.6. In some jurisdictions the lawyer must try to convince the client to conform his or her conduct to the boundaries of the law before disclosing any otherwise confidential information. Note that these exceptions generally do not cover crimes that have already occurred, even in extreme cases where murderers have confessed the location of missing bodies to their lawyers but the police are still looking for those bodies. The U.S. Supreme Court and many state supreme courts have affirmed the right of a lawyer to withhold information in such situations; otherwise, it would be impossible for any criminal defendant to obtain a zealous defense. California is famous for having one of the strongest duties of confidentiality in the world; its lawyers must protect client confidences at "every peril to himself [or herself]" under former California Business and Professions Code section 6068(e). Until an amendment in 2004 (which turned subsection (e) into subsection (e)(1) and added subsection (e)(2) to section 6068), California lawyers were not even permitted to disclose that a client was about to commit murder or assault. The Supreme Court of California promptly amended the California Rules of Professional Conduct to conform to the new exception in the revised statute. Recent legislation in the UK curtails the confidentiality professionals like lawyers and accountants can maintain at the expense of the state.[citation needed] Accountants, for example, are required to disclose to the state any suspicions of fraudulent accounting and even the legitimate use of tax saving schemes if those schemes are not already known to the tax authorities.

History of the English law about confidentiality

The modern English law of confidence stems from the judgment of the Lord Chancellor, Lord Cottenham,[1] in which he restrained the defendant from publishing a catalogue of private etchings made by Queen Victoria and Prince Albert (Prince Albert v Strange). However, the jurisprudential basis of confidentiality remained largely unexamined until the case of Saltman Engineering Co. Ltd. v Campbell Engineering Co. Ltd.,[2] in which the Court of Appeal upheld the existence of an equitable doctrine of confidence, independent of contract. In Coco v A.N. Clark (Engineers) Ltd [1969] R.P.C. 41, Megarry J developed an influential tripartite analysis of the essential ingredients of the cause of action for breach of confidence: the information must be confidential in quality[3] and nature,[4][5] it must be imparted so as to import an obligation of confidence,[6][7] and there must be an unauthorised use[8][9] of that information resulting in the detriment[10] of the party communicating it.[11] The law in its then current state of development was authoritatively summarised by Lord Goff in the Spycatcher case.[12] He identified three qualifications limiting the broad general principle that a duty of confidence arose when confidential information came to the knowledge of a person (the confidant) in circumstances where he had notice that the information was confidential, with the effect that it would be just in all the circumstances that he should be precluded from disclosing the information to others. First, once information had entered the public domain, it could no longer be protected as confidential. Secondly, the duty of confidence applied neither to useless information, nor to trivia. Thirdly, the public interest in the preservation of a confidence might be outweighed by a greater public interest favouring disclosure.

The incorporation into domestic law of Article 8 of the European Convention on Human Rights by the Human Rights Act 1998 has since had a profound effect on the development of the English law of confidentiality. Article 8 provides that everyone has the right to respect for his private and family life, his home and his correspondence. In Campbell v MGN Ltd,[13] the House of Lords held that the Daily Mirror had breached Naomi Campbell's confidentiality rights by publishing reports and pictures of her attendance at Narcotics Anonymous meetings. Although their lordships were divided 3–2 as to the result of the appeal and adopted slightly different formulations of the applicable principles, there was broad agreement that, in confidentiality cases involving issues of privacy, the focus shifted from the nature of the relationship between claimant and defendant to (a) an examination of the nature of the information itself and (b) a balancing exercise between the claimant's rights under Article 8 and the defendant's competing rights (for example, under Article 10, to free speech). It presently remains unclear to what extent and how this judge-led development of a partial law of privacy will impact on the equitable principles of confidentiality as traditionally understood.

Medical confidentiality

Confidentiality is commonly applied to conversations between doctors and patients.
Legal protections prevent physicians from revealing certain discussions with patients, even under oath in court.[14] This physician–patient privilege only applies to secrets shared between physician and patient during the course of providing medical care.[14] The rule dates back to at least the Hippocratic Oath, which reads: "Whatever, in connection with my professional service, or not in connection with it, I see or hear, in the life of men, which ought not to be spoken of abroad, I will not divulge, as reckoning that all such should be kept secret." Traditionally, medical ethics has viewed the duty of confidentiality as a relatively non-negotiable tenet of medical practice. In the UK, information about an individual's HIV status is kept confidential within the NHS. This is based in law, in the NHS Constitution and in key NHS rules and procedures. It is also outlined in every NHS employee's contract of employment and in professional standards set by regulatory bodies. The National AIDS Trust's Confidentiality in the NHS: Your Information, Your Rights[15] outlines these rights. However, there are a few limited instances when a healthcare worker can share personal information without consent if it is in the public interest. These instances are set out in guidance from the General Medical Council,[16] which is the regulatory body for doctors. Sometimes the healthcare worker has to provide the information, if required by law or in response to a court order.

Confidentiality is mandated in America by HIPAA laws, specifically the Privacy Rule, and various state laws, some more rigorous than HIPAA. However, numerous exceptions to the rules have been carved out over the years. For example, many American states require physicians to report gunshot wounds to the police and impaired drivers to the Department of Motor Vehicles. Confidentiality is also challenged in cases involving the diagnosis of a sexually transmitted disease in a patient who refuses to reveal the diagnosis to a spouse, and in the termination of a pregnancy in an underage patient, without the knowledge of the patient's parents. Many states in the U.S. have laws governing parental notification in underage abortion.[17]

Clinical and counseling psychology

The ethical principle of confidentiality requires that information shared by the client with the therapist in the course of treatment is not shared with others. This is important for the therapeutic alliance, as it promotes an environment of trust. There are important exceptions to confidentiality, namely where it conflicts with the clinician's duty to warn or duty to protect. This includes instances of suicidal behavior or homicidal plans, child abuse, elder abuse and dependent adult abuse.[18] On 26 June 2012, a judge of Oslo District Court apologized for the court's hearing of testimony (on 14 June, regarding contact with Child Welfare Services (Norway)) that was covered by confidentiality that had not been waived at that point of the trial of Anders Behring Breivik.[19]

Data integrity

Data integrity refers to maintaining and assuring the accuracy and consistency of data over its entire life-cycle,[1] and is a critical aspect to the design, implementation and usage of any system which stores, processes, or retrieves data. The term data integrity is broad in scope and may have widely different meanings depending on the specific context, even under the same general umbrella of computing. This article provides only a broad overview of some of the different types and concerns of data integrity.

Data integrity is the opposite of data corruption, which is a form of data loss. The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities), and upon later retrieval, ensure the data is the same as it was when it was originally recorded. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused with data security, the discipline of protecting data from unauthorized parties. Any unintended change to data as the result of a storage, retrieval or processing operation, including malicious intent, unexpected hardware failure, and human error, is a failure of data integrity. If the changes are the result of unauthorized access, it may also be a failure of data security. Depending on the data involved, this could manifest itself in ways as benign as a single pixel in an image appearing a different color than was originally recorded, to the loss of vacation pictures or a business-critical database, to even catastrophic loss of human life in a life-critical system.

Integrity types

Physical integrity

Physical integrity deals with challenges associated with correctly storing and fetching the data itself. Challenges with physical integrity may include electromechanical faults, design flaws, material fatigue, corrosion, power outages, natural disasters, acts of war and terrorism, and other special environmental hazards such as ionizing radiation, extreme temperatures, pressures and g-forces. Ensuring physical integrity includes methods such as redundant hardware, an uninterruptible power supply, certain types of RAID arrays, radiation-hardened chips, error-correcting memory, use of a clustered file system, using file systems that employ block-level checksums such as ZFS, storage arrays that compute parity calculations such as exclusive or or use a cryptographic hash function, and even having a watchdog timer on critical subsystems.
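The exclusive-or parity calculation mentioned above can be sketched in a few lines (a toy RAID-style example with invented block contents): the parity block is the XOR of the data blocks, so any one missing block can be rebuilt from the survivors.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together; used both to compute the
    parity block and to rebuild a missing data block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]  # hypothetical data blocks
parity = xor_blocks(data)           # stored alongside the data

# Simulate losing data[1]: XOR-ing the survivors with the parity block
# cancels them out and leaves exactly the missing block.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered)  # b'BBBB'
```

This is why such arrays survive a single device failure but not two: with two blocks missing, the XOR no longer isolates either one.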

Physical integrity often makes extensive use of error-detecting algorithms known as error-correcting codes. Human-induced data integrity errors are often detected through the use of simpler checks and algorithms, such as the Damm algorithm or Luhn algorithm. These are used to maintain data integrity after manual transcription from one computer system to another by a human intermediary (e.g. credit card or bank routing numbers). Computer-induced transcription errors can be detected through hash functions. In production systems these techniques are used together to ensure various degrees of data integrity. For example, a computer file system may be configured on a fault-tolerant RAID array, but might not provide block-level checksums to detect and prevent silent data corruption. As another example, a database management system might be compliant with the ACID properties, but the RAID controller or hard disk drive's internal write cache might not be.

Logical integrity

This type of integrity is concerned with the correctness or rationality of a piece of data, given a particular context. This includes topics such as referential integrity and entity integrity in a relational database, or correctly ignoring impossible sensor data in robotic systems. These concerns involve ensuring that the data "makes sense" given its environment. Challenges include software bugs, design flaws, and human errors. Common methods of ensuring logical integrity include check constraints, foreign key constraints, program assertions, and other run-time sanity checks. Both physical and logical integrity often share many common challenges, such as human errors and design flaws, and both must appropriately deal with concurrent requests to record and retrieve data, the latter of which is entirely a subject on its own.
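The Luhn algorithm mentioned above, used on credit-card numbers, is short enough to sketch in full: double every second digit from the right, subtract 9 from any double above 9, and require the total to be a multiple of 10. A single mistyped digit then breaks the check.

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:   # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # the classic valid test number: True
print(luhn_valid("79927398714"))  # one transcription error: False
```

Note that Luhn catches all single-digit errors and most adjacent transpositions, but it is a transcription check only, not a cryptographic integrity mechanism.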

Databases

Data integrity contains guidelines for data retention, specifying or guaranteeing the length of time data can be retained in a particular database. It specifies what can be done with data values when their validity or usefulness expires. In order to achieve data integrity, these rules are consistently and routinely applied to all data entering the system, and any relaxation of enforcement could cause errors in the data. Implementing checks on the data as close as possible to the source of input (such as human data entry) causes less erroneous data to enter the system. Strict enforcement of data integrity rules results in lower error rates, and in time saved troubleshooting and tracing erroneous data and the errors it causes. Data integrity also includes rules defining the relations a piece of data can have to other pieces of data, such as a Customer record being allowed to link to purchased Products, but not to unrelated data such as Corporate Assets. Data integrity often includes checks and correction for invalid data, based on a fixed schema or a predefined set of rules; an example is textual data entered where a date-time value is required. Rules for data derivation are also applicable, specifying how a data value is derived based on algorithm, contributors and conditions. They also specify the conditions on how the data value could be re-derived.

Types of integrity constraints

Data integrity is normally enforced in a database system by a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of the relational data model: entity integrity, referential integrity and domain integrity.

Entity integrity concerns the concept of a primary key. Entity integrity is an integrity rule which states that every table must have a primary key and that the column or columns chosen to be the primary key should be unique and not null. Referential integrity concerns the concept of a foreign key. The referential integrity rule states that any foreign-key value can only be in one of two states. The usual state of affairs is that the foreign-key value refers to a primary key value of some table in the database. Occasionally, and this will depend on the rules of the data owner, a foreign-key value can be null. In this case we are explicitly saying that either there is no relationship between the objects represented in the database or that this relationship is unknown. Domain integrity specifies that all columns in a relational database must be declared upon a defined domain. The primary unit of data in the relational data model is the data item; such data items are said to be non-decomposable or atomic. A domain is a set of values of the same type. Domains are therefore pools of values from which actual values appearing in the columns of a table are drawn. User-defined integrity refers to a set of rules specified by a user, which do not belong to the entity, domain and referential integrity categories. If a database supports these features, it is the responsibility of the database to ensure data integrity as well as the consistency model for the data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity while the database supports the consistency model for the data storage and retrieval. Having a single, well-controlled, and well-defined data-integrity system increases stability (one centralized system performs all data integrity operations), performance (all data integrity operations are performed in the same tier as the consistency model), re-usability (all applications benefit from a single centralized data integrity system) and maintainability (one centralized system for all data integrity administration).
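The three relational constraint types above can be demonstrated with SQLite from the Python standard library (the table and column names here are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
con.execute("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,                    -- entity integrity
        name TEXT NOT NULL CHECK (length(name) > 0)  -- domain integrity
    )""")
con.execute("""
    CREATE TABLE purchase (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(id)  -- referential integrity
    )""")

con.execute("INSERT INTO customer VALUES (1, 'Alice')")
con.execute("INSERT INTO purchase VALUES (10, 1)")     # refers to a real row
con.execute("INSERT INTO purchase VALUES (11, NULL)")  # relationship unknown

# The only other allowed foreign-key state is a matching primary key;
# anything else is rejected by the database itself.
try:
    con.execute("INSERT INTO purchase VALUES (12, 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The two successful inserts into `purchase` show the two legal foreign-key states described above; the failed one shows the database, not the application, enforcing the rule.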

As of 2012, since all modern databases support these features (see Comparison of relational database management systems), it has become the de facto responsibility of the database to ensure data integrity. Out-dated and legacy systems that use file systems (text, spreadsheets, ISAM, flat files, etc.) for their consistency model lack any[citation needed] kind of data-integrity model. This requires organizations to invest a large amount of time, money and personnel in building data-integrity systems on a per-application basis that needlessly duplicate the existing data integrity systems found in modern databases. Many companies, and indeed many database systems themselves, offer products and services to migrate out-dated and legacy systems to modern databases to provide these data-integrity features. This offers organizations substantial savings in time, money and resources because they do not have to develop per-application data-integrity systems that must be refactored each time the business requirements change.

Examples

An example of a data-integrity mechanism is the parent-and-child relationship of related records. If a parent record owns one or more related child records, all of the referential integrity processes are handled by the database itself, which automatically ensures the accuracy and integrity of the data, so that no child record can exist without a parent (also called being orphaned) and no parent loses its child records. It also ensures that no parent record can be deleted while the parent record owns any child records. All of this is handled at the database level and does not require coding integrity checks into each application.

File systems

Various research results show that neither widespread filesystems (including UFS, Ext, XFS, JFS and NTFS) nor hardware RAID solutions provide sufficient protection against data integrity problems.[2][3][4][5][6] Some filesystems (including Btrfs and ZFS) provide internal data and metadata checksumming, which is used for detecting silent data corruption and improving data integrity. If a corruption is detected that way and internal RAID mechanisms provided by those filesystems are also used, such filesystems can additionally reconstruct corrupted data in a transparent way.[7] This approach allows improved data integrity protection covering the entire data path, which is usually known as end-to-end data protection.[8]

Data storage

Apart from data in databases, standards exist to address the integrity of data on storage devices.[9]
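The checksumming idea used by those filesystems can be sketched in miniature (greatly simplified relative to what ZFS or Btrfs actually do; the function names are invented): store a digest alongside each block when it is written, and verify it on every read.

```python
import hashlib

def write_block(data: bytes):
    """Store a block together with its SHA-256 digest."""
    return data, hashlib.sha256(data).hexdigest()

def read_block(data: bytes, stored_digest: str) -> bytes:
    """Verify the digest on read; a mismatch means silent corruption."""
    if hashlib.sha256(data).hexdigest() != stored_digest:
        raise IOError("silent data corruption detected")
    return data

block, digest = write_block(b"important records")
assert read_block(block, digest) == b"important records"

# A single flipped bit, which the disk may never report, is caught:
corrupted = bytes([block[0] ^ 1]) + block[1:]
try:
    read_block(corrupted, digest)
except IOError as e:
    print(e)
```

Detection alone is only half the story: combined with redundancy (such as the XOR parity shown earlier), a detected-bad block can also be rebuilt transparently.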

Authentication

Authentication has relevance to multiple fields. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain person, or was produced in a certain place or period of history. In computer science, verifying a person's identity is often required to secure access to confidential data or systems. Authentication can be considered to be of three types. The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while he or she may not have evidence that every step in the supply chain was authenticated. Authority-based trust relationships (centralized) drive the majority of secured internet communication through known public certificate authorities; peer-based trust (decentralized, web of trust) is used for personal services like email or files (Pretty Good Privacy, GNU Privacy Guard), and trust is established by known individuals signing each other's keys (proof of identity), for example at key signing parties. The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist might use carbon dating to verify the age of an artifact, do a chemical analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.

Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery. In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art forger, forged the work of his father and provided a certificate for its provenance as well; see the article Jacques van Meegeren. Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught. Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify. The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury, and are also vulnerable to being separated from the artifact and lost. In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access device to allow system access. In this case, authenticity is implied but not guaranteed.

Consumer goods such as pharmaceuticals, perfume, and fashion clothing can use all three forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation (damaging the brand owner's sales and reputation). As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect against counterfeiters, including adding holograms, security rings, security threads and color-shifting ink.[1]

Factors and identity

The ways in which someone may be authenticated fall into three categories, based on what are known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity prior to being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority. Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified.[2] The three factors (classes) and some elements of each factor are: the knowledge factors, something the user knows (e.g., a password, pass phrase, or personal identification number (PIN), or a challenge response in which the user must answer a question or reproduce a pattern); the ownership factors, something the user has (e.g., a wrist band, ID card, security token, cell phone with built-in hardware token, software token, or cell phone holding a software token); and the inherence factors, something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifier).
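One common ownership-factor element from the list above, the one-time-password token, can be sketched with the HMAC-based construction in the style of RFC 4226 (a simplified illustration; real deployments add counter resynchronization and rate limiting):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226 style): each code proves
    possession of the shared secret, making it an ownership factor."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226's published test secret; combined with a PIN or password
# (a knowledge factor), this yields two-factor authentication.
print(hotp(b"12345678901234567890", 0))  # 755224
```

Time-based variants (TOTP) replace the counter with a time step, but the underlying proof of possession is the same.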

When elements representing two factors are required for authentication, the term two-factor authentication is applied — e.g., a bankcard (something the user has) and a PIN (something the user knows). Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still two-factor authentication.

Product authentication

[Figure: a security hologram label on an electronics box, used for authentication]

Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods such as electronics, music, apparel, and counterfeit medications have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used. Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting. A secure key storage device can be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature, as an authentication chip can be mechanically attached and read through a connector to the host, e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified.

Packaging

Packaging and labeling can be engineered to
help reduce the risks of counterfeit consumer goods or the theft and resale of products.[3][4] Some package constructions are more difficult to copy, and some have pilfer-indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs, RFID tags, or electronic article surveillance[5] tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:

Taggant fingerprinting - uniquely coded microscopic materials that are verified from a database
Encrypted micro-particles - unpredictably placed markings (numbers, layers and colors) not visible to the human eye
Holograms - graphics printed on seals, patches, foils or labels and used at point of sale for visual verification
Micro-printing - second-line authentication often used on currencies
Serialized barcodes
UV printing - marks only visible under UV light
Track and trace systems - use codes to link products to a database tracking system
Water indicators - become visible when contacted with water
DNA tracking - genes embedded onto labels that can be traced
Color shifting ink or film - visible marks that switch colors or texture when tilted
Tamper evident seals and tapes - destructible or graphically verifiable at point of sale
2D barcodes - data codes that can be tracked
RFID chips

Information content. The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream and poses as each of the two other communicating parties in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity. Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging — anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new
media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:

A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.
A shared secret, such as a passphrase, in the content of the message.
An electronic signature; public-key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key.

The opposite problem is detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text with different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.
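The shared-secret factor listed above can be sketched with a message authentication code: assuming both parties already hold the same passphrase (the secret below is hypothetical), a reader can verify that a tagged message came from someone who knows it.

```python
import hashlib
import hmac

# Shared secret agreed between author and reader in advance (hypothetical).
SECRET = b"our shared passphrase"

def tag_message(message: bytes) -> str:
    # Attach an HMAC tag so the reader can check the message's origin.
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify_message(message: bytes, tag: str) -> bool:
    # Recompute the tag and compare in constant time.
    return hmac.compare_digest(tag_message(message), tag)

msg = b"Meet at noon."
tag = tag_message(msg)
assert verify_message(msg, tag)                  # genuine message accepted
assert not verify_message(b"Meet at one.", tag)  # altered message rejected
```

Unlike a public-key signature, this scheme only proves origin to someone who already shares the secret; it cannot convince a third party.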

Factual verification. Determining the truth or factual accuracy of information in a message is generally considered a separate problem from authentication. A wide range of techniques, from detective work, to fact checking in journalism, to scientific experiment, might be employed.

Video authentication. It is sometimes necessary to authenticate the veracity of video recordings used as evidence in judicial proceedings. Proper chain-of-custody records and secure storage facilities can help ensure the admissibility of digital or analog recordings by the court.

Literacy and literature authentication. In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is: do you believe it? Related to that, an authentication project is a reading and writing activity in which students document the relevant research process.[6] It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent to which the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the time period.[7]

History and state-of-the-art. Historically, fingerprints have been used as
the most authoritative method of authentication, but recent court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as well, fingerprints have been shown to be easily spoofable, with British Telecom's top computer-security official noting that "few" fingerprint readers have not already been tricked by one spoof or another.[8] Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by a fingerprint inside a USB device. In a computer data context, cryptographic methods have been developed (see digital signature and challenge-response authentication) which are currently not spoofable if and only if the originator's key has not been compromised. Whether the originator (or anyone other than the attacker) knows about a compromise is irrelevant. It is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in the future. If that were to occur, it could call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.

Strong authentication

The U.S. Government's National Information Assurance Glossary defines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information. The above definition is consistent with that of the European Central Bank, as discussed in the strong authentication entry.

Authorization. Main article: Authorization

[Figure: a soldier checks a driver's identification card before allowing her to enter a military base]

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". Authorization thus presupposes authentication. For example, a client showing proper identification credentials to a bank teller is asking to be authenticated as really being the person whose identification he is showing. A client whose authentication request is approved becomes authorized to access the accounts of that account holder, but no others. Note, however, that if a stranger tries to access someone else's account with his own identification credentials, the stranger's identification credentials will still be
successfully authenticated, because they are genuine and not counterfeit; however, the stranger will not be successfully authorized to access the account, as the stranger's identification credentials had not previously been set as eligible to access the account, even though they are valid (i.e. authentic). Similarly, when someone tries to log on to a computer, they are usually first asked to identify themselves with a login name and to support that with a password. Afterwards, this combination is checked against an existing login-password validity record to check whether the combination is authentic. If so, the user becomes authenticated (i.e. the identification supplied in step 1 is valid, or authentic). Finally, a set of pre-defined permissions and restrictions for that particular login name is assigned to this user, which completes the final step, authorization. Even though authorization cannot occur without authentication, the former term is sometimes used to mean the combination of both. To distinguish "authentication" from the closely related "authorization", the shorthand notations A1 (authentication) and A2 (authorization), as well as AuthN / AuthZ (AuthR) or Au / Az, are used in some communities.[9] Delegation was traditionally considered part of the authorization domain, but authentication is now also used for various types of delegation tasks. Delegation in IT networks is a new but evolving field.[10]

Access control. Main article: Access control

One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, granting privileges established for that identity.
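The login sequence described above (authenticate first, then look up permissions) can be sketched as follows; the account store, password, and permission names are hypothetical.

```python
import hashlib

# Hypothetical account database: password hashes plus per-user permissions.
ACCOUNTS = {
    "alice": {
        "password_sha256": hashlib.sha256(b"s3cret").hexdigest(),
        "permissions": {"read_own_account", "transfer_funds"},
    },
}

def authenticate(login: str, password: str) -> bool:
    """Step 1: is this really the claimed user? (checks the credential)"""
    user = ACCOUNTS.get(login)
    return (user is not None and
            hashlib.sha256(password.encode()).hexdigest() == user["password_sha256"])

def authorize(login: str, action: str) -> bool:
    """Step 2: is this (already authenticated) user permitted to do this?"""
    return action in ACCOUNTS[login]["permissions"]

# A genuine credential authenticates, but it does not authorize every action.
assert authenticate("alice", "s3cret")
assert authorize("alice", "read_own_account")
assert not authorize("alice", "close_other_accounts")
```

This mirrors the stranger-at-the-bank example: a valid credential passes step 1, yet step 2 still denies any action outside the stored permissions.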
One such procedure involves the usage of Layer 8, which allows IT administrators to identify users, control the Internet activity of users in the network, set user-based policies, and generate reports by username. Common examples of access control involving authentication include:

Asking for photo ID when a contractor first arrives at a house to perform work
Using a CAPTCHA as a means of asserting that a user is a human being and not a computer program
Using a one-time password (OTP), received on a network-enabled device such as a mobile phone, as an authentication password/PIN
A computer program using a blind credential to authenticate to another program
Entering a country with a passport
Logging in to a computer
Using a confirmation e-mail to verify ownership of an e-mail address
Using an Internet banking system
Withdrawing cash from an ATM

In some cases, ease of access is balanced against the strictness of access checks. For example, the credit card network does not require a personal identification number for authentication of the claimed identity, and a small transaction usually does not even require a signature of the authenticated person for proof of authorization of the transaction. The security of the system is maintained by limiting distribution of credit card numbers and by the threat of punishment for fraud. Security experts argue that it is impossible to prove the identity of a computer user with absolute certainty. It is only possible to apply one or more tests which, if passed, have been previously declared to be sufficient to proceed. The problem is to determine which tests are sufficient, and many such tests are inadequate. Any given test can be spoofed one way or another, with varying degrees of difficulty. Computer security experts are now also recognising that despite extensive efforts, as a business, research and network community, we still do not have a secure understanding of the requirements for authentication in a range of circumstances. Lacking this understanding is a significant barrier to identifying optimum methods of authentication. Major questions are:

What is authentication for? Who benefits from authentication, and who is disadvantaged by authentication failures? What disadvantages can effective authentication actually guard against?

Non-repudiation refers to a state of affairs where the author of a statement will not be able to successfully challenge the authorship of the statement or the validity of an associated contract. The term is often seen in a legal setting wherein the authenticity of a signature is being challenged. In such an instance, the authenticity is being "repudiated".

In security. In a general sense, non-repudiation involves associating actions or changes with a unique individual. For a secure area, for example, it may be desirable to implement a key card access system. Non-repudiation would be violated if it were not also a strictly enforced policy to prohibit sharing of the key cards
and to immediately report lost or stolen cards. Otherwise it cannot be trivially determined who performed the action of opening the door. Similarly, for computer accounts, the individual owner of an account must not allow others to use it, for instance by giving away the account's password, and a policy should be implemented to enforce this. This prevents the owner of the account from denying actions performed by the account.[1]

In digital security. Regarding digital security, the cryptological meaning and application of non-repudiation shifts to mean:[2] a service that provides proof of the integrity and origin of data; an authentication that can be asserted to be genuine with high assurance. Proof of data integrity is typically the easiest of these requirements to accomplish. A data hash, such as SHA-2, is usually sufficient to establish that the likelihood of data being undetectably changed is extremely low. Even with this safeguard, it is still possible to tamper with data in transit, either through a man-in-the-middle attack or phishing. Because of this limitation, data integrity is best asserted when the recipient already possesses the necessary verification information. The most common method of asserting the digital origin of data is through digital certificates, a form of public key infrastructure, to which digital signatures belong. Note that the public key scheme is not used for encryption in this form; confidentiality is not achieved by signing a message with a private key (since anyone can obtain the public key to reverse the signature). Verifying the digital origin means that the certified/signed data can, with reasonable certainty, be trusted to be from somebody who possesses the private key corresponding to the signing certificate. If the key is not properly safeguarded by the original owner, digital forgery can become a major concern.
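A minimal sketch of the data-hash approach, using SHA-256 (a member of the SHA-2 family) from Python's standard hashlib; the document text is made up for illustration.

```python
import hashlib

document = b"Quarterly report: revenue up 4%."

# Publish the SHA-256 digest alongside the document, or deliver it
# over a separate, trusted channel.
digest = hashlib.sha256(document).hexdigest()

# A recipient recomputes the digest; an undetected change would require
# finding a collision, which is considered computationally infeasible.
received = b"Quarterly report: revenue up 4%."
tampered = b"Quarterly report: revenue up 9%."

assert hashlib.sha256(received).hexdigest() == digest
assert hashlib.sha256(tampered).hexdigest() != digest
```

As the text notes, a bare hash only proves integrity, not origin: anyone can hash a forged document, so the digest itself must reach the recipient through a channel the attacker cannot rewrite.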

Trusted third parties (TTPs)

The ways in which a party may attempt to repudiate a signature present a challenge to the trustworthiness of the signatures themselves. The standard approach to mitigating these risks is to involve a trusted third party. The two most common TTPs are forensic analysts and notaries. A forensic analyst specializing in handwriting can look at a signature, compare it to a known valid signature, and make a reasonable assessment of the legitimacy of the first signature. A notary provides a witness whose job is to verify the identity of an individual by checking other credentials and affixing their certification that the party signing is who they claim to be. Further, a notary provides the
extra benefit of maintaining independent logs of their transactions, complete with the type of credential checked, and another signature that can independently be verified by the preceding forensic analyst. Because of this double security, notaries are the preferred form of verification. On the digital side, the only TTP is the repository for public key certificates. This provides the recipient with the ability to verify the origin of an item even if no direct exchange of the public information has ever been made. The digital signature, however, is forensically identical in both legitimate and forged uses: if someone possesses the private key, they can create a "real" signature. The protection of the private key is the idea behind the United States Department of Defense's Common Access Card (CAC), which never allows the key to leave the card and therefore necessitates possession of the card, in addition to the personal identification number (PIN) needed to unlock the card, for permission to use it for encryption and digital signatures.

Mathematics

[Figure: Euclid (holding calipers), Greek mathematician, 3rd century BC, as imagined by Raphael in this detail from The School of Athens][1]

Mathematics (from Greek μάθημα máthēma, "knowledge, study, learning") is the study of topics such as quantity (numbers),[2] structure,[3] space,[2] and change.[4][5][6] There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics.[7][8] Mathematicians seek out patterns[9][10] and use them to formulate new conjectures. Mathematicians resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature.
Through the use of abstraction and logic, mathematics developed from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity for as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry.

Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Since the pioneering work of Giuseppe
Peano (1858–1932), David Hilbert (1862–1943), and others on axiomatic systems in the late 19th century, it has become customary to view mathematical research as establishing truth by rigorous deduction from appropriately chosen axioms and definitions. Mathematics developed at a relatively slow pace until the Renaissance, when mathematical innovations interacting with new scientific discoveries led to a rapid increase in the rate of mathematical discovery that has continued to the present day.[11] Galileo Galilei (1564–1642) said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth."[12] Carl Friedrich Gauss (1777–1855) referred to mathematics as "the Queen of the Sciences".[13] Benjamin Peirce (1809–1880) called mathematics "the science that draws necessary conclusions".[14] David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise."[15] Albert Einstein (1879–1955) stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."[16] French mathematician Claire Voisin states, "There is creative drive in mathematics, it's all about movement trying to express itself."[17] Mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics, the branch of mathematics concerned with the application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries, which has led to the development of entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered.[18]

Contents


1 History

1.1 Evolution

1.2 Etymology

2 Definitions of mathematics

2.1 Mathematics as science

3 Inspiration, pure and applied mathematics, and aesthetics

4 Notation, language, and rigor

5 Fields of mathematics

5.1 Foundations and philosophy

5.2 Pure mathematics

5.3 Applied mathematics

6 Mathematical awards

7 See also

8 Notes

9 References

10 Further reading

11 External links

History

Evolution

Main article: History of mathematics

The evolution of mathematics can be seen as an ever-increasing series of abstractions. The first abstraction, which is shared by many animals,[19] was probably that of numbers: the realization that a collection of two apples and a collection of two oranges (for example) have something in common, namely the quantity of their members.

[Figure: Greek mathematician Pythagoras (c. 570 – c. 495 BC), commonly credited with discovering the Pythagorean theorem]

[Figure: Mayan numerals]

As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may have also recognized how to count abstract quantities, like time – days, seasons, years.[20]

More complex mathematics did not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra and geometry for taxation and other financial calculations, for building and construction, and for astronomy.[21] The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns and the recording of time.

In Babylonian mathematics, elementary arithmetic (addition, subtraction, multiplication and division) first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse, with the first known written numerals created by Egyptians in Middle Kingdom texts such as the Rhind Mathematical Papyrus.

Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right with Greek mathematics.[22]

Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year.
The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."[23]

Etymology

The word mathematics comes from the Greek μάθημα (máthēma), which, in the ancient Greek language, means "that which is learnt",[24] "what one gets to know", hence also "study" and "science", and in modern Greek just "lesson". The word máthēma is derived from μανθάνω (manthano), while the modern Greek equivalent is μαθαίνω (mathaino), both of which mean "to learn". In Greece, the word for "mathematics" came to have the narrower and more technical meaning "mathematical study" even in Classical times.[25] Its adjective is μαθηματικός (mathēmatikós), meaning "related to learning" or "studious", which likewise further came to mean "mathematical". In particular, μαθηματικὴ τέχνη (mathēmatikḗ tékhnē), Latin: ars mathematica, meant "the mathematical art".

In Latin, and in English until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This has resulted in several mistranslations: a particularly notorious one is Saint Augustine's warning that Christians should beware of mathematici, meaning astrologers, which is sometimes mistranslated as a condemnation of mathematicians.[26]

The apparent plural form in English, like the French plural form les mathématiques (and the less commonly used singular derivative la mathématique), goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural τα μαθηματικά (ta mathēmatiká), used by Aristotle (384–322 BC), and meaning roughly "all things mathematical"; although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, which were inherited from the Greek.[27] In English, the noun
mathematics takes singular verb forms. It is often shortened to maths or, in English-speaking North America, math.[28]

Definitions of mathematics

Main article: Definitions of mathematics

[Figure: Leonardo Fibonacci, the Italian mathematician who introduced the Hindu–Arabic numeral system to the Western world]

Aristotle defined mathematics as "the science of quantity", and this definition prevailed until the 18th century.[29] Starting in the 19th century, when the study of mathematics increased in rigor and began to address abstract topics such as group theory and projective geometry, which have no clear-cut relation to quantity and measurement, mathematicians and philosophers began to propose a variety of new definitions.[30] Some of these definitions emphasize the deductive character of much of mathematics, some emphasize its abstractness, some emphasize certain topics within mathematics. Today, no consensus on the definition of mathematics prevails, even among professionals.[7] There is not even consensus on whether mathematics is an art or a science.[8] A great many professional mathematicians take no interest in a definition of mathematics, or consider it undefinable.[7] Some just say, "Mathematics is what mathematicians do."[7]

Three leading types of definition of mathematics are called logicist, intuitionist, and formalist, each reflecting a different philosophical school of thought.[31] All have severe problems, none has widespread acceptance, and no reconciliation seems possible.[31]

An early definition of mathematics in terms of logic was Benjamin Peirce's "the science that draws necessary conclusions" (1870).[32] In the Principia Mathematica, Bertrand Russell and Alfred North Whitehead advanced the philosophical program known as logicism, and attempted to prove that all mathematical concepts, statements, and principles can be defined and proven entirely in
terms of symbolic logic. A logicist definition of mathematics is Russell's "All Mathematics is Symbolic Logic" (1903).[33]

Intuitionist definitions, developing from the philosophy of mathematician L.E.J. Brouwer, identify mathematics with certain mental phenomena. An example of an intuitionist definition is "Mathematics is the mental activity which consists in carrying out constructs one after the other."[31] A peculiarity of intuitionism is that it rejects some mathematical ideas considered valid according to other definitions. In particular, while other philosophies of mathematics allow objects that can be proven to exist even though they cannot be constructed, intuitionism allows only mathematical objects that one can actually construct.

Formalist definitions identify mathematics with its symbols and the rules for operating on them. Haskell Curry defined mathematics simply as "the science of formal systems".[34] A formal system is a set of symbols, or tokens, and some rules telling how the tokens may be combined into formulas. In formal systems, the word axiom has a special meaning, different from the ordinary meaning of "a self-evident truth". In formal systems, an axiom is a combination of tokens that is included in a given formal system without needing to be derived using the rules of the system.
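As an illustration of these definitions, the following sketch defines a deliberately tiny formal system (entirely hypothetical, not one from the literature): a single axiom "a", taken without derivation, and two rewrite rules, from which theorems are enumerated mechanically.

```python
from collections import deque

# A toy formal system: tokens "a" and "b"; axiom "a";
# rule 1: from a theorem x, derive x + "b";
# rule 2: from a theorem x, derive "a" + x.
AXIOM = "a"

def derive(formula: str):
    """Apply each rewrite rule to an existing theorem."""
    yield formula + "b"
    yield "a" + formula

def theorems(limit: int):
    """Breadth-first enumeration of the first `limit` derivable formulas."""
    seen, queue, out = {AXIOM}, deque([AXIOM]), [AXIOM]
    while queue and len(out) < limit:
        for new in derive(queue.popleft()):
            if new not in seen:
                seen.add(new)
                queue.append(new)
                out.append(new)
    return out[:limit]

print(theorems(5))  # ['a', 'ab', 'aa', 'abb', 'aab']
```

The point is the formalist one: "theoremhood" here is purely a matter of which token strings the rules can reach from the axiom; the symbols carry no interpretation at all.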

Mathematics as science

[Figure: Carl Friedrich Gauss, known as the prince of mathematicians]

Gauss referred to mathematics as "the Queen of the Sciences".[13] In the original Latin Regina Scientiarum, as well as in German Königin der Wissenschaften, the word corresponding to science means a "field of knowledge", and this was the original meaning of "science" in English, also; mathematics is in this sense a field of knowledge. The specialization restricting the meaning of "science" to natural science follows the rise of Baconian science, which contrasted "natural science" to scholasticism, the Aristotelean method of inquiring from first principles. The role of empirical
experimentation and observation is negligible in mathematics, compared to natural sciences such as psychology, biology, or physics. Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."[16] More recently, Marcus du Sautoy has called mathematics "the Queen of Science ... the main driving force behind scientific discovery".[35]

Many philosophers believe that mathematics is not experimentally falsifiable, and thus not a science according to the definition of Karl Popper.[36] However, in the 1930s Gödel's incompleteness theorems convinced many mathematicians that mathematics cannot be reduced to logic alone, and Karl Popper concluded that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently."[37] Other thinkers, notably Imre Lakatos, have applied a version of falsificationism to mathematics itself.

An alternative view is that certain scientific fields (such as theoretical physics) are mathematics with axioms that are intended to correspond to reality. The theoretical physicist J.M. Ziman proposed that science is public knowledge, and thus includes mathematics.[38] Mathematics shares much in common with many fields in the physical sciences, notably the exploration of the logical consequences of assumptions. Intuition and experimentation also play a role in the formulation of conjectures in both mathematics and the (other) sciences. Experimental mathematics continues to grow in importance within mathematics, and computation and simulation are playing an increasing role in both the sciences and mathematics.

The opinions of mathematicians on this matter are varied. Many mathematicians[who?] feel that to call their area a science is to downplay the importance of its aesthetic side, and its history in the traditional seven liberal arts; others[who?] feel that to ignore its connection to the sciences is to turn a blind eye to the fact that the interface between mathematics and its applications in science and engineering has driven much development in mathematics. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematics is created (as in art) or discovered (as in science). It is common to see universities divided into sections that include a division of Science and Mathematics, indicating that the fields are seen as being allied but that they do not coincide. In practice, mathematicians are typically grouped with scientists at the gross level but separated at finer levels. This is one of many issues considered in the philosophy of mathematics.[citation needed]

Inspiration, pure and applied mathematics, and aesthetics

Main article: Mathematical beauty

Isaac Newton

Gottfried Wilhelm von Leibniz

Isaac Newton (left) and Gottfried Wilhelm Leibniz (right), developers of infinitesimal calculus

Mathematics arises from many different kinds of problems. At first these were found in commerce, land measurement, architecture and later astronomy; today, all sciences suggest problems studied by mathematicians, and many problems arise within mathematics itself. For example, the physicist Richard Feynman invented the path integral formulation of quantum mechanics using a combination of mathematical reasoning and physical insight, and today's string theory, a still-developing scientific theory which attempts to unify the four fundamental forces of nature, continues to inspire new mathematics.[39]

Some mathematics is relevant only in the area that inspired it, and is applied to solve further problems in that area. But often mathematics inspired by one area proves useful in many areas, and joins the general stock of mathematical concepts. A distinction is often made between pure mathematics and applied mathematics. However, pure mathematics topics often turn out to have applications, e.g. number theory in cryptography. This remarkable fact, that even the "purest" mathematics often turns out to have practical applications, is what Eugene Wigner has called "the unreasonable effectiveness of mathematics".[40] As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: there are now hundreds of specialized areas in mathematics and the latest Mathematics Subject Classification runs to 46 pages.[41] Several areas of applied mathematics have merged with related traditions outside of mathematics and become disciplines in their own right, including statistics, operations research, and computer science.

For those who are mathematically inclined, there is often a definite aesthetic aspect to much of mathematics. Many mathematicians talk about the elegance of mathematics, its intrinsic aesthetics and inner beauty. Simplicity and generality are valued. There is beauty in a simple and elegant proof, such as Euclid's proof that there are infinitely many prime numbers, and in an elegant numerical method that speeds calculation, such as the fast Fourier transform. G.H. Hardy in A Mathematician's Apology expressed the belief that these aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He identified criteria such as significance, unexpectedness, inevitability, and economy as factors that contribute to a mathematical aesthetic.[42] Mathematicians often strive to find proofs that are particularly elegant, proofs from "The Book" of God according to Paul Erdős.[43][44] The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.
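Euclid's proof is constructive enough to run. The sketch below is an illustration of mine, not from the source: given any finite list of primes, it builds a prime outside the list, which is the heart of the argument that the primes are infinite.

```python
# Illustrative sketch, not from the source: Euclid's argument that the
# primes cannot be exhausted. For primes p1..pk, the number
# N = p1*...*pk + 1 leaves remainder 1 on division by each pi, so any
# prime factor of N lies outside the original list.
from math import prod

def smallest_prime_factor(n: int) -> int:
    """Smallest prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime_outside(primes: list[int]) -> int:
    """Produce a prime that is not in the given finite list of primes."""
    n = prod(primes) + 1
    p = smallest_prime_factor(n)
    assert all(p != q for q in primes)  # the heart of Euclid's argument
    return p

print(new_prime_outside([2, 3, 5]))  # 2*3*5 + 1 = 31, a prime not in the list
```

Note that N itself need not be prime; the proof only needs one of its prime factors, which the code extracts.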

Notation, language, and rigor

Main article: Mathematical notation

Leonhard Euler, who created and popularized much of the mathematical notation used today

Most of the mathematical notation in use today was not invented until the 16th century.[45] Before that, mathematics was written out in words, a painstaking process that limited mathematical discovery.[46] Euler (1707–1783) was responsible for many of the notations in use today. Modern notation makes mathematics much easier for the professional, but beginners often find it daunting. It is extremely compressed: a few symbols contain a great deal of information. Like musical notation, modern mathematical notation has a strict syntax (which to a limited extent varies from author to author and from discipline to discipline) and encodes information that would be difficult to write in any other way.

Mathematical language can be difficult to understand for beginners. Words such as or and only have more precise meanings than in everyday speech. Moreover, words such as open and field have been given specialized mathematical meanings. Technical terms such as homeomorphism and integrable have precise meanings in mathematics. Additionally, shorthand phrases such as iff for "if and only if" belong to mathematical jargon. There is a reason for special notation and technical vocabulary: mathematics requires more precision than everyday speech. Mathematicians refer to this precision of language and logic as "rigor".

Mathematical proof is fundamentally a matter of rigor. Mathematicians want their theorems to follow from axioms by means of systematic reasoning. This is to avoid mistaken "theorems", based on fallible intuitions, of which many instances have occurred in the history of the subject.[47] The level of rigor expected in mathematics has varied over time: the Greeks expected detailed arguments, but at the time of Isaac Newton the methods employed were less rigorous. Problems inherent in the definitions used by Newton would lead to a resurgence of careful analysis and formal proof in the 19th century. Misunderstanding the rigor is a cause for some of the common misconceptions of mathematics. Today, mathematicians continue to argue among themselves about computer-assisted proofs. Since large computations are hard to verify, such proofs may not be sufficiently rigorous.[48]

Axioms in traditional thought were "self-evident truths", but that conception is problematic.[49] At a formal level, an axiom is just a string of symbols, which has an intrinsic meaning only in the context of all derivable formulas of an axiomatic system. It was the goal of Hilbert's program to put all of mathematics on a firm axiomatic basis, but according to Gödel's incompleteness theorem every (sufficiently powerful) axiomatic system has undecidable formulas; and so a final axiomatization of mathematics is impossible. Nonetheless mathematics is often imagined to be (as far as its formal content) nothing but set theory in some axiomatization, in the sense that every mathematical statement or proof could be cast into formulas within set theory.[50]

Fields of mathematics

See also: Areas of mathematics and Glossary of areas of mathematics

An abacus, a simple calculating tool used since ancient times

Mathematics can, broadly speaking, be subdivided into the study of quantity, structure, space, and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these main concerns, there are also subdivisions dedicated to exploring links from the heart of mathematics to other fields: to logic, to set theory (foundations), to the empirical mathematics of the various sciences (applied mathematics), and more recently to the rigorous study of uncertainty. While some areas might seem unrelated, the Langlands program has found connections between areas previously thought unconnected, such as Galois groups, Riemann surfaces and number theory.

Foundations and philosophy

In order to clarify the foundations of mathematics, the fields of mathematical logic and set theory were developed. Mathematical logic includes the mathematical study of logic and the applications of formal logic to other areas of mathematics; set theory is the branch of mathematics that studies sets or collections of objects. Category theory, which deals in an abstract way with mathematical structures and relationships between them, is still in development.


The phrase "crisis of foundations" describes the search for a rigorous foundation for mathematics that took place from approximately 1900 to 1930.[51] Some disagreement about the foundations of mathematics continues to the present day. The crisis of foundations was stimulated by a number of controversies at the time, including the controversy over Cantor's set theory and the Brouwer–Hilbert controversy.

Mathematical logic is concerned with setting mathematics within a rigorous axiomatic framework, and studying the implications of such a framework. As such, it is home to Gödel's incompleteness theorems which (informally) imply that any effective formal system that contains basic arithmetic, if sound (meaning that all theorems that can be proven are true), is necessarily incomplete (meaning that there are true theorems which cannot be proved in that system). Whatever finite collection of number-theoretical axioms is taken as a foundation, Gödel showed how to construct a formal statement that is a true number-theoretical fact, but which does not follow from those axioms. Therefore, no formal system is a complete axiomatization of full number theory. Modern logic is divided into recursion theory, model theory, and proof theory, and is closely linked to theoretical computer science,[citation needed] as well as to category theory.

Theoretical computer science includes computability theory, computational complexity theory, and information theory. Computability theory examines the limitations of various theoretical models of the computer, including the most well-known model – the Turing machine. Complexity theory is the study of tractability by computer; some problems, although theoretically solvable by computer, are so expensive in terms of time or space that solving them is likely to remain practically unfeasible, even with the rapid advancement of computer hardware. A famous problem is the "P = NP?" problem, one of the Millennium Prize Problems.[52] Finally, information theory is concerned with the amount of data that can be stored on a given medium, and hence deals with concepts such as compression and entropy.
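The entropy concept can be made concrete in a few lines. The sketch below is my own illustration (the function name is not from the source): Shannon entropy gives the lower bound, in bits per symbol, on losslessly compressing a sequence with the given symbol frequencies.

```python
# Hedged sketch, my own illustration: Shannon entropy,
# H = -sum(p_i * log2(p_i)) over the symbol frequencies p_i,
# the information-theoretic floor on lossless compression.
from collections import Counter
from math import log2

def entropy_bits_per_symbol(data: str) -> float:
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(entropy_bits_per_symbol("0101"))       # 1.0: a fair-coin sequence costs 1 bit/symbol
print(abs(entropy_bits_per_symbol("aaaa")))  # 0.0: a constant string carries no information
```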



Mathematical logic Set theory Category theory Theory of computation

Pure mathematics

Quantity

The study of quantity starts with numbers, first the familiar natural numbers and integers ("whole numbers") and arithmetical operations on them, which are characterized in arithmetic. The deeper properties of integers are studied in number theory, from which come such popular results as Fermat's Last Theorem. The twin prime conjecture and Goldbach's conjecture are two unsolved problems in number theory.
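Open conjectures like Goldbach's can still be checked by machine for small cases. The sketch below is my own illustration, and finite verification is of course evidence rather than proof, which is why the conjecture remains open.

```python
# Illustrative sketch, not from the source: machine-checking Goldbach's
# conjecture (every even n > 2 is a sum of two primes) for small n.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 up to 998 has a Goldbach decomposition.
assert all(goldbach_pair(n) is not None for n in range(4, 1000, 2))
print(goldbach_pair(100))  # (3, 97)
```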

As the number system is further developed, the integers are recognized as a subset of the rational numbers ("fractions"). These, in turn, are contained within the real numbers, which are used to represent continuous quantities. Real numbers are generalized to complex numbers. These are the first steps of a hierarchy of numbers that goes on to include quaternions and octonions. Consideration of the natural numbers also leads to the transfinite numbers, which formalize the concept of "infinity". Another area of study is size, which leads to the cardinal numbers and then to another conception of infinity: the aleph numbers, which allow meaningful comparison of the size of infinitely large sets.

1, 2, 3, …    …, -2, -1, 0, 1, 2, …    -2, 2/3, 1.21    -e, √2, 3, π    2, i, -2+3i, 2e^{i·4π/3}


Natural numbers Integers Rational numbers Real numbers Complex numbers

Structure

Many mathematical objects, such as sets of numbers and functions, exhibit internal structure as a consequence of operations or relations that are defined on the set. Mathematics then studies properties of those sets that can be expressed in terms of that structure; for instance number theory studies properties of the set of integers that can be expressed in terms of arithmetic operations. Moreover, it frequently happens that different such structured sets (or structures) exhibit similar properties, which makes it possible, by a further step of abstraction, to state axioms for a class of structures, and then study at once the whole class of structures satisfying these axioms. Thus one can study groups, rings, fields and other abstract systems; together such studies (for structures defined by algebraic operations) constitute the domain of abstract algebra.
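Axioms for a class of structures can themselves be checked mechanically on finite examples. The sketch below (helper names are mine, not from the source) tests the group axioms on the integers modulo 5:

```python
# Hedged sketch, my own illustration: verifying the group axioms
# (identity, closure, associativity, inverses) by brute force on a
# finite set with a binary operation.
from itertools import product

def is_group(elements, op) -> bool:
    elements = set(elements)
    # find a two-sided identity element, if any
    e = next((x for x in elements
              if all(op(x, a) == a and op(a, x) == a for a in elements)), None)
    if e is None:
        return False
    closed = all(op(a, b) in elements for a, b in product(elements, repeat=2))
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elements, repeat=3))
    inverses = all(any(op(a, b) == e for b in elements) for a in elements)
    return closed and assoc and inverses

Z5 = range(5)
print(is_group(Z5, lambda a, b: (a + b) % 5))  # True: (Z_5, +) is a group
print(is_group(Z5, lambda a, b: (a * b) % 5))  # False: 0 has no multiplicative inverse
```

Dropping 0 repairs the second example: {1, 2, 3, 4} under multiplication modulo 5 does satisfy all four axioms.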

By its great generality, abstract algebra can often be applied to seemingly unrelated problems; for instance a number of ancient problems concerning compass and straightedge constructions were finally solved using Galois theory, which involves field theory and group theory. Another example of an algebraic theory is linear algebra, which is the general study of vector spaces, whose elements called vectors have both quantity and direction, and can be used to model (relations between) points in space. This is one example of the phenomenon that the originally unrelated areas of geometry and algebra have very strong interactions in modern mathematics. Combinatorics studies ways of enumerating the number of objects that fit a given structure.

[Illustrations: the six permutations of three elements, (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), (3,2,1); an elliptic curve; a Rubik's cube; a group diagram of D6; the lattice of the divisibility of 60; a braid-modular-group cover]

Combinatorics Number theory Group theory Graph theory Order theory Algebra

Space

The study of space originates with geometry – in particular, Euclidean geometry. Trigonometry is the branch of mathematics that deals with relationships between the sides and the angles of triangles and with the trigonometric functions; it combines space and numbers, and encompasses the well-known Pythagorean theorem. The modern study of space generalizes these ideas to include higher-dimensional geometry, non-Euclidean geometries (which play a central role in general relativity) and topology. Quantity and space both play a role in analytic geometry, differential geometry, and algebraic geometry. Convex and discrete geometry were developed to solve problems in number theory and functional analysis but now are pursued with an eye on applications in optimization and computer science. Within differential geometry are the concepts of fiber bundles and calculus on manifolds, in particular, vector and tensor calculus. Within algebraic geometry is the description of geometric objects as solution sets of polynomial equations, combining the concepts of quantity and space, and also the study of topological groups, which combine structure and space. Lie groups are used to study space, structure, and change. Topology in all its many ramifications may have been the greatest growth area in 20th-century mathematics; it includes point-set topology, set-theoretic topology, algebraic topology and differential topology. In particular, instances of modern-day topology are metrizability theory, axiomatic set theory, homotopy theory, and Morse theory. Topology also includes the now solved Poincaré conjecture, and the still unsolved areas of the Hodge conjecture. Other results in geometry and topology, including the four color theorem and Kepler conjecture, have been proved only with the help of computers.



Geometry Trigonometry Differential geometry Topology Fractal geometry Measure theory

Change

Understanding and describing change is a common theme in the natural sciences, and calculus was developed as a powerful tool to investigate it. Functions arise here, as a central concept describing a changing quantity. The rigorous study of real numbers and functions of a real variable is known as real analysis, with complex analysis the equivalent field for the complex numbers. Functional analysis focuses attention on (typically infinite-dimensional) spaces of functions. One of many applications of functional analysis is quantum mechanics. Many problems lead naturally to relationships between a quantity and its rate of change, and these are studied as differential equations. Many phenomena in nature can be described by dynamical systems; chaos theory makes precise the ways in which many of these systems exhibit unpredictable yet still deterministic behavior.
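A minimal example of a deterministic yet unpredictable system is the logistic map x_{n+1} = r * x_n * (1 - x_n). The sketch below is my own; the parameter values are illustrative, not from the source.

```python
# Hedged sketch, parameter choices are mine: the logistic map, a
# one-line dynamical system that is fully deterministic yet chaotic
# at r = 4.
def trajectory(x0: float, r: float, steps: int) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# At r = 2 the map settles onto the fixed point 1 - 1/r = 0.5: predictable.
print(trajectory(0.2, 2.0, 50))  # 0.5

# At r = 4, two starting points differing by one part in a billion end
# up decorrelated: sensitive dependence on initial conditions.
a = trajectory(0.2, 4.0, 50)
b = trajectory(0.2 + 1e-9, 4.0, 50)
print(a, b)  # both remain in [0, 1], but bear no resemblance to each other
```

Re-running the same start always reproduces the same trajectory, which is exactly the "unpredictable yet still deterministic" behavior the text describes.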

Calculus Vector calculus Differential equations Dynamical systems Chaos theory Complex analysis

Applied mathematics

Applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry. Thus, "applied mathematics" is a mathematical science with specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems; as a profession focused on practical problems, applied mathematics focuses on the "formulation, study, and use of mathematical models" in science, engineering, and other areas of mathematical practice.

In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where mathematics is developed primarily for its own sake. Thus, the activity of applied mathematics is vitally connected with research in pure mathematics.

Statistics and other decision sciences

Applied mathematics has significant overlap with the discipline of statistics, whose theory is formulated mathematically, especially with probability theory. Statisticians (working as part of a research project) "create data that makes sense" with random sampling and with randomized experiments;[53] the design of a statistical sample or experiment specifies the analysis of the data (before the data become available). When reconsidering data from experiments and samples or when analyzing data from observational studies, statisticians "make sense of the data" using the art of modelling and the theory of inference – with model selection and estimation; the estimated models and consequential predictions should be tested on new data.[54]

Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints: For example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence.[55] Because of its use of optimization, the mathematical theory of statistics shares concerns with other decision sciences, such as operations research, control theory, and mathematical economics.[56]

Computational mathematics

Computational mathematics proposes and studies methods for solving mathematical problems that are typically too large for human numerical capacity. Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis includes the study of approximation and discretization broadly with special concern for rounding errors. Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic matrix and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation.
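As a small, concrete instance of approximation and its error, the composite trapezoidal rule below is a standard textbook method; the function name is mine, and the sketch is illustrative rather than from the source.

```python
# Hedged sketch, my own illustration: the composite trapezoidal rule.
# Its discretization error is O(h^2), so halving the step size cuts
# the error by roughly a factor of four.
def trapezoid(f, a: float, b: float, n: int) -> float:
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))  # endpoints weighted by 1/2
    for i in range(1, n):
        total += f(a + i * h)    # interior points weighted by 1
    return h * total

# The integral of x^2 over [0, 1] is exactly 1/3.
err_100 = abs(trapezoid(lambda x: x * x, 0.0, 1.0, 100) - 1 / 3)
err_200 = abs(trapezoid(lambda x: x * x, 0.0, 1.0, 200) - 1 / 3)
print(err_100, err_200, err_100 / err_200)  # error ratio close to 4
```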


Mathematical physics Fluid dynamics Numerical analysis Optimization Probability theory Statistics Cryptography


Mathematical finance Game theory Mathematical biology Mathematical chemistry Mathematical economics Control theory

Mathematical awards

Arguably the most prestigious award in mathematics is the Fields Medal,[57][58] established in 1936 and now awarded every four years. The Fields Medal is often considered a mathematical equivalent to the Nobel Prize.

The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement, and another major international award, the Abel Prize, was introduced in 2003. The Chern Medal was introduced in 2010 to recognize lifetime achievement. These accolades are awarded in recognition of a particular body of work, which may be innovational, or provide a solution to an outstanding problem in an established field.

A famous list of 23 open problems, called "Hilbert's problems", was compiled in 1900 by German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least nine of the problems have now been solved. A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. A solution to each of these problems carries a $1 million reward, and only one (the Riemann hypothesis) is duplicated in Hilbert's problems.

Computer science is the scientific and practical approach to computation and its applications. It is the systematic study of the feasibility, structure, expression, and mechanization of the methodical procedures (or algorithms) that underlie the acquisition, representation, processing, storage, communication of, and access to information, whether such information is encoded as bits in a computer memory or transcribed in genes and protein structures in a biological cell.[1] An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems.[2]

Its subfields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory (which explores the fundamental properties of computational problems, both tractable and intractable), are highly abstract, while fields such as computer graphics emphasize real-world visual applications. Still other fields focus on the challenges in implementing computation. For example, programming language theory considers various approaches to the description of computation, while the study of computer programming itself investigates various aspects of the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers and computations useful, usable, and universally accessible to humans.

Contents

1 History

1.1 Contributions

2 Philosophy

2.1 Name of the field

3 Areas of computer science

3.1 Theoretical computer science

3.1.1 Theory of computation

3.1.2 Information and coding theory

3.1.3 Algorithms and data structures

3.1.4 Programming language theory

3.1.5 Formal methods

3.2 Applied computer science

3.2.1 Artificial intelligence

3.2.2 Computer architecture and engineering

3.2.3 Computer performance analysis

3.2.4 Computer graphics and visualization

3.2.5 Computer security and cryptography

3.2.6 Computational science

3.2.7 Computer networks


3.2.8 Concurrent, parallel and distributed systems

3.2.9 Databases

3.2.10 Software engineering

4 The great insights of computer science

5 Academia

6 Education

7 See also

8 Notes

9 References

10 Further reading

11 External links

History

Main article: History of computer science

Charles Babbage is credited with inventing the first mechanical computer.

Ada Lovelace is credited with writing the first algorithm intended for processing on a computer.

The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. The ancient Sanskrit treatise Shulba Sutras, or "Rules of the Chord", is a book of algorithms written in 800 BCE for constructing geometric objects like altars using a peg and chord, an early precursor of the modern field of computational geometry.


Blaise Pascal designed and constructed the first working mechanical calculator, Pascal's calculator, in 1642.[3] In 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the 'Stepped Reckoner'.[4] He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry[note 1] when he released his simplified arithmometer, which was the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his difference engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine.[5] He started developing this machine in 1834 and "in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom",[6] making it infinitely programmable.[note 2] In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first computer program.[7] Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business,[8] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".[9]
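Lovelace's Note G program is not reproduced here, but the Bernoulli numbers it computed can be obtained from the standard recurrence. The sketch below is modern Python of my own, not her actual program, and uses the modern convention B_1 = -1/2 with exact rational arithmetic.

```python
# Hedged sketch, not Lovelace's actual Note G program: Bernoulli
# numbers from the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0,
# solved for B_m at each step, with exact rationals.
from fractions import Fraction
from math import comb

def bernoulli(m: int) -> Fraction:
    B = [Fraction(1)]  # B_0 = 1
    for k in range(1, m + 1):
        acc = sum(comb(k + 1, j) * B[j] for j in range(k))
        B.append(-acc / (k + 1))  # solve the recurrence for B_k
    return B[m]

print(bernoulli(2))  # 1/6
print(bernoulli(4))  # -1/30
```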

During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.[10] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[11][12] The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.[13] Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.

Although many initially believed it was impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population.[14][15] It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704[16] and later the IBM 709[17] computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating ... if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again".[14] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.[15]

Time has seen significant improvements in the usability and effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base. Initially, computers were quite costly, and some degree of human aid was needed for efficient use - in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage.

Contributions

The German military used the Enigma machine (shown here) during World War II for communications they wanted kept secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[18]


Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society - in fact, along with electronics, it is a founding science of the current epoch of human history called the Information Age and a driver of the Information Revolution, seen as the third major leap in human technological progress after the Industrial Revolution (1750-1850 CE) and the Agricultural Revolution (8000-5000 BCE).

These contributions include:

The start of the "digital revolution", which includes the current Information Age and the Internet.[19]

A formal definition of computation and computability, and proof that there are computationally unsolvable and intractable problems.[20]

The concept of a programming language, a tool for the precise expression of methodological information at various levels of abstraction.[21]

In cryptography, breaking the Enigma code was an important factor contributing to the Allied victory in World War II.[18]

Scientific computing enabled practical evaluation of processes and situations of great complexity, as well as experimentation entirely by software. It also enabled advanced study of the mind, and mapping of the human genome became possible with the Human Genome Project.[19] Distributed computing projects such as Folding@home explore protein folding.

Algorithmic trading has increased the efficiency and liquidity of financial markets by using artificial intelligence, machine learning, and other statistical and numerical techniques on a large scale.[22] High frequency algorithmic trading can also exacerbate volatility.[23]

Computer graphics and computer-generated imagery have become ubiquitous in modern entertainment, particularly in television, cinema, advertising, animation and video games. Even films that feature no explicit CGI are usually "filmed" now on digital cameras, or edited or postprocessed using a digital video editor.

Simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.

Artificial intelligence is becoming increasingly important as it gets more efficient and complex. There are many applications of AI, some of which can be seen at home, such as robotic vacuum cleaners. It is also present in video games and on the modern battlefield in drones, anti-missile systems, and squad support robots.

Philosophy

Main article: Philosophy of computer science

A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics.[24] Peter Denning's working group argued that they are theory, abstraction (modeling), and design.[25] Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).[26]

Name of the field

Although first proposed in 1956,[15] the term "computer science" appears in a 1959 article in Communications of the ACM,[27] in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921,[28] justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.[27] His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such programs, starting with Purdue in 1962.[29] Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed.[30] Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy,[31] to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a distinct field of data analysis, including statistics and databases.

Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM – turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[32] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[33] The term computics has also been suggested.[34] In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italy, The Netherlands), informática (Spain, Portugal), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh).[35]

A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes."[note 3] The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, biology, statistics, and logic.

Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[11] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.[15]

The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined.[36] David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[37]

The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.

Areas of computer science

Further information: Outline of computer science

As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[38][39] CSAB, formerly called Computing Sciences Accreditation Board – which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE-CS)[40] – identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and telecommunications, database systems, parallel computation, distributed computation, computer-human interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[38]

Theoretical computer science

Main article: Theoretical computer science

The broader field of theoretical computer science encompasses both the classical theory of computation and a wide range of other topics that focus on the more abstract, logical, and mathematical aspects of computing.

Theory of computation

Main article: Theory of computation

According to Peter J. Denning, the fundamental question underlying computer science is, "What can be (efficiently) automated?"[11] The study of the theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.

The famous "P=NP?" problem, one of the Millennium Prize Problems,[41] is an open problem in the theory of computation.
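The asymmetry behind "P=NP?" can be made concrete with a toy example: for an NP problem such as subset sum, a proposed solution (a "certificate") can be verified in polynomial time, while the only obvious way to find one is exhaustive search over exponentially many subsets. The function names below are illustrative, not from any standard library:

```python
from itertools import combinations

def verify_subset_sum(nums, target, certificate):
    """Checking a proposed solution takes polynomial time."""
    return all(x in nums for x in certificate) and sum(certificate) == target

def find_subset_sum(nums, target):
    """Finding a solution by brute force may examine up to 2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 9, 8, 4, 5, 7]
cert = find_subset_sum(nums, 15)           # exponential-time search
print(verify_subset_sum(nums, 15, cert))   # polynomial-time check -> True
```

Whether every problem whose solutions are quickly verifiable is also quickly solvable is exactly the open question.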

Automata theory, Computability theory, Computational complexity theory, Cryptography, Quantum computing theory

Information and coding theory

Main articles: Information theory and Coding theory

Information theory is related to the quantification of information. This was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.[42] Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
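A minimal sketch of the error-detection idea: a single even-parity bit, the simplest error-detecting code. This is an illustrative toy, not a code used in practice:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(codeword):
    """Detect any single-bit error: the total number of 1s must stay even."""
    return sum(codeword) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
assert check_parity(word)
word[2] ^= 1                      # flip one bit "in transit"
assert not check_parity(word)     # the error is detected
```

Real codes such as Hamming or Reed-Solomon extend this idea to locate and correct errors, not merely detect them.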

Algorithms and data structures

Algorithms and data structures is the study of commonly used computational methods and their computational efficiency.
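The efficiency question can be made concrete by counting comparisons: searching a sorted list element by element versus by repeated halving. The helper names here are illustrative:

```python
def linear_search(items, target):
    """O(n): scan every element; returns (index, comparisons made)."""
    for steps, value in enumerate(items, start=1):
        if value == target:
            return steps - 1, steps
    return -1, len(items)

def binary_search(items, target):
    """O(log n) on sorted input: halve the search range each step."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1024))
_, linear_steps = linear_search(data, 1000)
_, binary_steps = binary_search(data, 1000)
print(linear_steps, binary_steps)  # 1001 comparisons vs at most 11
```

The gap widens rapidly: for a million elements, binary search needs at most about 20 comparisons.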

Analysis of algorithms, Algorithms, Data structures, Combinatorial optimization, Computational geometry

Programming language theory

Main article: Programming language theory

Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering and linguistics. It is an active research area, with numerous dedicated academic journals.

Type theory, Compiler design, Programming languages

Formal methods

Main article: Formal methods

Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
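In the spirit of verification against a specification, here is a bounded sketch: exhaustively checking that an implementation satisfies its spec over a finite domain. Real formal-methods tools (model checkers, proof assistants) reason over unbounded state spaces, but the flavor is similar; `spec_abs` and `impl_abs` are invented names:

```python
def spec_abs(x):
    """Specification: the result is non-negative and equals x or -x."""
    return lambda result: result >= 0 and result in (x, -x)

def impl_abs(x):
    """Implementation under verification."""
    return x if x >= 0 else -x

# Bounded verification: exhaustively check the spec on a finite domain.
assert all(spec_abs(x)(impl_abs(x)) for x in range(-1000, 1001))
print("spec holds on the bounded domain")
```

A genuine formal proof would establish the property for all integers, not just a tested range; the bounded check is closer to exhaustive testing.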

Applied computer science

Applied computer science aims at identifying certain computer science concepts that can be used directly in solving real-world problems.

Artificial intelligence

Main article: Artificial intelligence

This branch of computer science aims to or is required to synthesise goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence (AI) research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting-point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered although the "Turing test" is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.

Machine learning, Computer vision, Image processing, Pattern recognition, Data mining, Evolutionary computation, Knowledge representation, Natural language processing, Robotics

Computer architecture and engineering

Main articles: Computer architecture and Computer engineering

Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory.[43] The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.

Digital logic, Microarchitecture, Multiprocessing, Ubiquitous computing, Systems architecture, Operating systems

Computer performance analysis

Main article: Computer performance

Computer performance analysis is the study of work flowing through computers with the general goals of improving throughput, controlling response time, using resources efficiently, eliminating bottlenecks, and predicting performance under anticipated peak loads.[44]

Computer graphics and visualization

Main article: Computer graphics (computer science)

Computer graphics is the study of digital visual content, and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.

Computer security and cryptography

Main articles: Computer security and Cryptography

Computer security is a branch of computer technology whose objective includes protection of information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of hiding information (encryption) and, conversely, deciphering it (decryption). Modern cryptography is largely related to computer science, since the security of many encryption and decryption algorithms rests on computational complexity.
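A deliberately insecure toy sketch of the encryption/decryption idea: a repeating-key XOR cipher, in which applying the same operation twice recovers the plaintext. Real cryptography relies on far stronger constructions, and repeating-key XOR is trivially breakable:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with a repeating key; applying it twice decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"secret"
ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext
assert xor_cipher(ciphertext, key) == plaintext  # decryption recovers the message
```

The point of the example is only the symmetry of encryption and decryption; modern ciphers like AES are designed so that recovering the plaintext without the key is computationally infeasible.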

Computational science

Computational science (or scientific computing) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. In practical use, it is typically the application of computer simulation and other forms of computation to problems in various scientific disciplines.
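A small scientific-computing sketch in this spirit: Euler's method, the simplest numerical scheme for an ordinary differential equation, checked against the exact solution of dy/dt = y:

```python
import math

def euler(f, y0, t0, t1, steps):
    """Fixed-step Euler method: advance y by f(t, y) * dt at each step."""
    dt = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += f(t, y) * dt
        t += dt
    return y

# dy/dt = y with y(0) = 1 has the exact solution e^t, so y(1) = e.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100_000)
print(abs(approx - math.e) < 1e-3)  # True - the error shrinks as steps grow
```

Production solvers use higher-order methods (e.g. Runge-Kutta) and adaptive step sizes, but the structure is the same: replace a continuous model with many small discrete updates.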

Numerical analysis, Computational physics, Computational chemistry, Bioinformatics

Computer networks

Main article: Computer network

This branch of computer science aims to manage networks between computers worldwide.

Concurrent, parallel and distributed systems

Main articles: Concurrency (computer science) and Distributed computing

Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi and the Parallel Random Access Machine model. A distributed system extends the idea of concurrency onto multiple computers connected through a network. Computers within the same distributed system have their own private memory, and information is often exchanged among them to achieve a common goal.
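The coordination that interacting concurrent computations require can be sketched with several threads updating shared state under a lock. A minimal Python illustration:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:  # without the lock, increments could interleave and be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 - the lock makes each shared update atomic
```

Process calculi and Petri nets model exactly these kinds of interleavings and synchronizations at a mathematical level.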

Databases

Main article: Database

A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
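A minimal illustration of a database management system driven through a query language, using Python's built-in SQLite bindings (the table and rows are invented examples):

```python
import sqlite3

# An in-memory relational database, queried through SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("SICP", 1985), ("TAOCP", 1968), ("CLRS", 1990)])
rows = conn.execute(
    "SELECT title FROM books WHERE year < 1990 ORDER BY year").fetchall()
print(rows)  # [('TAOCP',), ('SICP',)]
conn.close()
```

The declarative query states what data is wanted; the database management system decides how to store, index, and retrieve it.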

Software engineering

Main article: Software engineering

See also: Computer programming

Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software; it does not just deal with the creation or manufacture of new software, but also its internal maintenance and arrangement. Both computer applications software engineers and computer systems software engineers were projected to be among the fastest growing occupations from 2008 to 2018.

The great insights of computer science

The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science[45]

Leibniz's, Boole's, Alan Turing's, Shannon's, & Morse's insight: There are only two objects that a computer has to deal with in order to represent "anything"

All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on"/"off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.).
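This first insight can be demonstrated directly: both a number and a short text reduce to strings of 0s and 1s, and the encoding is reversible:

```python
# Any data - numbers, text, images - reduces to sequences of 0s and 1s.
number = 42
text = "Hi"

number_bits = format(number, "08b")  # '00101010'
text_bits = "".join(format(b, "08b") for b in text.encode("ascii"))

print(number_bits)  # 00101010
print(text_bits)    # 0100100001101001  ('H' = 72, 'i' = 105)
assert int(number_bits, 2) == number  # the encoding is reversible
```

Images, audio, and programs themselves are encoded the same way, just with longer bit strings and agreed-upon formats.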

See also: Digital physics

Alan Turing's insight: There are only five actions that a computer has to perform in order to do "anything"

Every algorithm can be expressed in a language for a computer consisting of only five basic instructions:

* move left one location

* move right one location

* read symbol at current location

* print 0 at current location

* print 1 at current location
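These five primitives can be sketched as a tiny simulator. The transition-table format and the bit-inverting machine below are illustrative choices, not a standard encoding:

```python
def run_turing_machine(program, tape_str, state="start", max_steps=1000):
    """Minimal Turing machine: the table maps (state, symbol) to
    (symbol to print, head move L/R, next state)."""
    tape = dict(enumerate(tape_str))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # read symbol at current location
        write, move, state = program[(state, symbol)]
        tape[head] = write                # print 0 or 1 (or blank)
        head += 1 if move == "R" else -1  # move right / left one location
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A machine that inverts every bit of its input, halting at the first blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "L", "halt"),
}
print(run_turing_machine(invert, "10110"))  # 01001
```

Despite its simplicity, the model is universal: any algorithm can in principle be expressed as such a transition table.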

See also: Turing machine

Böhm and Jacopini's insight: There are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything"

Only three rules are needed to combine any set of basic instructions into more complex ones:

sequence:

first do this; then do that

selection:

IF such-and-such is the case,

THEN do this

ELSE do that

repetition:

WHILE such-and-such is the case DO this

Note that the three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming).
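All three rules can be seen in one small function; sequence, selection, and repetition are marked in the comments:

```python
def count_even(numbers):
    total = 0                    # sequence: do this ...
    i = 0                        # ... then do that
    while i < len(numbers):      # repetition: WHILE such-and-such DO this
        if numbers[i] % 2 == 0:  # selection: IF ... THEN ...
            total += 1
        else:                    # ... ELSE do that
            pass                 # odd numbers are skipped
        i += 1
    return total

print(count_even([1, 2, 3, 4, 6]))  # 3
```

Every structured program, however large, is built by nesting and chaining exactly these three forms.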

See also: Elementary function arithmetic § Friedman's grand conjecture

Academia

Further information: List of computer science conferences and Category:Computer science journals

Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications.[46][47] One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.[48]

Education

See also: Women in computing

Since computer science is a relatively new field, it is not as widely taught in schools and universities as other academic subjects. For example, in 2014, Code.org estimated that only 10 percent of high schools in the United States offered computer science education.[49] A 2010 report by the Association for Computing Machinery (ACM) and the Computer Science Teachers Association (CSTA) revealed that only 14 out of 50 states have adopted significant education standards for high school computer science.[50] However, computer science education is growing. Some countries, such as Israel, New Zealand and South Korea, have already included computer science in their respective national secondary education curricula.[51][52] Several countries are following suit.[53]

In most countries, there is a significant gender gap in computer science education. For example, in the U.S. about 20% of computer science degrees in 2012 were conferred to women.[54] This gender gap also exists in other Western countries.[55] However, in some parts of the world, the gap is small or nonexistent. In 2011, approximately half of all computer science degrees in Malaysia were conferred to women.[56] In 2001, women made up 54.5% of computer science graduates in Guyana.[55]

"Electrical and computer engineering" redirects here. For content about computer engineering, see Computer engineering.

Electrical engineers design complex power systems and electronic circuits.

Electrical engineering is a field of engineering that generally deals with the study and application of electricity, electronics, and electromagnetism. This field first became an identifiable occupation in the latter half of the 19th century aftercommercialization of the electric telegraph, the telephone, and electric power distribution and use. Subsequently, broadcasting and recording media made electronics part of daily life. The invention of the transistor, and later the integrated circuit, brought down the cost of electronics to the point where they can be used in almost any household object.

Electrical engineering has now subdivided into a wide range of subfields including electronics, digital computers, power engineering, telecommunications, control systems, radio-frequency engineering, signal processing, instrumentation, and microelectronics. The subject of electronic engineering is often treated as its own subfield but it intersects with all the other subfields, including the power electronics of power engineering.

Electrical engineers typically hold a degree in electrical engineering or electronic engineering. Practicing engineers may have professional certification and be members of a professional body. Such bodies include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET).

Electrical engineers work in a very wide range of industries and the skills required are likewise variable. These range from basic circuit theory to the management skills required of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to a top-end analyzer to sophisticated design and manufacturing software.

Contents

1 History

1.1 19th century

1.2 More modern developments

1.3 Solid-state transistors

2 Subdisciplines

2.1 Power

2.2 Control

2.3 Electronics

2.4 Microelectronics

2.5 Signal processing

2.6 Telecommunications

2.7 Instrumentation

2.8 Computers

2.9 Related disciplines

3 Education

4 Practicing engineers

5 Tools and work

6 See also

7 Notes

8 References

9 Further reading

10 External links

History

Main article: History of electrical engineering

Electricity has been a subject of scientific interest since at least the early 17th century. The first electrical engineer was probably William Gilbert, who designed the versorium: a device that detected the presence of statically charged objects. He was also the first to draw a clear distinction between magnetism and static electricity and is credited with establishing the term electricity.[1] In 1775 Alessandro Volta's scientific experimentation produced the electrophorus, a device that generated a static electric charge, and by 1800 Volta had developed the voltaic pile, a forerunner of the electric battery.[2]

19th century

The discoveries of Michael Faraday formed the foundation of electric motor technology

However, it was not until the 19th century that research into the subject started to intensify. Notable developments in this century include the work of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor, of Michael Faraday, the discoverer of electromagnetic induction in 1831, and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in A Treatise on Electricity and Magnetism.[3]

Beginning in the 1830s, efforts were made to apply electricityto practical use in the telegraph. By the end of the 19th century the world had been forever changed by the rapid communication made possible by the engineering development of land-lines, submarine cables, and, from about 1890, wireless telegraphy.

Practical applications and advances in such fields created an increasing need for standardized units of measure. They led to the international standardization of the units volt, ampere, coulomb, ohm, farad, and henry. This was achieved at an international conference in Chicago in 1893.[4] The publication of these standardsformed the basis of future advances in standardisation in various industries, and in many countries the definitions were immediately recognised in relevant legislation.[5]

During these years, the study of electricity was largely considered to be a subfield of physics. It was not until about 1885 that universities and institutes of technology such as the Massachusetts Institute of Technology (MIT) and Cornell University started to offer bachelor's degrees in electrical engineering. The Darmstadt University of Technology founded the first department of electrical engineering in the world in 1882. In that same year, under Professor Charles Cross, MIT began offering the first option of electrical engineering within its physics department.[6] In 1883, Darmstadt University of Technology and Cornell University introduced the world's first bachelor's degree courses of study in electrical engineering, and in 1885 University College London founded the first chair of electrical engineering in Great Britain.[7] The University of Missouri established the first department of electrical engineering in the United States in 1886.[8] Several other schools soon followed suit, including Cornell and the Georgia School of Technology in Atlanta, Georgia.

Thomas Edison, electric light and (DC) power supply networks

Károly Zipernowsky, Ottó Bláthy, Miksa Déri, the ZBD transformer

William Stanley, Jr., transformers

Galileo Ferraris, electrical theory, induction motor

Nikola Tesla, practical polyphase (AC) and induction motor designs

Mikhail Dolivo-Dobrovolsky developed standard 3-phase (AC) systems

Charles Proteus Steinmetz, AC mathematical theories for engineers

Oliver Heaviside, developed theoretical models for electric circuits

During these decades the use of electrical engineering increased dramatically. In 1882, Thomas Edison switched on the world's first large-scale electric power network that provided 110 volts — direct current (DC) — to 59 customers on Manhattan Island in New York City. In 1884, Sir Charles Parsons invented the steam turbine, allowing for more efficient electric power generation. Alternating current, with its ability to transmit power more efficiently over long distances via the use of transformers, developed rapidly in the 1880s and 1890s with transformer designs by Károly Zipernowsky, Ottó Bláthy and Miksa Déri (later called ZBD transformers), Lucien Gaulard, John Dixon Gibbs and William Stanley, Jr. Practical AC motor designs including induction motors were independently invented by Galileo Ferraris and Nikola Tesla and further developed into a practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown.[9] Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering.[10][11] The spread in the use of AC set off in the United States what has been called the War of Currents between a George Westinghouse-backed AC system and a Thomas Edison-backed DC power system, with AC being adopted as the overall standard.[12]

More modern developments

Guglielmo Marconi, known for his pioneering work on long-distance radio transmission

During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation, including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them.

In 1895 Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these "Hertzian waves" into a purpose-built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted the wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland, a distance of 2,100 miles (3,400 km).[13]

In 1897, Karl Ferdinand Braun introduced the cathode ray tube as part of an oscilloscope, a crucial enabling technology for electronic television.[14] John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode.[15]

In 1920 Albert Hull developed the magnetron which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer.[16][17] In 1934 the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936.[18]

In 1941 Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. In 1943 Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer.[19] In 1946 the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives, including the Apollo program which culminated in landing astronauts on the Moon.[20]

Solid-state transistors

The invention of the transistor in late 1947 by William B. Shockley, John Bardeen, and Walter Brattain of the Bell Telephone Laboratories opened the door for more compact devices and led to the development of the integrated circuit in 1958 by Jack Kilby and independently in 1959 by Robert Noyce.[21] Starting in 1968, Ted Hoff and a team at the Intel Corporation invented the first commercial microprocessor, which foreshadowed the personal computer. The Intel 4004 was a four-bit processor released in 1971, but in 1973 the Intel 8080, an eight-bit processor, made the first personal computer, the Altair 8800, possible.[22]

Subdisciplines

Electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes certain fields, such as electronic engineering and computer engineering, are considered separate disciplines in their own right.

Power

Main article: Power engineering

Power pole

Power engineering deals with the generation, transmission and distribution of electricity as well as the design of a range of related devices.[23] These include transformers, electric generators, electric motors, high voltage engineering, and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it.[24] Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems. Future developments include satellite-controlled power systems with real-time feedback to prevent power surges and blackouts.

Control

Main article: Control engineering

Control systems play a critical role in space flight.

Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner.[25] To implement such controllers, electrical engineers may use electrical circuits, digital signal processors, microcontrollers and PLCs (programmable logic controllers). Control engineering has a wide range of applications, from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.[26] It also plays an important role in industrial automation.

Control engineers often utilize feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.[27]
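The cruise-control feedback loop described above can be sketched as a simple proportional controller. This is an illustrative sketch only: the gain, drag coefficient and time step below are arbitrary values, not taken from any real vehicle.

```python
def simulate_cruise_control(target, initial, kp=0.5, drag=0.1, dt=0.1, steps=50):
    """Proportional feedback: at every step the measured speed is compared
    with the set point and motor power is adjusted in proportion to the
    error. All constants are illustrative, not real vehicle parameters."""
    speed = initial
    for _ in range(steps):
        error = target - speed                 # feedback: set point minus measurement
        power = kp * error                     # controller output
        speed += (power - drag * speed) * dt   # crude vehicle dynamics with drag
    return speed
```

Note that a purely proportional controller settles slightly below the set point (a steady-state error); real cruise controls typically add integral action to remove it.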

Electronics

Main article: Electronic engineering

Electronic components

Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes and transistors to achieve a particular functionality.[24] The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit. Another example (of a pneumatic signal conditioner) is shown in the adjacent photograph.
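The station-selecting behaviour of a tuned circuit follows from its resonant frequency, f = 1/(2π√(LC)). A quick sketch, with component values chosen purely for illustration:

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC tuned circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Example: 100 uH with about 253.3 pF resonates near 1 MHz,
# i.e. within the AM broadcast band.
f = resonant_frequency_hz(100e-6, 253.3e-12)
```

Varying the capacitance (as a radio's tuning knob does) moves this resonant frequency, selecting a different station.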

Prior to the Second World War, the subject was commonly known as radio engineering and was largely restricted to aspects of communications and radar, commercial radio and early television.[24] Later, in the postwar years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering.

Before the invention of the integrated circuit in 1959,[28] electronic circuits were constructed from discrete components that could be manipulated by humans. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors,[29] into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today.

Microelectronics

Main article: Microelectronics

Microprocessor

Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component.[30] The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors etc.) can be created at a microscopic level. Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below-100 nm processing having been standard since about 2002.[31]

Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and materials science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.[32]

Signal processing

Main article: Signal processing

A Bayer filter on a CCD requires signal processing to get a red, green, and blue value at each pixel.

Signal processing deals with the analysis and manipulation of signals.[33] Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals.[34]
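For digital signals, even a filtering step like those described can be very small. A minimal moving-average (FIR low-pass) filter, written as a generic sketch rather than production DSP code:

```python
def moving_average(samples, window=3):
    """Simple FIR low-pass filter: each output sample is the mean of the
    last `window` input samples, smoothing out rapid variations (noise)."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)     # shorter window at the start of the signal
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

Averaging adjacent samples attenuates high-frequency content while passing slow variations, which is the essence of low-pass filtering.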

Signal processing is a very mathematically oriented and intensive area forming the core of digital signal processing, and it is rapidly expanding with new applications in every field of electrical engineering such as communications, control, radar, audio engineering, broadcast engineering, power electronics and biomedical engineering, as many existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems.

DSP processor ICs are found in every type of modern electronic system and product, including SDTV and HDTV sets,[35] radios and mobile communication devices, hi-fi audio equipment, Dolby noise reduction algorithms, GSM mobile phones, MP3 multimedia players, camcorders and digital cameras, automobile control systems, noise-cancelling headphones, digital spectrum analyzers, intelligent missile guidance, radar, GPS-based cruise control systems, and all kinds of image, video, audio and speech processing systems.[36]

Telecommunications

Main article: Telecommunications engineering

Satellite dishes are a crucial component in the analysis of satellite information.

Telecommunications engineering focuses on the transmission of information across a channel such as a coax cable, optical fiber or free space.[37] Transmissions across free space require information to be encoded in a carrier wave to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation.[38] The choice of modulation affects the cost and performance of a system, and these two factors must be balanced carefully by the engineer.
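Amplitude modulation, for example, simply scales a carrier wave by the message signal. The sketch below uses an arbitrary carrier frequency, sample rate and modulation depth chosen for illustration:

```python
import math

def amplitude_modulate(message, carrier_hz=1000.0, sample_rate_hz=8000.0, depth=0.5):
    """AM: output sample n is (1 + depth * m[n]) * cos(2*pi*fc*t),
    so the message rides on the carrier's amplitude envelope."""
    out = []
    for n, m in enumerate(message):
        t = n / sample_rate_hz
        out.append((1.0 + depth * m) * math.cos(2.0 * math.pi * carrier_hz * t))
    return out
```

A receiver recovers the message by tracking that amplitude envelope (demodulation).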

Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption, as this is closely related to their signal strength.[39][40] If the signal strength of a transmitter is insufficient, the signal's information will be corrupted by noise.
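The trade-off between signal strength and noise is usually expressed as a signal-to-noise ratio in decibels. A one-line sketch:

```python
import math

def snr_db(signal_power_w, noise_power_w):
    """Signal-to-noise ratio in decibels: 10 * log10(Ps / Pn)."""
    return 10.0 * math.log10(signal_power_w / noise_power_w)
```

Because the scale is logarithmic, doubling transmitter power buys only about 3 dB of SNR, which is why power consumption is such a central design constraint.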

Instrumentation

Main article: Instrumentation engineering

Flight instruments provide pilots with the tools to control aircraft analytically.

Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow and temperature.[41] The design of such instrumentation requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.[42]
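As a sketch of the thermocouple principle, the Seebeck EMF can be treated as linear in the junction temperature difference over a modest range. The 41 µV/K coefficient below roughly approximates a type-K thermocouple and is an assumption for illustration; real instruments use polynomial reference tables.

```python
def thermocouple_delta_t_k(emf_v, seebeck_v_per_k=41e-6):
    """Idealised linear thermocouple model: delta_T = V / S.
    S = 41 uV/K is an illustrative type-K-like coefficient; real
    instruments interpolate standardised reference tables instead."""
    return emf_v / seebeck_v_per_k
```

For instance, a measured EMF of 4.1 mV corresponds to roughly a 100 K difference between the hot and cold junctions under this linear model.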

Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant.[43] For this reason, instrumentation engineering is often viewed as the counterpart of control engineering.

Computers

Main article: Computer engineering

Supercomputers are used in fields as diverse as computational biology and geographic information systems.

Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware, the design of PDAs, tablets and supercomputers, or the use of computers to control an industrial plant.[44] Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline.[45] Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of devices including video game consoles and DVD players.

Related disciplines

The Bird VIP Infant ventilator

Mechatronics is an engineering discipline which deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems,[46] heating, ventilation and air-conditioning systems,[47] and various subsystems of aircraft and automobiles.[48]

The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy,[49] in digital projectors to create sharper images, and in inkjet printers to create nozzles for high-definition printing. In the future, it is hoped that such devices will help build tiny implantable medical devices and improve optical communication.[50]

Biomedical engineering is another related discipline, concerned with the design of medical equipment. This includes fixed equipment such as ventilators, MRI scanners[51] and electrocardiograph monitors as well as mobile equipment such as cochlear implants, artificial pacemakers and artificial hearts.

Aerospace engineering and robotics are also related disciplines; recent examples include electric propulsion and ion propulsion.

Education

Main article: Education and training of electrical and electronics engineers

Oscilloscope

Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electrical engineering technology,[52] or electrical and electronic engineering.[53][54] The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science, depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering.[55] Initially such topics cover most, if not all, of the subdisciplines of electrical engineering. At some schools, students can then choose to emphasize one or more subdisciplines towards the end of their courses of study.

Typical electrical engineering diagram used as a troubleshooting tool

At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered.[56]

Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (M.Eng./M.Sc.), a Master of Engineering Management, a Doctor of Philosophy (Ph.D.) in Engineering, an Engineering Doctorate (Eng.D.), or an Engineer's degree. The master's and engineer's degrees may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia. In the United Kingdom and some other European countries, Master of Engineering is often considered to be an undergraduate degree of slightly longer duration than the Bachelor of Engineering rather than postgraduate.[57]

Practicing engineers

Belgian electrical engineers inspecting the rotor of a 40,000 kilowatt turbine of the General Electric Company in New York City

In most countries, a bachelor's degree in engineering represents the first step towards professional certification, and the degree program itself is certified by a professional body.[58] After completing a certified degree program, the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified, the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered Engineer or Incorporated Engineer (in India, Pakistan, the United Kingdom, Ireland and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).

The IEEE corporate office is on the 17th floor of 3 Park Avenue in New York City

The advantages of certification vary depending upon location. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients".[59] This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act.[60] In other countries, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion.[61] In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails, he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations such as building codes and legislation pertaining to environmental law.

Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually.[62] The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe.[63][64] Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field and a habit of continued learning are therefore essential to maintaining proficiency. MIET (Member of the Institution of Engineering and Technology) is recognised in Europe as an electrical and computer (technology) engineer.[65]

In Australia, Canada and the United States, electrical engineers make up around 0.25% of the labor force.

Tools and work

From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunication systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances or the electrical control of industrial machinery.[66]

Satellite communications is typical of what electrical engineers work on.

Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electrical systems. Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others.

The Shadow robot hand system

Although most electrical engineers will understand basic circuit theory (that is, the interactions of elements such as resistors, capacitors, diodes, transistors and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunication systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy and the ability to understand the technical language and concepts that relate to electrical engineering.[67]

A laser bouncing down an acrylic rod, illustrating the total internal reflection of light in a multi-mode optical fiber.

A wide range of instrumentation is used by electrical engineers. For simple control circuits and alarms, a basic multimeter measuring voltage, current and resistance may suffice. Where time-varying signals need to be studied, the oscilloscope is also a ubiquitous instrument. In RF engineering and high-frequency telecommunications, spectrum analyzers and network analyzers are used. In some disciplines, safety can be a particular concern with instrumentation. For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids.[68] Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low-voltage equivalents, safety and calibration issues make them very different.[69] Many disciplines of electrical engineering use tests specific to their discipline. Audio electronics engineers use audio test sets consisting of a signal generator and a meter, principally to measure level but also other parameters such as harmonic distortion and noise. Likewise, information technology engineers have their own test sets, often specific to a particular data format, and the same is true of television broadcasting.

Radome at the Misawa Air Base Misawa Security Operations Center, Misawa, Japan

For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules.[70] Many senior engineers manage a team of technicians or other engineers and for this reason projectmanagement skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important.

The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine lab environment of a fabrication plant, the offices of a consulting firm or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers and other engineers.[71]

Electrical engineering has an intimate relationship with the physical sciences. For instance, the physicist Lord Kelvin played a major role in the engineering of the first transatlantic telegraph cable.[72] Conversely, the engineer Oliver Heaviside produced major work on the mathematics of transmission on telegraph cables.[73] Electrical engineers are often required on major science projects. For instance, large particle accelerators such as CERN need electrical engineers to deal with many aspects of the project: from power distribution, to instrumentation, to the manufacture and installation of the superconducting electromagnets.[74][75]

An NCR Personas 75-Series interior, multi-function ATM in the United States

Smaller indoor ATMs dispense money inside convenience stores and other busy areas, such as this off-premises Wincor Nixdorf mono-function ATM in Sweden.

An automated teller machine or automatic teller machine[1][2][3] (ATM, American, Australian, Malaysian, Singaporean, Indian, Maldivian, Hiberno, Philippine and Sri Lankan English), also known as an automated banking machine (ABM, Canadian English), cash machine, cashpoint, cashline, minibank, or colloquially hole in the wall (British and South African English), is an electronic telecommunications device that enables the customers of a financial institution to perform financial transactions, particularly cash withdrawal, without the need for a human cashier, clerk or bank teller.

On most modern ATMs, the customer is identified by inserting a plastic ATM card with a magnetic stripe or a plastic smart card with a chip that contains a unique card number and some security information such as an expiration date or CVVC (CVV). Authentication is provided by the customer entering a personal identification number (PIN).
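PIN checking can be sketched as a salted-hash verification. This is a generic illustration only: real ATM networks verify PINs inside tamper-resistant hardware security modules using schemes such as ISO 9564 PIN blocks, not application-level hashing like this.

```python
import hashlib
import hmac
import os

def make_pin_record(pin: str):
    """Store a random salt and a slow salted hash of the PIN,
    never the PIN itself (illustrative scheme only)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify_pin(pin: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the salted hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The point of the design is that a stolen record reveals neither the PIN nor anything reusable against other accounts.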

Using an ATM, customers can access their bank deposit or credit accounts in order to make a variety of transactions such as cash withdrawals, balance checks, or mobile phone credit purchases. If the currency being withdrawn from the ATM is different from that in which the bank account is denominated, the money will be converted at an official exchange rate. Thus, ATMs often provide the best possible exchange rates for foreign travellers, and are widely used for this purpose.[4] However, many modern ATMs do not accept or process deposits of foreign currency, and may reject them outright.
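The conversion step amounts to one multiplication at the quoted rate. A minimal sketch, with a hypothetical rate expressed as account-currency units per unit of cash dispensed:

```python
def amount_debited(cash_amount, rate_account_per_cash):
    """Amount debited from the account when withdrawing foreign cash,
    converted at the quoted rate and rounded to two decimal places
    (real networks also apply fees and use network-defined rounding)."""
    return round(cash_amount * rate_account_per_cash, 2)
```

For example, withdrawing 100 units of foreign cash at a hypothetical rate of 1.2345 debits 123.45 units from the account, before any fees.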

History

An old Nixdorf ATM

The idea of out-of-hours cash distribution developed from bankers' needs in Asia (Japan), Europe (Sweden and the United Kingdom) and North America (the United States).[5][6] Little is known of the Japanese device. In the US patent record, Luther George Simjian has been credited with developing a "prior art device", specifically his 132nd patent (US3079603), first filed on 30 June 1960 (and granted 26 February 1963). The roll-out of this machine, called the Bankograph, was delayed by a couple of years, due in part to Simjian's Reflectone Electronics Inc. being acquired by Universal Match Corporation.[7] An experimental Bankograph was installed in New York City in 1961 by the City Bank of New York, but removed after six months due to the lack of customer acceptance. The Bankograph was an automated envelope deposit machine (accepting coins, cash and cheques) and did not have cash dispensing features.[8][9]

Actor Reg Varney using the world's first cash machine in Enfield Town, north London on 27 June 1967

It is widely accepted that the first ATM was put into use by Barclays Bank in its Enfield Town branch in north London, United Kingdom, on 27 June 1967.[10] This machine was inaugurated by English comedy actor Reg Varney.[11] This instance of the invention is credited to John Shepherd-Barron of printing firm De La Rue,[12] who was awarded an OBE in the 2005 New Year Honours.[13] This design used paper cheques issued by a teller or cashier, marked with carbon-14 for machine readability and security, which in a later model were matched with a personal identification number (PIN).[12][14] Shepherd-Barron stated: "It struck me there must be a way I could get my own money, anywhere in the world or the UK. I hit upon the idea of a chocolate bar dispenser, but replacing chocolate with cash."[12]

The Barclays-De La Rue machine (called De La Rue Automatic Cash System or DACS)[15] beat the Swedish savings banks' and a company called Metior's machine (a device called Bankomat) by a mere nine days, and Westminster Bank's Smith Industries-Chubb system (called Chubb MD2) by a month.[16] The online version of the Swedish machine is listed as having been operational on 6 May 1968, and is claimed to be the first online cash machine in the world (ahead of a similar claim by IBM and Lloyds Bank in 1971).[17] A collaboration between a small start-up called Speytec and Midland Bank developed a fourth machine, which was marketed after 1969 in Europe and the US by the Burroughs Corporation. The patent for this device (GB1329964) was filed in September 1969 (and granted in 1973) by John David Edwards, Leonard Perkins, John Henry Donald, Peter Lee Chappell, Sean Benjamin Newcombe and Malcom David Roe.

ATM of Sberbank in Tolyatti, Russia

Both the DACS and MD2 accepted only a single-use token or voucher, which was retained by the machine, while the Speytec worked with a card with a magnetic strip on the back. They used principles including carbon-14 and low-coercivity magnetism in order to make fraud more difficult. The idea of a PIN stored on the card was developed by a British engineer working on the MD2, James Goodfellow, in 1965 (patent GB1197183, filed on 2 May 1966 with Anthony Davies). The essence of this system was that it enabled the verification of the customer with the debited account without human intervention. This patent is also the earliest instance of a complete "currency dispenser system" in the patent record. The patent was filed on 5 March 1968 in the US (US 3543904) and granted on 1 December 1970. It had a profound influence on the industry as a whole. Not only did future entrants into the cash dispenser market such as NCR Corporation and IBM licence Goodfellow's PIN system, but a number of later patents reference this patent as a "prior art device".[18]

On 9 January 1969, the ABC newspaper (Madrid edition) ran an article about the new Bancomat, a teller machine installed in downtown Madrid, Spain, by Banesto, dispensing 1,000-peseta bills (1 to 5 per withdrawal). Each user had to enter a personal security key using a combination of the ten numeric buttons.[19] In March of the same year, an ad with instructions for using the Bancomat was published in the same newspaper.[20] The Bancomat was the first cash machine installed in Spain and one of the first in Europe.

1969 ABC news report on the introduction of ATMs in Sydney, Australia. People could only receive AUS $25 at a time and the bank card was sent back to the user at a later date.

Docutel United States 1969

After looking first-hand at the experiences in Europe, in 1968 the networked ATM was pioneered in the US, in Dallas, Texas, by Donald Wetzel, who was a department head at an automated baggage-handling company called Docutel. Recognised by the United States Patent Office for having invented the ATM are Kenneth S. Goldstein and John D. White, under US Patent 3,662,343. Recognised for having invented the ATM network are Fred J. Gentile and Jack Wu Chang, under US Patent 3,833,885. On September 2, 1969, Chemical Bank installed the first ATM in the U.S. at its branch in Rockville Centre, New York. The first ATMs were designed to dispense a fixed amount of cash when a user inserted a specially coded card.[21] A Chemical Bank advertisement boasted "On Sept. 2 our bank will open at 9:00 and never close again."[22] Chemical's ATM, initially known as the Docuteller, was designed by Donald Wetzel and his company Docutel. Chemical executives were initially hesitant about the electronic banking transition given the high cost of the early machines, and were also concerned that customers would resist having machines handle their money.[23] In 1995, the Smithsonian National Museum of American History recognised Docutel and Wetzel as the inventors of the networked ATM.[24]

Continued Improvements

The first modern ATM was an IBM 2984 and came into use at Lloyds Bank, Brentwood High Street, Essex, England in December 1972. The IBM 2984 was designed at the request of Lloyds Bank. The 2984 Cash Issuing Terminal was the first true ATM, similar in function to today's machines, and was named Cashpoint by Lloyds Bank; Cashpoint is still a registered trademark of Lloyds TSB in the UK. All were online and issued a variable amount which was immediately deducted from the account. A small number of 2984s were supplied to a US bank. A couple of well-known historical models of ATMs include the IBM 3614, IBM 3624 and 473x series, Diebold 10xx and TABS 9000 series, and NCR 1780 and earlier NCR 770 series.

The first switching system to enable shared automated teller machines between banks went into production operation on February 3, 1979 in Denver, Colorado, in an effort by Colorado National Bank of Denver and Kranzley and Company of Cherry Hill, New Jersey.[25]

The newest ATM at Royal Bank of Scotland allows customers to withdraw cash up to £100 without a card by inputting a six-digit code requested through their smartphones.[26]

Location

An ATM Encrypting PIN Pad (EPP) with German markings

ATM in Vatican City with menu in Latin

ATMs are placed not only near or inside the premises of banks, but also in locations such as shopping centers/malls, airports, grocery stores, petrol/gas stations, restaurants, or anywhere frequented by large numbers of people. There are two types of ATM installations: on- and off-premises. On-premises ATMs are typically more advanced, multi-function machines that complement a bank branch's capabilities, and are thus more expensive. Off-premises machines are deployed by financial institutions and Independent Sales Organisations (ISOs) where there is a simple need for cash, so they are generally cheaper single-function devices. In Canada, ATMs (also known there as ABMs) not operated by a financial institution are known as "white-label ABMs".

In the U.S., Canada and some Gulf countries, banks often have drive-thru lanes providing access to ATMs using an automobile.

Many ATMs have a sign above them, indicating the name of the bank or organisation owning the ATM and possibly including the list of ATM networks to which that machine is connected.

ATMs can also be found in railway stations and metro stations. In recent times, countries like India and some countries in Africa have been installing solar-powered ATMs in rural areas that do not require air conditioning.[27]

Financial networks

An ATM in the Netherlands. The logos of a number of interbank networks this ATM is connected to are shown

Most ATMs are connected to interbank networks, enabling people to withdraw and deposit money from machines not belonging to the bank where they have their accounts or in the countries where their accounts are held (enabling cash withdrawals in local currency). Some examples of interbank networks include NYCE, PULSE, PLUS, Cirrus, AFFN, Interac, Interswitch, STAR, LINK, MegaLink and BancNet.

ATMs rely on authorisation of a financial transaction by the card issuer or other authorising institution on a communications network. This is often performed through an ISO 8583 messaging system.
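
As a rough illustration of the ISO 8583 layout, a request message starts with a four-digit message type indicator (MTI) followed by a bitmap announcing which data elements are present. The sketch below is a simplification with hypothetical field values, assuming fixed-length fields; real messages use variable-length encodings and network-specific headers.

```python
def primary_bitmap(field_numbers):
    """Encode which data elements are present as a 64-bit bitmap.

    Bit 1 (most significant) marks field 1, bit 64 marks field 64.
    Returned as 16 uppercase hex digits, as commonly transmitted.
    """
    bits = 0
    for f in field_numbers:
        bits |= 1 << (64 - f)
    return format(bits, "016X")

# Hypothetical 0200 financial request carrying field 3 (processing
# code) and field 4 (transaction amount), both fixed-length here.
fields = {3: "010000", 4: "000000010000"}  # withdrawal of 100.00
message = "0200" + primary_bitmap(fields) + "".join(
    fields[f] for f in sorted(fields))
print(message)  # 02003000000000000000010000000000010000
```

The receiving host parses the MTI, reads the bitmap, and then knows exactly which fields follow and in what order.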

Many banks charge ATM usage fees. In some cases, these fees are charged solely to users who are not customers of the bank where the ATM is installed; in other cases, they apply to all users.

In order to allow a more diverse range of devices to attach to their networks, some interbank networks have passed rules expanding the definition of an ATM to be a terminal that either has the vault within its footprint or utilises the vault or cash drawer within the merchant establishment, which allows for the use of a scrip cash dispenser.

A Diebold 1063ix with a dial-up modem visible at the base

ATMs typically connect directly to their host or ATM Controller via either an ADSL or dial-up modem over a telephone line, or directly on a leased line. Leased lines are preferable to plain old telephone service (POTS) lines because they require less time to establish a connection. Less-trafficked machines will usually rely on a dial-up modem on a POTS line rather than using a leased line, since a leased line may be more expensive to operate than a POTS line. That dilemma may be solved as high-speed Internet VPN connections become more ubiquitous. Common lower-level communication protocols used by ATMs to communicate back to the bank include SNA over SDLC, TC500 over Async, X.25, and TCP/IP over Ethernet.

In addition to methods employed for transaction security and secrecy, all communications traffic between the ATM and the Transaction Processor may also be encrypted using methods such as SSL.[28]

Global use

ATMs at the railway station in Poznań

There are no hard international or government-compiled numbers totaling the complete number of ATMs in use worldwide. Estimates developed by ATMIA place the number of ATMs currently in use at over 2.2 million, or approximately 1 ATM per 3,000 people in the world.[29]
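
The "1 per 3,000" figure is easy to sanity-check with a line of arithmetic; the world-population value below is an assumed round number for the period, not a figure from the source.

```python
atms = 2.2e6        # ATMIA estimate of ATMs in use worldwide
population = 6.6e9  # assumed world population, a rough round figure
people_per_atm = population / atms
print(people_per_atm)  # 3000.0
```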

To simplify the analysis of ATM usage around the world, financial institutions generally divide the world into seven regions, due to the penetration rates, usage statistics, and features deployed. Four regions (USA, Canada, Europe, and Japan) have high numbers of ATMs per million people.[30][31] Despite the large number of ATMs, there is additional demand for machines in the Asia/Pacific area as well as in Latin America.[32][33] ATMs have yet to reach high numbers in the Near East and Africa.[34]

One of the world's most northerly installed ATMs is located at Longyearbyen, Svalbard, Norway.

The world's most southerly installed ATM is located at McMurdo Station, in New Zealand's Ross Dependency in Antarctica, where it has operated since 1997.[35] There are two ATMs at McMurdo, owned by Wells Fargo,[36] only one of which is active at any time; they are serviced once every two years by NCR.[37]

According to international statistics, the highest installed ATM in the world is located at Nathu La Pass, in India, installed by the Indian Axis Bank at 4,023 metres (13,199 ft).[38] According to Mainland Chinese media and CPC statistics, the highest installed ATM in the world is located in Nagchu County, Tibet, China, at 4,500 metres, allegedly installed by the Agricultural Bank of China.[39][40]

Israel has the world's lowest installed ATM, at Ein Bokek at the Dead Sea, installed independently by a grocery store at 421 metres below sea level.[41]

While ATMs are ubiquitous on modern cruise ships, ATMs can also be found on some US Navy ships.[42]

Welcome message displayed on the world's most northerly ATM located in the post office at Longyearbyen

Hardware

A block diagram of an ATM

An ATM is typically made up of the following devices:

CPU (to control the user interface and transaction devices)

Magnetic or chip card reader (to identify the customer)

PIN pad EPP4 (similar in layout to a touch tone or calculator keypad), manufactured as part of a secure enclosure

Secure cryptoprocessor, generally within a secure enclosure

Display (used by the customer for performing the transaction)

Function key buttons (usually close to the display) or a touchscreen (used to select the various aspects of the transaction)

Record printer (to provide the customer with a record of the transaction)

Vault (to store the parts of the machinery requiring restricted access)

Housing (for aesthetics and to attach signage to)

Sensors and indicators

Due to heavier computing demands and the falling price of personal computer-like architectures, ATMs have moved away from custom hardware architectures using microcontrollers or application-specific integrated circuits, and have adopted the hardware architecture of a personal computer, such as USB connections for peripherals and Ethernet and IP communications, along with personal computer operating systems.

Business owners often lease ATM terminals from ATM service providers; however, thanks to economies of scale, the price of equipment has dropped to the point where many business owners are simply paying for ATMs using a credit card.

New ADA voice and text-to-speech guidelines, imposed in 2010 but not required until March 2012,[43] have forced many ATM owners to either upgrade non-compliant machines or dispose of them if they are not upgradable, and purchase new compliant equipment. This has created an avenue for hackers and thieves to obtain ATM hardware at junkyards from improperly disposed decommissioned ATMs.[44]

Two Loomis employees refilling an ATM at the Downtown Seattle REI

The vault of an ATM is within the footprint of the device itself and is where items of value are kept. Scrip cash dispensers do not incorporate a vault.

Mechanisms found inside the vault may include:

Dispensing mechanism (to provide cash or other items of value)

Deposit mechanism including a check processing module and bulk note acceptor (to allow the customer to make deposits)

Security sensors (magnetic, thermal, seismic, gas)

Locks (to ensure controlled access to the contents of the vault)

Journaling systems; many are electronic (a sealed flash memory device based on in-house standards) or a solid-state device (an actual printer) which accrues all records of activity, including access timestamps, number of notes dispensed, etc. This is considered sensitive data and is secured in a similar fashion to the cash, as it is a similar liability.

ATM vaults are supplied by manufacturers in several grades. Factors influencing vault grade selection include cost, weight, regulatory requirements, ATM type, operator risk avoidance practices and internal volume requirements.[45] Industry standard vault configurations include Underwriters Laboratories UL-291 "Business Hours" and Level 1 Safes,[46] RAL TL-30 derivatives,[47] and CEN EN 1143-1 - CEN III and CEN IV.[48][49]

ATM manufacturers recommend that an ATM vault be attached to the floor to prevent theft,[50] though there is a record of a theft conducted by tunnelling into an ATM floor.[citation needed]

Software

With the migration to commodity personal computer hardware, standard commercial "off-the-shelf" operating systems and programming environments can be used inside of ATMs. Typical platforms previously used in ATM development include RMX and OS/2.

A Wincor Nixdorf ATM running Windows 2000.

Today the vast majority of ATMs worldwide use a Microsoft Windows operating system, primarily Windows XP Professional or Windows XP Embedded.[citation needed] A small number of deployments may still be running older versions of Windows, such as Windows NT, Windows CE, or Windows 2000.

There is a computer industry security view that general public desktop operating systems have greater risks as operating systems for cash-dispensing machines than other types of operating systems, such as (secure) real-time operating systems (RTOS). RISKS Digest has many articles about cash machine operating system vulnerabilities.[51]

Linux is also finding some reception in the ATM marketplace. An example of this is Banrisul, the largest bank in the south of Brazil, which has replaced the MS-DOS operating systems in its ATMs with Linux. Banco do Brasil is also migrating ATMs to Linux. India-based Vortex Engineering is manufacturing ATMs which operate only with Linux. Common application-layer transaction protocols, such as Diebold 91x (911 or 912) and NCR NDC or NDC+, provide emulation of older generations of hardware on newer platforms, with incremental extensions made over time to address new capabilities; companies like NCR continuously improve these protocols, issuing newer versions (e.g. NCR's AANDC v3.x.y, where x.y are subversions). Most major ATM manufacturers provide software packages that implement these protocols. Newer protocols such as IFX have yet to find wide acceptance by transaction processors.[52]

With the move to a more standardised software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. WOSA/XFS, now known as CEN XFS (or simply XFS), provides a common API for accessing and manipulating the various devices of an ATM. J/XFS is a Java implementation of the CEN XFS API.

While the perceived benefit of XFS is similar to Java's "write once, run anywhere" mantra, different ATM hardware vendors often have different interpretations of the XFS standard. The result of these differences in interpretation is that ATM applications typically use middleware to even out the differences between various platforms.
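
The role of that middleware can be sketched as a thin abstraction layer. Everything below is hypothetical: the class and method names are illustrative stand-ins, not the real CEN XFS API or any vendor's interface.

```python
from abc import ABC, abstractmethod

class Dispenser(ABC):
    """Hypothetical common interface the ATM application codes against."""
    @abstractmethod
    def dispense(self, amount_cents: int) -> bool: ...

class VendorA(Dispenser):
    # Imagined driver whose native call counts whole currency units.
    def dispense(self, amount_cents: int) -> bool:
        return self._native_dispense(amount_cents // 100)
    def _native_dispense(self, units: int) -> bool:
        return units > 0

class VendorB(Dispenser):
    # Imagined driver whose native call counts cents directly.
    def dispense(self, amount_cents: int) -> bool:
        return amount_cents > 0

def withdraw(d: Dispenser, amount_cents: int) -> bool:
    # Application logic is identical regardless of the vendor beneath.
    return d.dispense(amount_cents)

print(withdraw(VendorA(), 10000), withdraw(VendorB(), 10000))  # True True
```

The application layer calls only the common interface, so swapping one vendor's hardware for another means swapping the adapter, not rewriting the application.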

With the onset of Windows operating systems and XFS on ATMs, software applications have the ability to become more intelligent. This has created a new breed of ATM applications, commonly referred to as programmable applications. These types of applications allow for an entirely new host of applications in which the ATM terminal can do more than only communicate with the ATM switch. It is now empowered to connect to other content servers and video banking systems.

Notable ATM software that operates on XFS platforms include Triton PRISM, Diebold Agilis EmPower, NCR APTRA Edge, Absolute Systems AbsoluteINTERACT, KAL Kalignite Software Platform, Phoenix Interactive VISTAatm, Wincor Nixdorf ProTopas, Euronet EFTS and Intertech inter-ATM.

With the move of ATMs to industry-standard computing environments, concern has risen about the integrity of the ATM's software stack.[53]

Security

Main article: Security of automated teller machines

Security, as it relates to ATMs, has several dimensions. ATMs also provide a practical demonstration of a number of security systems and concepts operating together and how various security concerns are dealt with.

Physical

A Wincor Nixdorf Procash 2100xe Frontload that was opened with an angle grinder

An automated teller machine in Dezful, in southwestern Iran

Early ATM security focused on making the ATMs invulnerable to physical attack; they were effectively safes with dispenser mechanisms. A number of attacks on ATMs resulted, with thieves attempting to steal entire ATMs by ram-raiding.[54] Since the late 1990s, criminal groups operating in Japan have improved on ram-raiding by stealing and using a truck loaded with heavy construction machinery to effectively demolish or uproot an entire ATM and any housing to steal its cash.[55]

Another attack method, plofkraak, is to seal all openings of the ATM with silicone and fill the vault with a combustible gas, or to place an explosive inside, attached to, or near the ATM. The gas or explosive is ignited and the vault is opened or distorted by the force of the resulting explosion, allowing the criminals to break in.[56] This type of theft has occurred in the Netherlands, Belgium, France, Denmark, Germany and Australia.[57][58] These attacks can be prevented by a number of gas explosion prevention devices, also known as gas suppression systems. These systems use an explosive-gas detection sensor to detect explosive gas and neutralise it by releasing a special explosion suppression chemical which changes the composition of the explosive gas and renders it ineffective.

Several attacks in the UK (at least one of which was successful) have involved digging a concealed tunnel under the ATM and cutting through the reinforced base to remove the money.[59]

Modern ATM physical security, like other modern money-handling security, concentrates on denying the use of the money inside the machine to a thief, by using different types of Intelligent Banknote Neutralisation Systems.

A common method is to simply rob the staff filling the machine with money. To avoid this, the schedule for filling them is kept secret, varying and random. The money is often kept in cassettes, which will dye the money if incorrectly opened.

Transactional secrecy and integrity

A Triton brand ATM with a dip style card reader and a triple DES keypad

The security of ATM transactions relies mostly on the integrity of the secure cryptoprocessor: the ATM often uses general commodity components that sometimes are not considered to be "trusted systems".

Encryption of personal information, required by law in many jurisdictions, is used to prevent fraud. Sensitive data in ATM transactions are usually encrypted with DES, but transaction processors now usually require the use of Triple DES.[60] Remote Key Loading techniques may be used to ensure the secrecy of the initialisation of the encryption keys in the ATM. Message Authentication Code (MAC) or Partial MAC may also be used to ensure messages have not been tampered with while in transit between the ATM and the financial network.
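
The MAC idea can be sketched briefly. Note this is illustrative only: ATM networks historically use DES/3DES-based MACs (e.g. ISO 9797-1), whereas this sketch uses HMAC-SHA-256 because it ships with the Python standard library, and the key and message below are hypothetical.

```python
import hashlib
import hmac

def mac_tag(key: bytes, message: bytes) -> str:
    """Tag a terminal-to-host message so tampering is detectable.

    Stand-in construction: HMAC-SHA-256 instead of a DES-based MAC.
    """
    return hmac.new(key, message, hashlib.sha256).hexdigest()

key = b"hypothetical-shared-mac-key"
msg = b"0200|amount=000000010000"
tag = mac_tag(key, msg)

# The host recomputes the tag over the received bytes; any change
# to the message (or a wrong key) produces a different tag.
assert hmac.compare_digest(tag, mac_tag(key, msg))
assert tag != mac_tag(key, msg + b"0")
```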

Customer identity integrity

A BTMU ATM with a palm scanner (to the right of the screen)

There have also been a number of incidents of fraud by man-in-the-middle attacks, where criminals have attached fake keypads or card readers to existing machines. These have then been used to record customers' PINs and bank card information in order to gain unauthorised access to their accounts. Various ATM manufacturers have put in place countermeasures to protect the equipment they manufacture from these threats.[61][62]

Alternative methods to verify cardholder identities have been tested and deployed in some countries, such as finger and palm vein patterns,[63] iris, and facial recognition technologies. Cheaper mass-produced equipment has been developed and is being installed in machines globally to detect the presence of foreign objects on the front of ATMs; current tests have shown a 99% detection success rate for all types of skimming devices.[64]

Device operation integrity

ATMs that are exposed to the outside must be vandal and weather resistant

Openings on the customer-side of ATMs are often covered by mechanical shutters to prevent tampering with the mechanisms when they are not in use. Alarm sensors are placed inside the ATM and in ATM servicing areas to alert their operators when doors have been opened by unauthorised personnel.

To protect against hackers, ATMs have a built-in firewall. Once the firewall has detected malicious attempts to break into the machine remotely, the firewall locks down the machine.[65]

Rules are usually set by the government or ATM operating body that dictate what happens when integrity systems fail. Depending on the jurisdiction, a bank may or may not be liable when an attempt is made to dispense a customer's money from an ATM and the money either gets outside of the ATM's vault, or was exposed in a non-secure fashion, or the bank is unable to determine the state of the money after a failed transaction.[66] Customers have often commented that it is difficult to recover money lost in this way, but this is often complicated by the policies regarding suspicious activities typical of the criminal element.[67]

Customer security

Dunbar Armored ATM Techs watching over ATMs that have been installed in a van

In some countries, multiple security cameras and security guards are a common feature.[68] In the United States, the New York State Comptroller's Office has advised the New York State Department of Banking to have more thorough safety inspections of ATMs in high crime areas.[69]

Consultants of ATM operators assert that the issue of customer security should receive more focus from the banking industry;[70] it has been suggested that efforts are now more concentrated on the preventive measure of deterrent legislation than on the problem of ongoing forced withdrawals.[71]

At least as far back as July 30, 1986, consultants of the industry have advised for the adoption of an emergency PIN system for ATMs, where the user is able to send a silent alarm in response to a threat.[72] Legislative efforts to require an emergency PIN system have appeared in Illinois,[73] Kansas[74] and Georgia,[75] but none have succeeded yet. In January 2009, Senate Bill 1355 was proposed in the Illinois Senate to revisit the issue of the reverse emergency PIN system.[76] The bill is again supported by the police and opposed by the banking lobby.[77]

In 1998, three towns outside Cleveland, Ohio, in response to an ATM crime wave, adopted ATM Consumer Security Legislation requiring that an emergency telephone number switch be installed at all outside ATMs within their jurisdiction. In the wake of an ATM murder in Sharon Hill, Pennsylvania, the city council of Sharon Hill passed an ATM Consumer Security Bill as well. As of July 2009, ATM Consumer Security Legislation is pending in New York, New Jersey, and Washington D.C.

In China and elsewhere, many efforts to promote security have been made. On-premises ATMs are often located inside the bank's lobby, which may be accessible 24 hours a day. These lobbies have extensive security camera coverage, a courtesy telephone for consulting with the bank staff, and a security guard on the premises. Bank lobbies that are not guarded 24 hours a day may also have secure doors that can only be opened from outside by swiping the bank card against a wall-mounted scanner, allowing the bank to identify which card enters the building. Most ATMs will also display on-screen safety warnings and may also be fitted with convex mirrors above the display, allowing the user to see what is happening behind them.

As of 2013, the only claim available about the extent of ATM-connected homicides is that they range from 500 to 1,000 per year in the US, covering only cases in which the victim had an ATM card and the card was used by the killer after the known time of death.[78]

Uses

Two NCR Personas 84 ATMs at a bank in Jersey dispensing two types of pound sterling banknotes: Bank of England on the left, and States of Jersey on the right

Although ATMs were originally developed as just cash dispensers, they have evolved to include many other bank-related functions:

Paying routine bills, fees, and taxes (utilities, phone bills, social security, legal fees, taxes, etc.)

Printing bank statements

Updating passbooks

Cash advances

Cheque Processing Module

Paying (in full or partially) the credit balance on a card linked to a specific current account.

Transferring money between linked accounts (such as transferring between checking and savings accounts)

Deposit currency recognition, acceptance, and recycling[79][80]

In some countries, especially those which benefit from a fully integrated cross-bank ATM network (e.g. Multibanco in Portugal), ATMs include many functions which are not directly related to the management of one's own bank account, such as:

Gold vending ATM in New York City

Loading monetary value into stored value cards

Adding pre-paid cell phone / mobile phone credit.

Purchasing

Postage stamps.

Lottery tickets

Train tickets

Concert tickets

Movie tickets

Shopping mall gift certificates.

Gold[81]

Donating to charities[82]

Increasingly, banks are seeking to use the ATM as a sales device to deliver pre-approved loans and targeted advertising, using products such as ITM (the Intelligent Teller Machine) from Aptra Relate from NCR.[83] ATMs can also act as an advertising channel for other companies.[84]

A South Korean ATM with mobile bank port and bar code reader

However, several different technologies on ATMs have not yet reached worldwide acceptance, such as:

Videoconferencing with human tellers, known as video tellers[85]

Biometrics, where authorisation of transactions is based on the scanning of a customer's fingerprint, iris, face, etc.[86][87][88]

Cheque/Cash Acceptance, where the ATM accepts and recognises cheques and/or currency without using envelopes;[89] expected to grow in importance in the US through Check 21 legislation.

Bar code scanning[90]

On-demand printing of "items of value" (such as movie tickets, traveler's cheques, etc.)

Dispensing additional media (such as phone cards)

Co-ordination of ATMs with mobile phones[91]

Integration with non-banking equipment[92][93]

Games and promotional features[94]

CRM at the ATM

In Canada, ATMs are called guichets automatiques in French and sometimes "bank machines" in English. The Interac shared cash network does not allow for the selling of goods from ATMs due to specific security requirements for PIN entry when buying goods.[95] CIBC machines in Canada are able to top up the minutes on certain pay-as-you-go phones.

Reliability

An ATM running Microsoft Windows that has crashed due to a peripheral component failure

Before an ATM is placed in a public place, it typically has undergone extensive testing with both test money and the backend computer systems that allow it to perform transactions. Banking customers also have come to expect high reliability in their ATMs,[96] which provides incentives to ATM providers to minimise machine and network failures. Financial consequences of incorrect machine operation also provide high degrees of incentive to minimise malfunctions.[97]

ATMs and the supporting electronic financial networks are generally very reliable, with industry benchmarks typically producing 98.25% customer availability for ATMs[98] and up to 99.999% availability for the host systems that manage the networks of ATMs. If ATM networks do go out of service, customers could be left without the ability to make transactions until the beginning of their bank's next opening hours.
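
Those availability benchmarks translate directly into expected downtime per year; a rough calculation, assuming an average year of 8,766 hours:

```python
HOURS_PER_YEAR = 365.25 * 24  # 8766 hours in an average year

def downtime_hours_per_year(availability: float) -> float:
    """Expected hours out of service per year at a given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR

print(round(downtime_hours_per_year(0.9825), 1))        # ~153.4 hours (ATM)
print(round(downtime_hours_per_year(0.99999) * 60, 1))  # ~5.3 minutes (host)
```

So a 98.25%-available ATM can be expected to be out of service for roughly six days a year, while a "five nines" host is down only minutes a year.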

This said, not all errors are to the detriment of customers; there have been cases of machines giving out money without debiting the account, or giving out higher-value notes as a result of banknotes of the incorrect denomination being loaded in the money cassettes.[99] The result of receiving too much money may be influenced by the card holder agreement in place between the customer and the bank.[100][101]

Errors that can occur may be mechanical (such as card transport mechanisms; keypads; hard disk failures; envelope deposit mechanisms); software (such as operating system; device driver; application); communications; or purely down to operator error.

To aid in reliability, some ATMs print each transaction to a roll-paper journal that is stored inside the ATM, which allows both the users of the ATMs and the related financial institutions to settle things based on the records in the journal in case there is a dispute. In some cases, transactions are posted to an electronic journal to remove the cost of supplying journal paper to the ATM and for more convenient searching of data.

Improper money checking can cause the possibility of a customer receiving counterfeit banknotes from an ATM. While bank personnel are generally better trained at spotting and removing counterfeit cash,[102][103] the resulting ATM money supplies used by banks provide no guarantee of proper banknotes, as the Federal Criminal Police Office of Germany has confirmed that there are regularly incidents of false banknotes having been dispensed through bank ATMs.[104] Some ATMs may be stocked and wholly owned by outside companies, which can further complicate this problem. Bill validation technology can be used by ATM providers to help ensure the authenticity of the cash before it is stocked in an ATM; ATMs that have cash recycling capabilities include this capability.[105]

Fraud

ATM lineup

Some ATMs may put up warning messages to customers to be vigilant of possible tampering.

Banknotes from a cash machine robbery made unusable with red paint.

As with any device containing objects of value, ATMs and the systems they depend on to function are the targets of fraud. Fraud against ATMs and people's attempts to use them takes several forms.

The first known instance of a fake ATM was installed at a shopping mall in Manchester, Connecticut, in 1993. By modifying the inner workings of a Fujitsu model 7020 ATM, a criminal gang known as the Bucklands Boys was able to steal information from cards inserted into the machine by customers.[106]

WAVY-TV reported an incident in Virginia Beach in September 2006 where a hacker who had probably obtained a factory-default administrator password for a gas station's white-label ATM caused the unit to assume it was loaded with US$5 bills instead of $20s, enabling himself—and many subsequent customers—to walk away with four times the money they wanted to withdraw.[107] This type of scam was featured on the TV series The Real Hustle.

ATM behavior can change during what is called "stand-in" time, where the bank's cash dispensing network is unable to access databases that contain account information (possibly for database maintenance). In order to give customers access to cash, customers may be allowed to withdraw cash up to a certain amount that may be less than their usual daily withdrawal limit, but may still exceed the amount of available money in their accounts, which could result in fraud if the customers intentionally withdraw more money than they had in their accounts.[108]

Card fraud

In an attempt to prevent criminals from shoulder surfing the customer's personal identification number (PIN), some banks draw privacy areas on the floor.

For a low-tech form of fraud, the easiest is to simply steal a customer's card along with its PIN. A later variant of this approach is to trap the card inside of the ATM's card reader with a device often referred to as a Lebanese loop. When the customer gets frustrated by not getting the card back and walks away from the machine, the criminal is able to remove the card and withdraw cash from the customer's account, using the card and its PIN.

This type of ATM fraud has spread globally. Although somewhat replaced in terms of volume by ATM skimming incidents, a re-emergence of card trapping has been noticed in regions such as Europe, where EMV chip and PIN cards have increased in circulation.[109]

Another simple form of fraud involves attempting to get the customer's bank to issue a new card and its PIN and stealing them from their mail.[110]

By contrast, a newer high-tech method of operating, sometimes called card skimming or card cloning, involves the installation of a magnetic card reader over the real ATM's card slot and the use of a wireless surveillance camera, a modified digital camera, or a false PIN keypad to observe the user's PIN. Card data is then cloned into a duplicate card and the criminal attempts a standard cash withdrawal. The availability of low-cost commodity wireless cameras, keypads, card readers, and card writers has made it a relatively simple form of fraud, with comparatively low risk to the fraudsters.[111]

In an attempt to stop these practices, countermeasures against card cloning have been developed by the banking industry, in particular by the use of smart cards which cannot easily be copied or spoofed by unauthenticated devices, and by attempting to make the outside of their ATMs tamper-evident. Older chip-card security systems include the French Carte Bleue, Visa Cash, Mondex, Blue from American Express[112] and EMV '96 or EMV 3.11. The most actively developed form of smart card security in the industry today is known as EMV 2000 or EMV 4.x.

EMV is widely used in the UK (Chip and PIN) and other parts of Europe, but where it is not available, ATMs must fall back to using the easy-to-copy magnetic strip to perform transactions. This fallback behaviour can be exploited.[113] However, the fallback option has been removed on the ATMs of some UK banks, meaning that if the chip is not read, the transaction is declined.


Card cloning and skimming can be detected by the implementation of magnetic card reader heads and firmware that can read a signature embedded in all magnetic strips during the card production process. This signature, known as a "MagnePrint" or "BluPrint", can be used in conjunction with common two-factor authentication schemes used in ATM, debit/retail point-of-sale and prepaid card applications.

The concept and various methods of copying the contents of an ATM card's magnetic strip onto a duplicate card to access other people's financial information were well known in the hacking communities by late 1990.[114]

In 1996, Andrew Stone, a computer security consultant from Hampshire in the UK, was convicted of stealing more than £1 million by pointing high-definition video cameras at ATMs from a considerable distance and recording the card numbers, expiry dates, and other details from the embossed detail on the ATM cards, along with video footage of the PINs being entered. After getting all the information from the videotapes, he was able to produce clone cards which not only allowed him to withdraw the full daily limit for each account, but also allowed him to sidestep withdrawal limits by using multiple copied cards. In court, it was shown that he could withdraw as much as £10,000 per hour by using this method. Stone was sentenced to five years and six months in prison.[115]

In February 2009, a group of criminals used counterfeit ATM cards to steal $9 million from 130 ATMs in 49 cities around the world, all within a period of 30 minutes.[116]

Related devices

A talking ATM is a type of ATM that provides audible instructions so that people who cannot read an ATM screen can independently use the machine, effectively eliminating the need for assistance from an external, potentially malevolent source.


All audible information is delivered privately through a standard headphone jack on the face of the machine. Alternatively, some banks such as Nordea and Swedbank use a built-in external speaker which may be invoked by pressing a talk button on the keypad.[117] Information is delivered to the customer either through pre-recorded sound files or via text-to-speech synthesis.

A postal interactive kiosk may also share many of the same components as an ATM (including a vault), but only dispenses items related to postage.[118][119]

A scrip cash dispenser may share many of the same components as an ATM, but lacks the ability to dispense physical cash and consequently requires no vault. Instead, the customer requests a withdrawal transaction from the machine, which prints a receipt. The customer then takes this receipt to a nearby sales clerk, who exchanges it for cash from the till.[120]

A teller assist unit (TAU) may also share many of the same components as an ATM (including a vault), but is distinct in that it is designed to be operated solely by trained personnel rather than the general public, does not integrate directly into interbank networks, and is usually controlled by a computer that is not directly integrated into the overall construction of the unit.

A web ATM is an online interface for ATM card banking that uses a smart card reader. All the usual ATM functions are available except for withdrawing cash. Most banks in Taiwan provide these online ATM services.[121][122]

In popular culture

One of the banking innovations that Arthur Hailey mentioned in his 1975 bestselling novel The Moneychangers is Docutel, an automated teller machine (Hailey 1975, 308),[123] based on real technology that was issued a patent in 1974 in the United States.

In the novel, Jill Peacock, a journalist, interviewed First Mercantile American Bank executive VP Alexander Vandervoort in a suburban shopping plaza where the bank had installed the first two stainless-steel Docutel automatic tellers. Vandervoort, whose clothes looked like they were from the "fashion section of Esquire", was not at all like the classical solemn, cautious banker in a double-breasted, dark blue suit. Peacock compared him to the new ATMs, which embodied modern banking.[123] In GTA Online, players can use ATMs to store their collected money in their bank account.

For other uses, see Password (disambiguation).

The sign in form for the English Wikipedia, which requests a username and password

A password is a word or string of characters used for user authentication, to prove identity or approve access to a resource (for example, an access code is a type of password). A password should be kept secret from those not allowed access.

The use of passwords is known to be ancient. Sentries would challenge those wishing to enter an area to supply a password or watchword, and would only allow a person or group to pass if they knew it. In modern times, user names and passwords are commonly used by people during a log-in process that controls access to protected computer operating systems, mobile phones, cable TV decoders, automated teller machines (ATMs), and so on. A typical computer user has passwords for many purposes: logging into accounts, retrieving e-mail, accessing applications, databases, networks, web sites, and even reading the morning newspaper online.

Despite the name, there is no need for passwords to be actual words; indeed, passwords which are not actual words may be harder to guess, a desirable property. Some passwords are formed from multiple words and may more accurately be called a passphrase. The terms passcode and passkey are sometimes used when the secret information is purely numeric, such as the personal identification number (PIN) commonly used for ATM access. Passwords are generally short enough to be easily memorized and typed.

Most organizations specify a password policy that sets requirements for the composition and usage of passwords, typically dictating minimum length, required categories (e.g. upper and lower case, numbers, and special characters), prohibited elements (e.g. own name, date of birth, address, telephone number). Some governments have national authentication frameworks[1] that define requirements for user authentication to government services, including requirements for passwords.

Contents

1 Choosing a secure and memorable password

2 Factors in the security of a password system

2.1 Rate at which an attacker can try guessed passwords

2.2 Limits on the number of password guesses

2.3 Form of stored passwords

2.4 Methods of verifying a password over a network

2.4.1 Simple transmission of the password

2.4.2 Transmission through encrypted channels

2.4.3 Hash-based challenge-response methods

2.4.4 Zero-knowledge password proofs

2.5 Procedures for changing passwords


2.6 Password longevity

2.7 Number of users per password

2.8 Password security architecture

2.9 Password reuse

2.10 Writing down passwords on paper

2.11 After death

3 Password cracking

3.1 Incidents

4 Alternatives to passwords for authentication

5 "The Password is dead"

6 Website password systems

7 History of passwords

8 See also

9 References

10 External links

Choosing a secure and memorable password

Generally, the easier a password is for the owner to remember, the easier it will be for an attacker to guess.[2] However, passwords which are difficult to remember may also reduce the security of a system because (a) users might need to write down or electronically store the password, (b) users will need frequent password resets, and (c) users are more likely to re-use the same password. Similarly, the more stringent the requirements for password strength, e.g. "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system.[3] Others argue that longer passwords provide more security (e.g., entropy) than shorter passwords with a wide variety of characters.[4]


In The Memorability and Security of Passwords,[5] Jeff Yan et al. examine the effect of advice given to users about choosing a good password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords. Combining two or more unrelated words is another good method, but a single dictionary word is not. Having a personally designed algorithm for generating obscure passwords is another good method.

However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalises one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1', substitutions which are well known to attackers. Similarly, typing the password one keyboard row higher is a common trick known to attackers.[6]

A method to memorize a complex password is to remember a sentence like 'This year I go to Italy on Friday July 6!' and use the first characters as the actual password, in this case 'TyIgtIoFJ6!'.
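The first-character method can be sketched in a few lines of Python. The helper name and the rule for keeping trailing punctuation are illustrative assumptions, chosen so the output matches the article's example:

```python
def first_characters(sentence: str) -> str:
    """Build a password from the first character of each word,
    keeping any trailing punctuation (so '6!' contributes '6!')."""
    out = []
    for word in sentence.split():
        out.append(word[0])
        # Keep trailing punctuation, as in the example above.
        if len(word) > 1 and not word[-1].isalnum():
            out.append(word[-1])
    return "".join(out)

print(first_characters("This year I go to Italy on Friday July 6!"))
# → TyIgtIoFJ6!
```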

In 2013, Google released a list of the most common password types, all of which are considered insecure because they are too easy to guess (especially after researching an individual on social media):[7]

The name of a pet, child, family member, or significant other


Anniversary dates and birthdays

Birthplace

Name of a favorite holiday

Something related to a favorite sports team

The word "password"

Factors in the security of a password system

The security of a password-protected system depends on several factors. The overall system must, of course, be designed for sound security, with protection against computer viruses, man-in-the-middle attacks, and the like. Physical security issues are also a concern, from deterring shoulder surfing to more sophisticated physical threats such as video cameras and keyboard sniffers. And, of course, passwords should be chosen so that they are hard for an attacker to guess and hard for an attacker to discover using any (and all) of the available automatic attack schemes. See password strength and computer security.

Nowadays, it is common practice for computer systems to hide passwords as they are typed. The purpose of this measure is to prevent bystanders from reading the password. However, some argue that this practice may lead to mistakes and stress, encouraging users to choose weak passwords. As an alternative, users should have the option to show or hide passwords as they type them.[8]

Effective access control provisions may force extreme measures on criminals seeking to acquire a password or biometric token.[9] Less extreme measures include extortion, rubber-hose cryptanalysis, and side-channel attacks.


Here are some specific password management issues that must be considered when thinking about, choosing, and handling a password.

Rate at which an attacker can try guessed passwords

The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g., three) of failed password entry attempts. In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple passwords, if they have been well chosen and are not easily guessed.[10]

Many systems store a cryptographic hash of the password. If an attacker gets access to the file of hashed passwords, guessing can be done off-line, rapidly testing candidate passwords against the true password's hash value. In the example of a web server, an online attacker can guess only at the rate at which the server will respond, while an off-line attacker (who gains access to the file) can guess at a rate limited only by the hardware that is brought to bear.
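The speed difference matters because an off-line attacker needs only a copy of the stored hashes. As a minimal illustration, the sketch below deliberately uses an unsalted SHA-256 hash, which the storage-format discussion later explains is not adequate for real password storage; the password and candidate list are made up:

```python
import hashlib

# What the server stores (illustrative; unsalted on purpose to show the attack).
stored_hash = hashlib.sha256(b"sunshine").hexdigest()

# Off-line guessing: hash each candidate and compare, as fast as hardware allows.
candidates = ["letmein", "password1", "sunshine", "qwerty"]
for guess in candidates:
    if hashlib.sha256(guess.encode()).hexdigest() == stored_hash:
        print("recovered:", guess)  # → recovered: sunshine
        break
```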

Passwords that are used to generate cryptographic keys (e.g., for disk encryption or Wi-Fi security) can also be subjected to high-rate guessing. Lists of common passwords are widely available and can make password attacks very efficient. (See Password cracking.) Security in such situations depends on using passwords or passphrases of adequate complexity, making such an attack computationally infeasible for the attacker. Some systems, such as PGP and Wi-Fi WPA, apply a computation-intensive hash to the password to slow such attacks. See key stretching.

Limits on the number of password guesses

An alternative to limiting the rate at which an attacker can make guesses on a password is to limit the total number of guesses that can be made. The password can be disabled, requiring a reset, after a small number of consecutive bad guesses (say 5), and the user may be required to change the password after a larger cumulative number of bad guesses (say 30), to prevent an attacker from making an arbitrarily large number of bad guesses by interspersing them between good guesses made by the legitimate password owner.[11] The username associated with the password can be changed to counter a denial-of-service attack.
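The two counters described above can be sketched as follows; the class and method names are hypothetical, and the thresholds (5 consecutive, 30 cumulative) are taken directly from the example figures in the text:

```python
class GuessLimiter:
    """Hypothetical sketch: disable the password after 5 consecutive bad
    guesses, and force a password change after 30 cumulative bad guesses."""
    CONSECUTIVE_LIMIT = 5
    CUMULATIVE_LIMIT = 30

    def __init__(self) -> None:
        self.consecutive = 0
        self.cumulative = 0

    def record_failure(self) -> str:
        self.consecutive += 1
        self.cumulative += 1
        if self.cumulative >= self.CUMULATIVE_LIMIT:
            return "force_password_change"
        if self.consecutive >= self.CONSECUTIVE_LIMIT:
            return "account_disabled"
        return "ok"

    def record_success(self) -> None:
        # A good login resets only the consecutive counter; the cumulative
        # counter still catches bad guesses interspersed with good logins.
        self.consecutive = 0
```

The cumulative counter is the interesting part: without it, an attacker could make four bad guesses, wait for the legitimate owner to log in, and repeat indefinitely.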

Form of stored passwords

Some computer systems store user passwords as plaintext, against which to compare user log on attempts. If an attacker gains access to such an internal password store, all passwords—and so all user accounts—will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well.

More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. The most secure systems do not store passwords at all, but a one-way derivation, such as a polynomial, modulus, or an advanced hash function.[4] Roger Needham invented the now common approach of storing only a "hashed" form of the plaintext password. When a user types in a password on such a system, the password-handling software runs it through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, in many implementations, another value known as a salt. A salt prevents attackers from easily building a list of hash values for common passwords and prevents password-cracking efforts from scaling across all users.[12] MD5 and SHA1 are frequently used cryptographic hash functions, but they are not recommended for password hashing unless they are used as part of a larger construction such as PBKDF2.[13]
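A minimal sketch of salted, stretched password storage along these lines, using the PBKDF2 function in Python's standard library; the function names and iteration count are illustrative choices, not a vetted policy:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    """Derive a salted, stretched verifier; the password itself is never stored."""
    salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, rounds, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, rounds, digest))  # True
print(verify_password("password1", salt, rounds, digest))                     # False
```

Because each user gets a random salt, identical passwords produce different verifiers, which is exactly what defeats precomputed hash lists.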


The stored data—sometimes called the "password verifier" or the "password hash"—is often stored in Modular Crypt Format or RFC 2307 hash format, sometimes in the /etc/passwd file or the /etc/shadow file.[14]

The main storage formats for passwords are plaintext, hashed, hashed and salted, and reversibly encrypted.[15] If an attacker gains access to the password file then if it is plaintext no cracking is necessary. If it is hashed but not salted then it is subject to rainbow table attacks (which are more efficient than cracking). If it is reversibly encrypted then if the attacker gets the decryption key along with the file no cracking is necessary, while if he fails to get the key cracking is not possible (guesses cannot be verified without the key). Thus, of the common storage formats for passwords only when passwords have been salted and hashed is cracking both necessary and possible.[15]

If a cryptographic hash function is well designed, it is computationally infeasible to reverse the function to recover a plaintext password. An attacker can, however, use widely available tools to attempt to guess the passwords. These tools work by hashing possible passwords and comparing the result of each guess to the actual password hashes. If the attacker finds a match, they know that their guess is the actual password for the associated user. Password-cracking tools can operate by brute force (i.e. trying every possible combination of characters) or by hashing every word from a list; large lists of possible passwords in many languages are widely available on the Internet.[4] The existence of password-cracking tools allows attackers to easily recover poorly chosen passwords. In particular, attackers can quickly recover passwords that are short, dictionary words, simple variations on dictionary words, or that use easily guessable patterns.[16] A modified version of the DES algorithm was used as the basis for the password hashing algorithm in early Unix systems.[17] The crypt algorithm used a 12-bit salt value so that each user's hash was unique, and iterated the DES algorithm 25 times in order to make the hash function slower, both measures intended to frustrate automated guessing attacks.[17] The user's password was used as a key to encrypt a fixed value. More recent Unix or Unix-like systems (e.g., Linux or the various BSD systems) use more secure password hashing algorithms such as PBKDF2, bcrypt, and scrypt, which have large salts and an adjustable cost or number of iterations.[18] A poorly designed hash function can make attacks feasible even if a strong password is chosen. See LM hash for a widely deployed, and insecure, example.[19]

Methods of verifying a password over a network

Simple transmission of the password

Passwords are vulnerable to interception (i.e., "snooping") while being transmitted to the authenticating machine or person. If the password is carried as electrical signals on unsecured physical wiring between the user access point and the central system controlling the password database, it is subject to snooping by wiretapping methods. If it is carried as packetized data over the Internet, anyone able to watch the packets containing the logon information can snoop with a very low probability of detection.

Email is sometimes used to distribute passwords, but this is generally an insecure method. Since most email is sent as plaintext, a message containing a password is readable without effort during transport by any eavesdropper. Further, the message will be stored as plaintext on at least two computers: the sender's and the recipient's. If it passes through intermediate systems during its travels, it will probably be stored there as well, at least for some time, and may be copied to backup, cache, or history files on any of these systems.

Using client-side encryption will only protect transmission from the mail handling system server to the client machine. Previous or subsequent relays of the email will not be protected and the email will probably be stored on multiple computers, certainly on the originating and receiving computers, most often in cleartext.


Transmission through encrypted channels

The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, using cryptographic protection. The most widely used is the Transport Layer Security (TLS, previously called SSL) feature built into most current Internet browsers. Most browsers alert the user of a TLS/SSL protected exchange with a server by displaying a closed lock icon, or some other sign, when TLS is in use. There are several other techniques in use; see cryptography.

Hash-based challenge-response methods

Unfortunately, there is a conflict between stored hashed-passwords and hash-based challenge-response authentication; the latter requires a client to prove to a server that they know what the shared secret (i.e., password) is, and to do this, the server must be able to obtain the shared secret from its stored form. On many systems (including Unix-type systems) doing remote authentication, the shared secret usually becomes the hashed form and has the serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as a shared secret, an attacker does not need the original password to authenticate remotely; they only need the hash.
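The pattern can be sketched with HMAC over a server-supplied nonce. Note how the stored value itself acts as the shared secret, which is exactly the limitation described above: stealing the server's stored hash is as good as stealing the password. All names here are illustrative:

```python
import hashlib
import hmac
import os

# What the server stores; on the systems described above this hashed form
# itself becomes the shared secret.
shared_secret = hashlib.sha256(b"hunter2").digest()

def respond(secret: bytes, challenge: bytes) -> bytes:
    """Answer a challenge by keying HMAC with the shared secret."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)                    # server sends a fresh nonce
response = respond(shared_secret, challenge)  # client proves it knows the secret
# Server recomputes and compares; a stolen hash answers just as well as
# the original password would.
assert hmac.compare_digest(response, respond(shared_secret, challenge))
```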

Zero-knowledge password proofs

Rather than transmitting a password, or transmitting the hash of the password, password-authenticated key agreement systems can perform a zero-knowledge password proof, which proves knowledge of the password without exposing it.

Moving a step further, augmented systems for password-authenticated key agreement (e.g., AMP, B-SPEKE, PAK-Z, SRP-6) avoid both the conflict and limitation of hash-based methods. An augmented system allows a client to prove knowledge of the password to a server, where the server holds only a (not exactly) hashed password, and where the unhashed password is required to gain access.

Procedures for changing passwords

Usually, a system must provide a way to change a password, either because a user believes the current password has been (or might have been) compromised, or as a precautionary measure. If a new password is passed to the system in unencrypted form, security can be lost (e.g., via wiretapping) before the new password can even be installed in the password database. And, of course, if the new password is given to a compromised employee, little is gained. Some web sites include the user-selected password in an unencrypted confirmation e-mail message, with the obvious increased vulnerability.

Identity management systems are increasingly used to automate the issuance of replacements for lost passwords, a feature called self-service password reset. The user's identity is verified by asking questions and comparing the answers to ones previously stored (i.e., when the account was opened).

Some password reset questions ask for personal information that could be found on social media, such as mother's maiden name. As a result, some security experts recommend either making up one's own questions or giving false answers.[20]

Password longevity

"Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g., quarterly, monthly, or even more often). Such policies usually provoke user protest and foot-dragging at best and hostility at worst. There is often an increase in the number of people who note down the password and leave it where it can easily be found, as well as in helpdesk calls to reset forgotten passwords. Users may use simpler passwords or develop variation patterns on a consistent theme to keep their passwords memorable.[21] Because of these issues, there is some debate[22] as to whether password aging is effective. The intended benefit is mainly that a stolen password will be made ineffective if it is reset; however, in many cases, particularly with administrative or "root" accounts, once an attacker has gained access, they can make alterations to the operating system that will allow them future access even after the initial password they used expires. (See rootkit.)

The other, less frequently cited and possibly more valid, reason is that in the event of a long brute-force attack, the password will be invalid by the time it has been cracked. Specifically, in an environment where it is considered important to know the probability of a fraudulent login in order to accept the risk, one can ensure that the total number of possible passwords multiplied by the time taken to try each one (assuming the greatest conceivable computing resources) is much greater than the password lifetime. However, there is no documented evidence that the policy of requiring periodic changes in passwords increases system security.

Password aging may be required because of the nature of the IT systems the password allows access to; if personal data is involved, the EU Data Protection Directive applies. Implementing such a policy, however, requires careful consideration of the relevant human factors. Humans memorize by association, so it is impossible to simply replace one memory with another. Two psychological phenomena interfere with password substitution. "Primacy" describes the tendency for an earlier memory to be retained more strongly than a later one. "Interference" is the tendency of two memories with the same association to conflict. Because of these effects, most users resort to a simple password containing a number that can be incremented each time the password is changed.

Number of users per password


Sometimes a single password controls access to a device, for example, for a network router, or password-protected mobile phone. However, in the case of a computer system, a password is usually stored for each user account, thus making all access traceable (save, of course, in the case of users sharing passwords). A would-be user on most systems must supply a username as well as a password, almost always at account set up time, and periodically thereafter. If the user supplies a password matching the one stored for the supplied username, he or she is permitted further access into the computer system. This is also the case for a cash machine, except that the 'user name' is typically the account number stored on the bank customer's card, and the PIN is usually quite short (4 to 6 digits).

Allotting separate passwords to each user of a system is preferable to having a single password shared by legitimate users of the system, certainly from a security viewpoint. This is partly because users are more willing to tell another person (who may not be authorized) a shared password than one exclusively for their own use. Single passwords are also much less convenient to change because many people need to be told at the same time, and they make removal of a particular user's access more difficult, as for instance on graduation or resignation. Per-user passwords are also essential if users are to be held accountable for their activities, such as making financial transactions or viewing medical records.

Password security architecture

Common techniques used to improve the security of computer systems protected by a password include:

Not displaying the password on the display screen as it is being entered, or obscuring it as it is typed by using asterisks (*) or bullets (•).

Allowing passwords of adequate length. (Some legacy operating systems, including early versions[which?] of Unix and Windows, limited passwords to an 8-character maximum,[23][24][25] reducing security.)

Requiring users to re-enter their password after a period of inactivity (a semi log-off policy).

Enforcing a password policy to increase password strength and security.

Requiring periodic password changes.

Assigning randomly chosen passwords.

Requiring minimum password lengths.[13]

Some systems require characters from various character classes in a password—for example, "must have at least one uppercase and at least one lowercase letter". However, all-lowercase passwords are more secure per keystroke than mixed-capitalization passwords.[26]

Providing an alternative to keyboard entry (e.g., spoken passwords, or biometric passwords).

Requiring more than one authentication system, such as two-factor authentication (something a user has and something the user knows).

Using encrypted tunnels or password-authenticated key agreement to prevent access to transmitted passwords via network attacks.

Limiting the number of allowed failures within a given time period (to prevent repeated password guessing). After the limit is reached, further attempts will fail (including correct password attempts) until the beginning of the next time period. However, this is vulnerable to a form of denial-of-service attack.

Introducing a delay between password submission attempts to slow down automated password guessing programs.

Some of the more stringent policy enforcement measures can pose a risk of alienating users, possibly decreasing security as a result.


Password reuse

It is common practice amongst computer users to reuse the same password on multiple sites. This presents a substantial security risk, since an attacker need only compromise a single site in order to gain access to other sites the victim uses. This problem is exacerbated by also reusing usernames, and by websites requiring email logins, as it makes it easier for an attacker to track a single user across multiple sites. Password reuse can be avoided or minimised by using mnemonic techniques, writing passwords down on paper, or using a password manager.[27]

It has been argued by Redmond researchers Dinei Florencio and Cormac Herley, together with Paul C. van Oorschot of Carleton University, Canada, that password reuse is inevitable, and that users should reuse passwords for low-security websites (which contain little personal data and no financial information, for example) and instead focus their efforts on remembering long, complex passwords for a few important accounts, such as bank accounts.[28] Similar arguments were made by Forbes cybersecurity columnist Joseph Steinberg, who also argued that people should not change passwords as often as many "experts" advise, due to the same limitations of human memory.[21]

Writing down passwords on paper

Historically, many security experts asked people to memorize their passwords: "Never write down a password". More recently, many security experts such as Bruce Schneier recommend that people use passwords that are too complicated to memorize, write them down on paper, and keep them in a wallet.[29][30][31][32][33][34][35]

Password manager software can also store passwords relatively safely, in an encrypted file sealed with a single password.

After death


According to a survey by the University of London, one in ten people are now leaving their passwords in their wills to pass on this important information when they die. One third of people, according to the poll, agree that their password protected data is important enough to pass on in their will.[36]

Password cracking

Main article: Password cracking

Attempting to crack passwords by trying as many possibilities as time and money permit is a brute-force attack. A related method, rather more efficient in most cases, is a dictionary attack. In a dictionary attack, all words in one or more dictionaries are tested. Lists of common passwords are also typically tested.

Password strength is the likelihood that a password cannot be guessed or discovered, and varies with the attack algorithm used. Cryptologists and computer scientists often refer to the strength or 'hardness' in terms of entropy.[4]
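For a uniformly random password, entropy is simply the length multiplied by log2 of the alphabet size. A small sketch (the helper name is illustrative):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Entropy of a uniformly random password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

# 8 random lowercase letters vs. 8 characters drawn from all 95 printable ASCII
print(round(entropy_bits(26, 8), 1))  # → 37.6
print(round(entropy_bits(95, 8), 1))  # → 52.6
```

Note this applies only to randomly chosen passwords; human-chosen passwords have far less entropy than the alphabet and length alone suggest.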

Passwords easily discovered are termed weak or vulnerable; passwords very difficult or impossible to discover are considered strong. There are several programs available for password attack (or even auditing and recovery by systems personnel), such as L0phtCrack, John the Ripper, and Cain, some of which exploit password design vulnerabilities (as found in the Microsoft LANManager system) to increase efficiency. These programs are sometimes used by system administrators to detect weak passwords proposed by users.

Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords are readily guessed automatically. For example, Columbia University found 22% of user passwords could be recovered with little effort.[37] According to Bruce Schneier, examining data from a 2006 phishing attack, 55% of MySpace passwords would be crackable in 8 hours using a commercially available Password Recovery Toolkit capable of testing 200,000 passwords per second in 2006.[38] He also reported that the single most common password was password1, confirming yet again the general lack of informed care in choosing passwords among users. (He nevertheless maintained, based on these data, that the general quality of passwords has improved over the years; for example, average length was up to eight characters from under seven in previous surveys, and less than 4% were dictionary words.[39])

Incidents

On July 16, 1998, CERT reported an incident where an attacker had found 186,126 encrypted passwords. At the time the attacker was discovered, 47,642 passwords had already been cracked.[40]

In September 2001, after the deaths of 960 New York employees in the September 11 attacks, financial services firm Cantor Fitzgerald, through Microsoft, broke the passwords of deceased employees to gain access to files needed for servicing client accounts.[41] Technicians used brute-force attacks, and interviewers contacted families to gather personalized information that might reduce the search time for weaker passwords.[41]

In December 2009, a major password breach of the Rockyou.com website occurred that led to the release of 32 million passwords. The hacker then leaked the full list of the 32 million passwords (with no other identifiable information) to the Internet. Passwords were stored in cleartext in the database and were extracted through a SQL injection vulnerability. The Imperva Application Defense Center (ADC) did an analysis on the strength of the passwords.[42]
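
The RockYou breach illustrates why passwords should never be stored in cleartext. A minimal sketch of the standard alternative, salted key stretching using Python's standard library (the iteration count and sample password are illustrative, not a recommendation for production use):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def store_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash for storage; the cleartext is never kept."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = store_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("letmein", salt, digest))                       # False
```

With this scheme, a database leak yields only salts and slow-to-compute digests, so each guess costs the attacker 100,000 hash iterations per account instead of one fast lookup.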

In June 2011, NATO (North Atlantic Treaty Organization) experienced a security breach that led to the public release of first and last names, usernames, and passwords for more than 11,000 registered users of their e-bookshop. The data was leaked as part of Operation AntiSec, a movement that includes Anonymous, LulzSec, as well as other hacking groups and individuals. The aim of AntiSec is to expose personal, sensitive, and restricted information to the world, using any means necessary.[43]

On July 11, 2011, Booz Allen Hamilton, a consulting firm that does work for the Pentagon, had its servers hacked by Anonymous and leaked the same day. "The leak, dubbed 'Military Meltdown Monday,' includes 90,000 logins of military personnel—including personnel from USCENTCOM, SOCOM, the Marine Corps, various Air Force facilities, Homeland Security, State Department staff, and what looks like private sector contractors."[44] These leaked passwords were hashed with SHA-1, and were later decrypted and analyzed by the ADC team at Imperva, revealing that even military personnel look for shortcuts and ways around the password requirements.[45]

Alternatives to passwords for authentication

The numerous ways in which permanent or semi-permanent passwords can be compromised have prompted the development of other techniques. Unfortunately, some are inadequate in practice, and in any case few have become universally available for users seeking a more secure alternative.[citation needed] A 2012 paper[46] examines why passwords have proved so hard to supplant (despite numerous predictions that they would soon be a thing of the past[47]); in examining thirty representative proposed replacements with respect to security, usability and deployability, it concludes that "none even retains the full set of benefits that legacy passwords already provide."

Single-use passwords. Having passwords which are only valid once makes many potential attacks ineffective. Most users find single-use passwords extremely inconvenient. They have, however, been widely implemented in personal online banking, where they are known as Transaction Authentication Numbers (TANs). As most home users only perform a small number of transactions each week, the single-use issue has not led to intolerable customer dissatisfaction in this case.

Time-synchronized one-time passwords are similar in some ways to single-use passwords, but the value to be entered is displayed on a small (generally pocketable) item and changes every minute or so.
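
A scheme of this kind can be sketched with Python's standard library in the style of RFC 6238 (the HMAC-SHA1 variant); the secret shown is the RFC's published test secret, used here purely for illustration:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Time-based one-time password in the style of RFC 6238 (HMAC-SHA1)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time // step)       # index of the current time window
    msg = struct.pack(">Q", counter)      # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 s.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the code depends only on a shared secret and the current time window, token and server compute the same value independently, and an intercepted code becomes useless once the window (here 30 seconds) rolls over.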

PassWindow one-time passwords are used as single-use passwords, but the dynamic characters to be entered are visible only when a user superimposes a unique printed visual key over a server-generated challenge image shown on the user's screen.

Access controls based on public key cryptography, e.g. ssh. The necessary keys are usually too large to memorize (but see proposal Passmaze)[48] and must be stored on a local computer, security token or portable memory device, such as a USB flash drive or even a floppy disk.

Biometric methods promise authentication based on unalterable personal characteristics, but currently (2008) have high error rates and require additional hardware to scan, for example, fingerprints, irises, etc. They have proven easy to spoof in some famous incidents testing commercially available systems, for example, the gummy fingerprint spoof demonstration,[49] and, because these characteristics are unalterable, they cannot be changed if compromised; this is a highly important consideration in access control as a compromised access token is necessarily insecure.

Single sign-on technology is claimed to eliminate the need for having multiple passwords. Such schemes do not relieve users and administrators from choosing reasonable single passwords, nor system designers or administrators from ensuring that private access control information passed among systems enabling single sign-on is secure against attack. As yet, no satisfactory standard has been developed.

Envaulting technology is a password-free way to secure data on e.g. removable storage devices such as USB flash drives. Instead of user passwords, access control is based on the user's access to a network resource.

Non-text-based passwords, such as graphical passwords or mouse-movement based passwords.[50] Graphical passwords are an alternative means of authentication for log-in intended to be used in place of conventional passwords; they use images, graphics or colours instead of letters, digits or special characters. One system requires users to select a series of faces as a password, utilizing the human brain's ability to recall faces easily.[51] In some implementations the user is required to pick from a series of images in the correct sequence in order to gain access.[52] Another graphical password solution creates a one-time password using a randomly generated grid of images. Each time the user is required to authenticate, they look for the images that fit their pre-chosen categories and enter the randomly generated alphanumeric characters that appear in the images to form the one-time password.[53][54] So far, graphical passwords are promising, but are not widely used. Studies have been made to determine their usability in the real world. While some believe that graphical passwords would be harder to crack, others suggest that people will be just as likely to pick common images or sequences as they are to pick common passwords.[citation needed]

2D Key (2-Dimensional Key)[55] is a 2D matrix-like key input method supporting key styles such as multiline passphrase, crossword, and ASCII/Unicode art, with optional textual semantic noises, to create large passwords/keys beyond 128 bits. It aims to realize MePKC (Memorizable Public-Key Cryptography)[56] using a fully memorizable private key, building on current private-key management technologies such as encrypted private key, split private key, and roaming private key.

Cognitive passwords use question and answer cue/response pairs to verify identity.

"The Password is dead"

That "the password is dead" is a recurring idea in computer security. It often accompanies arguments that the replacement of passwords by a more secure means of authentication is both necessary and imminent. This claim has been made by numerous people at least since 2004. Notably, Bill Gates, speaking at the 2004 RSA Conference, predicted the demise of passwords, saying "they just don't meet the challenge for anything you really want to secure."[47] In 2011 IBM predicted that, within five years, "You will never need a password again."[57] Matt Honan, a journalist at Wired, who was the victim of a hacking incident, in 2012 wrote "The age of the password has come to an end."[58] Heather Adkins, manager of Information Security at Google, in 2013 said that "passwords are done at Google."[59] Eric Grosse, VP of security engineering at Google, states that "passwords and simple bearer tokens, such as cookies, are no longer sufficient to keep users safe."[60] Christopher Mims, writing in the Wall Street Journal, said the password "is finally dying" and predicted their replacement by device-based authentication.[61] Avivah Litan of Gartner said in 2014 "Passwords were dead a few years ago. Now they are more than dead."[62] The reasons given often include reference to the usability as well as security problems of passwords.

The claim that "the password is dead" is often used by advocates of alternatives to passwords, such as biometrics, two-factor authentication or single sign-on. Many initiatives have been launched with the explicit goal of eliminating passwords. These include Microsoft's Cardspace, the Higgins project, the Liberty Alliance, NSTIC, the FIDO Alliance and various Identity 2.0 proposals. Jeremy Grant, head of the NSTIC initiative (the US Dept. of Commerce National Strategy for Trusted Identities in Cyberspace), declared "Passwords are a disaster from a security perspective, we want to shoot them dead."[63] The FIDO Alliance promises a "passwordless experience" in its 2015 specification document.[64]

In spite of these predictions and efforts to replace them, passwords still appear to be the dominant form of authentication on the web. In "The Persistence of Passwords," Cormac Herley and Paul van Oorschot suggest that every effort should be made to end the "spectacularly incorrect assumption" that passwords are dead.[65] They argue that "no other single technology matches their combination of cost, immediacy and convenience" and that "passwords are themselves the best fit for many of the scenarios in which they are currently used."

Website password systems

Passwords are used on websites to authenticate users and are usually maintained on the Web server, meaning the browser on a remote system sends a password to the server (by HTTP POST), the server checks the password and sends back the relevant content (or an access denied message). This process eliminates the possibility of local reverse engineering as the code used to authenticate the password does not reside on the local machine.
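
The server-side half of this flow can be sketched as follows; the form field names, user store, and PBKDF2 parameters are invented for illustration, not a description of any particular site:

```python
import hashlib
import os
from urllib.parse import parse_qs

# Hypothetical server-side store: username -> (salt, PBKDF2 digest).
USERS = {}

def register(username: str, password: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 50_000)
    USERS[username] = (salt, digest)

def handle_login(post_body: str) -> str:
    """Simulate the server side of a login form POST: check the submitted
    credentials against the stored hash and answer with a status line."""
    form = parse_qs(post_body)
    user = form.get("user", [""])[0]
    password = form.get("password", [""])[0]
    if user in USERS:
        salt, digest = USERS[user]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 50_000)
        if candidate == digest:
            return "200 OK"
    return "403 Forbidden"

register("alice", "hunter2")
print(handle_login("user=alice&password=hunter2"))  # 200 OK
print(handle_login("user=alice&password=guess"))    # 403 Forbidden
```

Note that the check happens entirely on the server: the browser only submits the form body, which is why the transport (see below) must be encrypted.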

Transmission of the password, via the browser, in plaintext means it can be intercepted along its journey to the server. Many web authentication systems use SSL to establish an encrypted session between the browser and the server; this is usually the underlying meaning of claims to have a "secure Web site". This is done automatically by the browser and increases integrity of the session, assuming neither end has been compromised and that the SSL/TLS implementations used are high quality ones.

History of passwords

Passwords or watchwords have been used since ancient times. Polybius describes the system for the distribution of watchwords in the Roman military as follows:

The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword – that is a wooden tablet with the word inscribed on it – takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.[66]

Passwords in military use evolved to include not just a password, but a password and a counterpassword; for example, in the opening days of the Battle of Normandy, paratroopers of the U.S. 101st Airborne Division used a password, "flash", which was presented as a challenge and answered with the correct response, "thunder". The challenge and response were changed every three days. American paratroopers also famously used a device known as a "cricket" on D-Day in place of a password system as a temporarily unique method of identification; one metallic click given by the device in lieu of a password was to be met by two clicks in reply.[67]

Passwords have been used with computers since the earliest days of computing. MIT's CTSS, one of the first time-sharing systems, was introduced in 1961. It had a LOGIN command that requested a user password. "After typing PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password with privacy."[68] In the early 1970s, Robert Morris developed a system of storing login passwords in a hashed form as part of the Unix operating system. The system was based on a simulated Hagelin rotor crypto machine, and first appeared in 6th Edition Unix in 1974. A later version of his algorithm, known as crypt(3), used a 12-bit salt and invoked a modified form of the DES algorithm 25 times to reduce the risk of pre-computed dictionary attacks.[69]
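
The point of the 12-bit salt is arithmetic: a precomputed table must now cover every word-salt combination rather than every word. A quick back-of-the-envelope calculation (the dictionary size is invented for illustration):

```python
# A 12-bit salt gives 2**12 = 4096 distinct salt values, so every
# dictionary word has 4096 possible hashes instead of one.
dictionary_words = 1_000_000   # illustrative dictionary size
salt_values = 2 ** 12          # number of possible 12-bit salts

print(salt_values)                      # 4096
print(dictionary_words * salt_values)   # 4096000000
```

So a one-million-word precomputed table balloons to roughly four billion entries, which in 1974 made precomputation impractical; modern systems use much longer salts for the same reason.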

E-commerce (also written as e-Commerce, eCommerce or similar variants), short for electronic commerce, is trading in products or services using computer networks, such as the Internet. Electronic commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. Modern electronic commerce typically uses the World Wide Web for at least one part of the transaction's life cycle, although it may also use other technologies such as e-mail.

E-commerce businesses may employ some or all of the following:

Online shopping web sites for retail sales direct to consumers

Providing or participating in online marketplaces, which process third-party business-to-consumer or consumer-to-consumer sales

Business-to-business buying and selling

Gathering and using demographic data through web contacts and social media

Business-to-business electronic data interchange

Marketing to prospective and established customers by e-mail or fax (for example, with newsletters)

Engaging in pretail for launching new products and services

Timeline

A timeline for the development of e-commerce:

1971 or 1972: The ARPANET is used to arrange a sale between students at the Stanford Artificial Intelligence Laboratory and the Massachusetts Institute of Technology, later described as "the seminal act of e-commerce" in John Markoff's book What the Dormouse Said.[1]

1979: Michael Aldrich demonstrates the first online shopping system.[2]

1981: Thomson Holidays UK is the first business-to-business online shopping system to be installed.[3]

1982: Minitel was introduced nationwide in France by France Télécom and used for online ordering.

1983: California State Assembly holds first hearing on "electronic commerce" in Volcano, California.[4] Testifying are CPUC, MCI Mail, Prodigy, CompuServe, Volcano Telephone, and Pacific Telesis. (Not permitted to testify is Quantum Technology, later to become AOL.)

1984: Gateshead SIS/Tesco is the first B2C online shopping system,[5] and Mrs Snowball, 72, is the first online home shopper.[6]

1984: In April 1984, CompuServe launches the Electronic Mall in the USA and Canada. It is the first comprehensive electronic commerce service.[7]

1990: Tim Berners-Lee writes the first web browser, WorldWideWeb, using a NeXT computer.[8]

1992: Book Stacks Unlimited in Cleveland opens a commercial sales website (www.books.com) selling books online with credit card processing.

1992: St. Martin's Press publishes J.H. Snider and Terra Ziporyn's Future Shop: How New Technologies Will Change the Way We Shop and What We Buy.[9]

1993: Paget Press releases edition No. 3[10] of the first[citation needed] app store, The Electronic AppWrapper.[11]

1994: Netscape releases the Navigator browser in October under the code name Mozilla. Netscape 1.0 is introduced in late 1994 with SSL encryption that made transactions secure.

1994: Ipswitch IMail Server becomes the first software available online for sale and immediate download via a partnership between Ipswitch, Inc. and OpenMarket.

1994: "Ten Summoner's Tales" by Sting becomes the first secure online purchase.[12]

1995: The US National Science Foundation lifts its former strict prohibition of commercial enterprise on the Internet.[13]

1995: On Thursday 27 April 1995, the purchase of a book by Paul Stanfield, Product Manager for CompuServe UK, from W H Smith's shop within CompuServe's UK Shopping Centre is the first secure transaction on the UK's first national online shopping service. The shopping service at launch featured W H Smith, Tesco, Virgin Megastores/Our Price, Great Universal Stores (GUS), Interflora, Dixons Retail, Past Times, PC World (retailer) and Innovations.

1995: Jeff Bezos launches Amazon.com, and the first commercial-free 24-hour, internet-only radio stations, Radio HK and NetRadio, start broadcasting. eBay is founded by computer programmer Pierre Omidyar as AuctionWeb.

1996: IndiaMART B2B marketplace established in India.

1996: ECPlaza B2B marketplace established in Korea.

1996: The UK e-commerce platform Sellerdeck, formerly Actinic, is established.[citation needed]

1998: Electronic postal stamps can be purchased and downloaded for printing from the Web.[14]

1998: Cbazaar, formerly chennaibazaar.com, India's first B2C eCommerce portal, launched by Rajesh Nahar and Ritesh Katariya.

1999: Alibaba Group is established in China. Business.com sold for US $7.5 million to eCompanies; the domain had been purchased in 1997 for US $149,000. The peer-to-peer filesharing software Napster launches. ATG Stores launches to sell decorative items for the home online.

2000: The dot-com bust.

2001: Alibaba.com achieved profitability in December 2001.

2002: eBay acquires PayPal for $1.5 billion.[15] Niche retail companies Wayfair and NetShops are founded with the concept of selling products through several targeted domains, rather than a central portal.

2003: Amazon.com posts first yearly profit.

2003: Bossgoo B2B marketplace established in China.

2004: DHgate.com, China's first online B2B transaction platform, is established, forcing other B2B sites to move away from the "yellow pages" model.[16]

2007: Business.com acquired by R.H. Donnelley for $345 million.[17]

2009: Zappos.com acquired by Amazon.com for $928 million.[18] Retail Convergence, operator of private sale website RueLaLa.com, acquired by GSI Commerce for $180 million, plus up to $170 million in earn-out payments based on performance through 2012.[19]

2010: Groupon reportedly rejects a $6 billion offer from Google. Instead, the group-buying website went ahead with an IPO on 4 November 2011. It was the largest IPO since Google's.[20][21]

2011: Quidsi.com, parent company of Diapers.com, acquired by Amazon.com for $500 million in cash plus $45 million in debt and other obligations.[22] GSI Commerce, a company specializing in creating, developing and running online shopping sites for brick and mortar businesses, acquired by eBay for $2.4 billion.[23]

2014: Overstock.com processes over $1 million in Bitcoin sales.[24] India's e-commerce industry is estimated to have grown more than 30% from 2012, to $12.6 billion in 2013.[25] US eCommerce and online retail sales projected to reach $294 billion, an increase of 12 percent over 2013 and 9% of all retail sales.[26] Alibaba Group has the largest initial public offering ever, worth $25 billion.

Business applications

An example of an automated online assistant on a merchandising website.

Some common applications related to electronic commerce are:

Document automation in supply chain and logistics

Domestic and international payment systems

Enterprise content management

Group buying

Print on demand

Automated online assistant

Newsgroups

Online shopping and order tracking

Online banking

Online office suites

Shopping cart software

Teleconferencing

Electronic tickets

Social networking

Instant messaging

Pretail

Digital Wallet

Governmental regulation

In the United States, some electronic commerce activities are regulated by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive.[27] Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information.[28] As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC.

The Ryan Haight Online Pharmacy Consumer Protection Act of 2008, which came into law in 2008, amends the Controlled Substances Act to address online pharmacies.[29]

Conflict of laws in cyberspace is a major hurdle for harmonisation of the legal framework for e-commerce around the world. In order to give uniformity to e-commerce law around the world, many countries adopted the UNCITRAL Model Law on Electronic Commerce (1996).[30]

Internationally there is the International Consumer Protection and Enforcement Network (ICPEN), which was formed in 1991 from an informal network of government customer fair trade organisations. The purpose was stated as being to find ways of co-operating on tackling consumer problems connected with cross-border transactions in both goods and services, and to help ensure exchanges of information among the participants for mutual benefit and understanding. From this came Econsumer.gov, an ICPEN initiative since April 2001. It is a portal to report complaints about online and related transactions with foreign companies.

There is also the Asia-Pacific Economic Cooperation (APEC), established in 1989 with the vision of achieving stability, security and prosperity for the region through free and open trade and investment. APEC has an Electronic Commerce Steering Group and also works on common privacy regulations throughout the APEC region.

In Australia, trade is covered under the Australian Treasury Guidelines for electronic commerce,[31] and the Australian Competition and Consumer Commission[32] regulates and offers advice on how to deal with businesses online,[33][34] including specific advice on what happens if things go wrong.[35]

In the United Kingdom, the Financial Services Authority (FSA)[36] was formerly the regulating authority for most aspects of the EU's Payment Services Directive (PSD), until its replacement in 2013 by the Prudential Regulation Authority and the Financial Conduct Authority.[37] The UK implemented the PSD through the Payment Services Regulations 2009 (PSRs), which came into effect on 1 November 2009. The PSRs affect firms providing payment services and their customers. These firms include banks, non-bank credit card issuers and non-bank merchant acquirers, e-money issuers, etc. The PSRs created a new class of regulated firms known as payment institutions (PIs), who are subject to prudential requirements. Article 87 of the PSD requires the European Commission to report on the implementation and impact of the PSD by 1 November 2012.[38]

In India, the Information Technology Act 2000 governs the basic applicability of e-commerce.

In China, the Telecommunications Regulations of the People's Republic of China (promulgated on 25 September 2000) designated the Ministry of Industry and Information Technology (MIIT) as the government department regulating all telecommunications-related activities, including electronic commerce.[39] On the same day, the Administrative Measures on Internet Information Services were released, the first administrative regulation to address profit-generating activities conducted through the Internet, laying the foundation for future regulations governing e-commerce in China.[40] On 28 August 2004, the eleventh session of the tenth NPC Standing Committee adopted the Electronic Signature Law, which regulates data messages, electronic signature authentication and legal liability issues. It is considered the first law in China's e-commerce legislation. It was a milestone in the course of improving China's electronic commerce legislation, and also marks the entry of China's electronic commerce legislation into a stage of rapid development.[41]

Forms

Contemporary electronic commerce involves everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services to facilitate other types of electronic commerce.

On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce.

Aside from traditional e-Commerce, the terms m-Commerce (mobile commerce) and, around 2013, t-Commerce[42] have also been used.

Global trends

In 2010, the United Kingdom had the biggest e-commerce market in the world when measured by the amount spent per capita.[43] The Czech Republic is the European country where e-commerce delivers the biggest contribution to enterprises' total revenue: almost a quarter (24%) of the country's total turnover is generated via the online channel.[44]

Among emerging economies, China's e-commerce presence continues to expand every year. With 384 million internet users, China's online shopping sales rose to $36.6 billion in 2009, and one of the reasons behind the huge growth has been the improved trust level for shoppers. Chinese retailers have been able to help consumers feel more comfortable shopping online.[45] China's cross-border e-commerce is also growing rapidly. E-commerce transactions between China and other countries increased 32% to 2.3 trillion yuan ($375.8 billion) in 2012 and accounted for 9.6% of China's total international trade.[46] In 2013, Alibaba had an e-commerce market share of 80% in China.[47]

Other BRIC countries are witnessing the accelerated growth of e-commerce as well. Brazil's e-commerce is growing quickly, with retail e-commerce sales expected to grow at a healthy double-digit pace through 2014. By 2016, eMarketer expects retail e-commerce sales in Brazil to reach $17.3 billion.[48] India has an internet user base of about 243.2 million as of January 2014. Despite having the third-largest user base in the world, India's Internet penetration is low compared to markets like the United States, the United Kingdom or France, but is growing at a much faster rate, adding around 6 million new entrants every month. The industry consensus is that growth is at an inflection point. In India, cash on delivery is the most preferred payment method, accounting for 75% of e-retail activity.

E-Commerce has become an important tool for small and large businesses worldwide, not only to sell to customers, but also to engage them.[49][50]

In 2012, ecommerce sales topped $1 trillion for the first time in history.[51]

Mobile devices are playing an increasing role in the mix of e-commerce. Some estimates show that purchases made on mobile devices will make up 25% of the market by 2017.[52] According to the Cisco Visual Networking Index,[53] in 2014 the number of mobile devices will outnumber the world's population.

Over the past 10 years, e-commerce has been in a period of rapid development. Cross-border e-commerce applies Internet thinking to traditional import and export trade. It makes international trade more convenient and open, enabling cooperation between countries around the world, incorporating both developed and developing countries. In the short term, developing countries may be limited by their IT infrastructure, but in the long term they can overcome that barrier by developing their IT facilities and continuing to close the gap with developed countries.[54] At the moment, developing countries like China and India are developing e-commerce very rapidly; China's Alibaba, for example, raised the highest financing capital (£15 billion) ever for an e-commerce company. In addition, China is becoming the biggest e-commerce provider in the world.[55]

The number of Internet users in China amounts to 600 million, roughly double the total number of users in the USA.[56]

For traditional businesses, one research study stated that information technology and cross-border e-commerce present a good opportunity for the rapid development and growth of enterprises. Many companies have invested enormously in mobile applications. The DeLone and McLean model states that three perspectives contribute to a successful e-business: information system quality, service quality and user satisfaction.[57] With no limits of time and space, there are more opportunities to reach out to customers around the world and to cut out unnecessary intermediate links, thereby reducing costs; businesses can also benefit from one-on-one analysis of large customer data sets to achieve a high degree of personalized customization and fully enhance the core competitiveness of their products.[58]

Impact on markets and retailers

Economists have theorized that e-commerce ought to lead to intensified price competition, as it increases consumers' ability to gather information about products and prices. Research by four economists at the University of Chicago has found that the growth of online shopping has also affected industry structure in two areas that have seen significant growth in e-commerce, bookshops and travel agencies. Generally, larger firms are able to use economies of scale and offer lower prices. The lone exception to this pattern has been the very smallest category of bookseller, shops with between one and four employees, which appear to have withstood the trend.[59] Depending on the category, e-commerce may shift the switching costs (procedural, relational, and financial) experienced by customers.[60]

Individuals and businesses involved in e-commerce, whether buyers or sellers, rely on Internet-based technology to accomplish their transactions. E-commerce is recognized for its ability to allow businesses to communicate and to form transactions anytime and anyplace. Whether an individual is in the US or overseas, business can be conducted over the Internet. The power of e-commerce makes geophysical barriers disappear, so that all consumers and businesses on Earth are potential customers and suppliers; thus, switching barriers and switching costs may shift.[60] eBay is a good example of an e-commerce business: individuals and businesses can post their items and sell them around the globe.[61]

In e-commerce activities, the supply chain and logistics are the two most crucial factors to consider. Cross-border logistics typically takes a few weeks per round trip, and this low efficiency of the supply chain service greatly reduces customer satisfaction.[62] Some researchers have stated that combining e-commerce competence with IT infrastructure can enhance a company's overall business value.[63] Other researchers have stated that e-commerce businesses need to consider establishing warehouse centers in foreign countries to create a highly efficient logistics system, which not only improves customer satisfaction but can also improve customer loyalty.[weasel words] A recently published comprehensive meta-analysis shows that e-service quality is determined by many different factors and influences customer satisfaction and repurchase intentions.[64]

Some researchers have investigated whether a company wanting to enhance international customers' satisfaction needs to adapt its website culturally to each particular country, rather than relying solely on its home-country model. However, this research found that one German company treated its international websites the same as its local model, for example in its UK and US online marketing.[65] A company can save money and make decisions quickly by using an identical strategy in different countries. However, an opportunity cost can arise: if the local strategy does not match a new market, the company may lose potential customers.[66]

Impact on supply chain management

For a long time, companies were troubled by the gap between the benefits that supply chain technologies promise and the solutions able to deliver those benefits. The emergence of e-commerce, however, has provided a more practical and effective way of delivering the benefits of the new supply chain technologies.[67]

E-commerce has the capability to integrate all inter-company and intra-company functions, meaning that the three flows of the supply chain (physical flow, financial flow and information flow) can also be affected by e-commerce. Its effect on physical flows has improved the movement of products and inventory levels for companies. For information flows, e-commerce has optimized companies' capacity for information processing beyond what they used to have, and for financial flows it allows companies to have more efficient payment and settlement solutions.[67]

In addition, e-commerce has a more sophisticated level of impact on supply chains. Firstly, the performance gap can be eliminated, since companies can identify gaps between different levels of the supply chain through electronic solutions. Secondly, as a result of the emergence of e-commerce, new capabilities such as implementing ERP systems have helped companies to manage operations with customers and suppliers, although these new capabilities are still not fully exploited. Thirdly, technology companies keep investing in new e-commerce software solutions because they expect a return on investment. Fourthly, e-commerce helps to solve many issues that companies may find difficult to cope with, such as political barriers or cross-country changes. Finally, e-commerce provides companies a more efficient and effective way to collaborate with each other within the supply chain.[67]

The social impact of e-commerce

Along with the gradual emergence of e-commerce and its unique appeal, once-unheard-of terms such as virtual enterprise, virtual bank, network marketing, online shopping, online payment and online advertising have become familiar to people. This reflects, from another angle, the huge impact e-commerce has on the economy and society.[68] For instance, B2B is a rapidly growing business worldwide that lowers costs, improves economic efficiency and also brings growth in employment.[69]

To understand how e-commerce has affected society and the economy, this article considers the issues below:

1. E-commerce has changed the relative importance of time; but as one of the pillar indicators of a country's economic state, the importance of time should not be ignored.

2. E-commerce offers consumers and enterprises the various information they need, making information fully transparent. This will force enterprises to stop relying on spatial advantages or advertising to raise their competitive edge.[70] Moreover, in theory, perfect competition between consumer sovereignty and industry will maximize social welfare.[71]

3. In past economic activity, large enterprises frequently held an advantage in information resources, often at the expense of consumers. Nowadays, transparent, real-time information protects the rights of consumers, because they can use the Internet to pick out the portfolio that benefits them most. Competition between enterprises is much more visible than before; consequently, social welfare is improved by the development of e-commerce.

4. The new economy led by e-commerce also changes the humanistic spirit, above all employee loyalty.[72] In a competitive market, employees' level of professionalism becomes crucial for an enterprise in its niche market. Enterprises must pay attention to building their inner culture and a set of interactive mechanisms; this is the prime problem for them. Furthermore, although e-commerce decreases information costs and transaction costs, its development also makes people overly dependent on computers. Hence, emphasizing a more humanistic attitude to work is another project for enterprises to develop. Life is the root of all, and high technology is merely an assistive tool to support our quality of life.

E-commerce is not so much a new industry as a new economic model. Most people agree that e-commerce will indeed be important and significant for the economy and society in the future, even if there was a somewhat clueless feeling about it at the beginning; this itself shows that e-commerce is a kind of intangible revolution.[73] Generally speaking, as a type of business process, e-commerce is leading an unprecedented revolution in the world, and the influence of this model far exceeds commercial affairs themselves.[74] Beyond the areas mentioned above, in law, education, culture and policy, the impact of e-commerce will continue to rise. E-commerce is truly taking human beings into the information society.

Distribution channels

E-commerce has grown in importance as companies have adopted pure-click and brick-and-click channel systems. Several channel systems adopted by companies can be distinguished:

Pure-click or pure-play companies are those that have launched a website without any previous existence as a firm.

Bricks-and-clicks companies are those existing companies that have added an online site for e-commerce.

Click-to-brick companies are online retailers that later open physical locations to supplement their online efforts.[75]

Examples of new e-commerce systems

According to the research company eMarketer, "by 2017, 65.8 per cent of Britons will use smartphones" (cited by Williams, 2014).

Bringing the online experience into the real world also allows the development of the economy and the interaction between stores and customers. A great example of this new e-commerce system is what the Burberry store in London did in 2012. They refurbished the entire store with numerous big screens and photo studios, and also provided a stage for live acts. Moreover, on the digital screens across the store, images from fashion shows and advertising campaigns are displayed (William, 2014). In this way, the purchasing experience becomes more vivid and entertaining, while the online and offline components work together.

Another example is the Kiddicare smartphone app, with which consumers can compare prices. The app allows people to identify the location of sale products and to check whether the item they are looking for is in stock, or whether it can be ordered online without going to the 'real' store (William, 2014). In the United States, the Walmart app allows consumers to check product availability and prices both online and offline. Consumers can also add items to their shopping list by scanning them, see product details and information, and check purchasers' ratings and reviews.

Secret Code is the debut major album of artist Aya Kamiki, released on July 12, 2006. It comes in a CD only version. Secret Code debuted at number 5 on the Oricon Weekly Charts for Japan and sold 65,004 copies in total. "Secret Code" was used as the ending theme song for JAPAN COUNTDOWN.

Cryptology is an album by American jazz saxophonist David S. Ware recorded in 1994 and released on Homestead.

Contents

1 Background

2 Reception

3 Track listing

4 Personnel

5 References

Background

In fall 1992, Steven Joerg took over as Homestead Records' manager. While he continued the label's indie-rock trajectory, Joerg adopted a radically different vision, integrating free jazz on the same label where Sonic Youth, Dinosaur Jr and Big Black recorded seminal records.[1] Pianist Matthew Shipp, who had a duo record with bassist William Parker on a Texas punk-rock label which had a deal with Homestead's parent company, talked him into signing the David S. Ware Quartet.[2] According to Ware, Cryptology was "a meditation on Coltrane's example of using music as a vehicle for transcendence."[3]

Reception

Professional ratings

Review scores

Source Rating

AllMusic 4/5 stars[4]

The Penguin Guide to Jazz 3/4 stars[5]

In his review for AllMusic, Thom Jurek says about the album "It is raw, unwavering, and intense almost beyond measure."[4] The Penguin Guide to Jazz states that "the long-form, linked improvisations on Cryptology is an impressive first draft."[5]

The album garnered a Lead Review slot in Rolling Stone by David Fricke, who says about the title piece "It's a sharp lesson for anyone who thinks free jazz is just a euphemism for no discipline".[6]

The Wire placed the album in their "50 Records Of The Year 1995" list.[7]

A designated verifier signature is a signature scheme in which signatures can only be verified by a single designated verifier chosen by the signer. Designated verifier signatures were first proposed in 1996 by Markus Jakobsson, Kazue Sako, and Russell Impagliazzo.[1] Proposed as a way to combine authentication and off-the-record messages, designated verifier signatures allow authenticated, private conversations to take place.

Unlike in an undeniable signature scheme, the verification protocol is non-interactive: the signer chooses the designated verifier (or the set of designated verifiers) in advance and does not take part in the verification process.
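The designated-verifier property can be illustrated with a minimal sketch (this is not the Jakobsson–Sako–Impagliazzo construction; it is a toy instantiation using a Diffie–Hellman shared key and a MAC, with a deliberately insecure toy group). Only the designated verifier can check the signature, and because the verifier could have produced an identical signature, the scheme is deniable to third parties:

```python
import hashlib
import hmac
import secrets

# Toy group parameters (NOT secure; a real scheme would use a
# standardized elliptic curve or a large safe-prime group).
P = (1 << 127) - 1  # the Mersenne prime 2^127 - 1
G = 3

def keygen():
    """Generate a Diffie-Hellman key pair (secret, public)."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def shared_key(my_sk, their_pk):
    """Both parties derive the same symmetric key: g^(a*b) mod P."""
    s = pow(their_pk, my_sk, P)
    return hashlib.sha256(s.to_bytes(16, "big")).digest()

def dv_sign(signer_sk, verifier_pk, message: bytes) -> bytes:
    # The "signature" is a MAC under the signer/verifier shared key,
    # so it convinces only the holder of the verifier's secret key.
    return hmac.new(shared_key(signer_sk, verifier_pk),
                    message, hashlib.sha256).digest()

def dv_verify(verifier_sk, signer_pk, message: bytes, sig: bytes) -> bool:
    expected = hmac.new(shared_key(verifier_sk, signer_pk),
                        message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)
```

Note the off-the-record property in this sketch: `dv_sign(verifier_sk, signer_pk, m)` produces exactly the same bytes, so the verifier cannot convince anyone else that the signer, rather than the verifier, created the signature.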

This article is about the study of topics such as quantity and structure. For other uses, see Mathematics (disambiguation).
"Math" redirects here. For other uses, see Math (disambiguation).

Euclid (holding calipers), Greek mathematician, 3rd century BC, as imagined by Raphael in this detail from The School of Athens.[1]

Mathematics (from Greek μάθημα máthēma, “knowledge, study, learning”) is the study of topics such as quantity (numbers),[2] structure,[3] space,[2] and change.[4][5][6] There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics.[7][8]

Mathematicians seek out patterns[9][10] and use them to formulate new conjectures. Mathematicians resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity for as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry.

Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Since the pioneering work of Giuseppe Peano (1858–1932), David Hilbert (1862–1943), and others on axiomatic systems in the late 19th century, it has become customary to view mathematical research as establishing truth by rigorous deduction from appropriately chosen axioms and definitions. Mathematics developed at a relatively slow pace until the Renaissance, when mathematical innovations interacting with new scientific discoveries led to a rapid increase in the rate of mathematical discovery that has continued to the present day.[11]

Galileo Galilei (1564–1642) said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth."[12] Carl Friedrich Gauss (1777–1855) referred to mathematics as "the Queen of the Sciences".[13] Benjamin Peirce (1809–1880) called mathematics "the science that draws necessary conclusions".[14] David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise."[15] Albert Einstein (1879–1955) stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."[16] French mathematician Claire Voisin states "There is creative drive in mathematics, it's all about movement trying to express itself."[17]

Mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, finance and the social sciences. Applied mathematics, the branch of mathematics concerned with application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries, which has led to the development of entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered.[18]

Contents

1 History
  1.1 Evolution
  1.2 Etymology
2 Definitions of mathematics
  2.1 Mathematics as science
3 Inspiration, pure and applied mathematics, and aesthetics
4 Notation, language, and rigor
5 Fields of mathematics
  5.1 Foundations and philosophy
  5.2 Pure mathematics
  5.3 Applied mathematics
6 Mathematical awards
7 See also
8 Notes
9 References
10 Further reading
11 External links

History

Evolution

Main article: History of mathematics

The evolution of mathematics can be seen as an ever-increasing series of abstractions. The first abstraction, which is shared by many animals,[19] was probably that of numbers: the realization that a collection of two apples and a collection of two oranges (for example) have something in common, namely the quantity of their members.

Greek mathematician Pythagoras (c. 570 – c. 495 BC), commonly credited with discovering the Pythagorean theorem

Mayan numerals

As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may have also recognized how to count abstract quantities, like time – days, seasons, years.[20]

More complex mathematics did not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra and geometry for taxation and other financial calculations, for building and construction, and for astronomy.[21] The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns, and the recording of time.

In Babylonian mathematics elementary arithmetic (addition, subtraction, multiplication and division) first appears in the archaeological record. Numeracy pre-dated writing and numeral systems have been many and diverse, with the first known written numerals created by Egyptians in Middle Kingdom texts such as the Rhind Mathematical Papyrus.[citation needed]

Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right with Greek mathematics.[22]

Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."[23]

Etymology

The word mathematics comes from the Greek μάθημα (máthēma), which, in the ancient Greek language, means "that which is learnt",[24] "what one gets to know", hence also "study" and "science", and in modern Greek just "lesson". The word máthēma is derived from μανθάνω (manthano), while the modern Greek equivalent is μαθαίνω (mathaino), both of which mean "to learn". In Greece, the word for "mathematics" came to have the narrower and more technical meaning "mathematical study" even in Classical times.[25] Its adjective is μαθηματικός (mathēmatikós), meaning "related to learning" or "studious", which likewise further came to mean "mathematical". In particular, μαθηματικὴ τέχνη (mathēmatikḗ tékhnē), Latin: ars mathematica, meant "the mathematical art".

In Latin, and in English until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This has resulted in several mistranslations: a particularly notorious one is Saint Augustine's warning that Christians should beware of mathematici, meaning astrologers, which is sometimes mistranslated as a condemnation of mathematicians.[26]

The apparent plural form in English, like the French plural form les mathématiques (and the less commonly used singular derivative la mathématique), goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural τα μαθηματικά (ta mathēmatiká), used by Aristotle (384–322 BC), and meaning roughly "all things mathematical"; although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, which were inherited from the Greek.[27] In English, the noun mathematics takes singular verb forms.

It is often shortened to maths or, in English-speaking North America, math.[28]

Definitions of mathematics

Main article: Definitions of mathematics

Leonardo Fibonacci, the Italian mathematician who introduced the Hindu–Arabic numeral system to the Western world

Aristotle defined mathematics as "the science of quantity", and this definition prevailed until the 18th century.[29] Starting in the 19th century, when the study of mathematics increased in rigor and began to address abstract topics such as group theory and projective geometry, which have no clear-cut relation to quantity and measurement, mathematicians and philosophers began to propose a variety of new definitions.[30] Some of these definitions emphasize the deductive character of much of mathematics, some emphasize its abstractness, some emphasize certain topics within mathematics. Today, no consensus on the definition of mathematics prevails, even among professionals.[7] There is not even consensus on whether mathematics is an art or a science.[8] A great many professional mathematicians take no interest in a definition of mathematics, or consider it undefinable.[7] Some just say, "Mathematics is what mathematicians do."[7]

Three leading types of definition of mathematics are called logicist, intuitionist, and formalist, each reflecting a different philosophical school of thought.[31] All have severe problems, none has widespread acceptance, and no reconciliation seems possible.[31]

An early definition of mathematics in terms of logic was Benjamin Peirce's "the science that draws necessary conclusions" (1870).[32] In the Principia Mathematica, Bertrand Russell and Alfred North Whitehead advanced the philosophical program known as logicism, and attempted to prove that all mathematical concepts, statements, and principles can be defined and proven entirely in terms of symbolic logic. A logicist definition of mathematics is Russell's "All Mathematics is Symbolic Logic" (1903).[33]

Intuitionist definitions, developing from the philosophy of mathematician L.E.J. Brouwer, identify mathematics with certain mental phenomena. An example of an intuitionist definition is "Mathematics is the mental activity which consists in carrying out constructs one after the other."[31] A peculiarity of intuitionism is that it rejects some mathematical ideas considered valid according to other definitions. In particular, while other philosophies of mathematics allow objects that can be proven to exist even though they cannot be constructed, intuitionism allows only mathematical objects that one can actually construct.

Formalist definitions identify mathematics with its symbols and the rules for operating on them. Haskell Curry defined mathematics simply as "the science of formal systems".[34] A formal system is a set of symbols, or tokens, and some rules telling how the tokens may be combined into formulas. In formal systems, the word axiom has a special meaning, different from the ordinary meaning of "a self-evident truth". In formal systems, an axiom is a combination of tokens that is included in a given formal system without needing to be derived using the rules of the system.
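The formalist picture of tokens plus rules can be made concrete with a toy formal system, invented here purely for illustration (it is not a system from the literature): a single axiom and two rewriting rules over the alphabet {A, B}. Every string reachable from the axiom by the rules is a "theorem" of the system.

```python
from collections import deque

# A toy formal system (invented for illustration):
#   alphabet: {"A", "B"}
#   axiom:    "A"
#   rule 1:   x        ->  x + "B"         (append a B)
#   rule 2:   "A" + x  ->  "A" + x + x     (double the part after the leading A)
AXIOM = "A"

def rules(s):
    """Yield every string reachable from s in one rule application."""
    yield s + "B"                 # rule 1
    if s.startswith("A"):
        yield "A" + s[1:] * 2     # rule 2

def derivable(max_len=6):
    """Breadth-first closure: all theorems derivable from the axiom,
    restricted to strings of length at most max_len."""
    seen = {AXIOM}
    queue = deque([AXIOM])
    while queue:
        s = queue.popleft()
        for t in rules(s):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen
```

The point of the sketch is the formalist one: "theoremhood" here is a purely combinatorial property of token strings, decided by the rules alone, with no appeal to meaning.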

Mathematics as science

Carl Friedrich Gauss, known as the prince of mathematicians

Gauss referred to mathematics as "the Queen of the Sciences".[13] In the original Latin Regina Scientiarum, as well as in German Königin der Wissenschaften, the word corresponding to science means a "field of knowledge", and this was the original meaning of "science" in English, also; mathematics is in this sense a field of knowledge. The specialization restricting the meaning of "science" to natural science follows the rise of Baconian science, which contrasted "natural science" to scholasticism, the Aristotelean method of inquiring from first principles. The role of empirical experimentation and observation is negligible in mathematics, compared to natural sciences such as psychology, biology, or physics. Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."[16] More recently, Marcus du Sautoy has called mathematics "the Queen of Science ... the main driving force behind scientific discovery".[35]

Many philosophers believe that mathematics is not experimentally falsifiable, and thus not a science according to the definition of Karl Popper.[36] However, in the 1930s Gödel's incompleteness theorems convinced many mathematicians[who?] that mathematics cannot be reduced to logic alone, and Karl Popper concluded that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently."[37] Other thinkers, notably Imre Lakatos, have applied a version of falsificationism to mathematics itself.

An alternative view is that certain scientific fields (such as theoretical physics) are mathematics with axioms that are intended to correspond to reality. The theoretical physicist J.M. Ziman proposed that science is public knowledge, and thus includes mathematics.[38] Mathematics shares much in common with many fields in the physical sciences, notably the exploration of the logical consequences of assumptions. Intuition and experimentation also play a role in the formulation of conjectures in both mathematics and the (other) sciences. Experimental mathematics continues to grow in importance within mathematics, and computation and simulation are playing an increasing role in both the sciences and mathematics.

The opinions of mathematicians on this matter are varied. Many mathematicians[who?] feel that to call their area a science is to downplay the importance of its aesthetic side, and its history in the traditional seven liberal arts; others[who?] feel that to ignore its connection to the sciences is to turn a blind eye to the fact that the interface between mathematics and its applications in science and engineering has driven much development in mathematics. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematics is created (as in art) or discovered (as in science). It is common to see universities divided into sections that include a division of Science and Mathematics, indicating that the fields are seen as being allied but that they do not coincide. In practice, mathematicians are typically grouped with scientists at the gross level but separated at finer levels. This is one of many issues considered in the philosophy of mathematics.[citation needed]

Inspiration, pure and applied mathematics, and aesthetics

Main article: Mathematical beauty

Isaac Newton (left) and Gottfried Wilhelm Leibniz (right), developers of infinitesimal calculus

Mathematics arises from many different kinds of problems. At first these were found in commerce, land measurement, architecture and later astronomy; today, all sciences suggest problems studied by mathematicians, and many problems arise within mathematics itself. For example, the physicist Richard Feynman invented the path integral formulation of quantum mechanics using a combination of mathematical reasoning and physical insight, and today's string theory, a still-developing scientific theory which attempts to unify the four fundamental forces of nature, continues to inspire new mathematics.[39]

Some mathematics is relevant only in the area that inspired it, and is applied to solve further problems in that area. But often mathematics inspired by one area proves useful in many areas, and joins the general stock of mathematical concepts. A distinction is often made between pure mathematics and applied mathematics. However, pure mathematics topics often turn out to have applications, e.g. number theory in cryptography. This remarkable fact, that even the "purest" mathematics often turns out to have practical applications, is what Eugene Wigner has called "the unreasonable effectiveness of mathematics".[40] As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: there are now hundreds of specialized areas in mathematics and the latest Mathematics Subject Classification runs to 46 pages.[41] Several areas of applied mathematics have merged with related traditions outside of mathematics and become disciplines in their own right, including statistics, operations research, and computer science.
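The "number theory in cryptography" remark can be made concrete with textbook RSA, a toy sketch only: the key is built from two small primes, whereas real deployments use moduli of roughly 2048 bits together with padding schemes.

```python
# Toy RSA keypair (illustrative only; never use such small primes,
# and real systems add padding such as OAEP).
p, q = 61, 53            # two small primes
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient of n: 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

def encrypt(m: int) -> int:
    """Encrypt an integer message m < n with the public key (n, e)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Decrypt with the private exponent d."""
    return pow(c, d, n)
```

For example, `encrypt(65)` gives `2790`, and `decrypt(2790)` recovers `65`. Security rests on the number-theoretic difficulty of factoring `n` back into `p` and `q`.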

For those who are mathematically inclined, there is often a definite aesthetic aspect to much of mathematics. Many mathematicians talk about the elegance of mathematics, its intrinsic aesthetics and inner beauty. Simplicity and generality are valued. There is beauty in a simple and elegant proof, such as Euclid's proof that there are infinitely many prime numbers, and in an elegant numerical method that speeds calculation, such as the fast Fourier transform. G.H. Hardy in A Mathematician's Apology expressed the belief that these aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He identified criteria such as significance, unexpectedness, inevitability, and economy as factors that contribute to a mathematical aesthetic.[42] Mathematicians often strive to find proofs that are particularly elegant, proofs from "The Book" of God according to Paul Erdős.[43][44] The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.
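Euclid's proof mentioned above is constructive enough to run as code: given any finite list of primes, the number one more than their product leaves remainder 1 on division by each of them, so its smallest prime factor cannot be in the list. A minimal sketch:

```python
def smallest_prime_factor(n: int) -> int:
    """Trial division: the smallest prime factor of n >= 2."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime(primes):
    """Euclid's argument: return a prime not in the given list of primes."""
    product = 1
    for p in primes:
        product *= p
    # product + 1 is congruent to 1 mod every p in the list,
    # so its smallest prime factor is a prime outside the list.
    return smallest_prime_factor(product + 1)
```

For example, `new_prime([2, 3, 5])` returns 31 (since 2·3·5 + 1 = 31 is itself prime), while `new_prime([2, 3, 5, 7, 11, 13])` returns 59, because 30031 = 59 · 509: the construction does not claim product + 1 is prime, only that its prime factors are new.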

Notation, language, and rigor

Main article: Mathematical notation

Leonhard Euler, who created and popularized much of the mathematical notation used today

Most of the mathematical notation in use today was not invented until the 16th century.[45] Before that, mathematics was written out in words, a painstaking process that limited mathematical discovery.[46] Euler (1707–1783) was responsible for many of the notations in use today. Modern notation makes mathematics much easier for the professional, but beginners often find it daunting. It is extremely compressed: a few symbols contain a great deal of information. Like musical notation, modern mathematical notation has a strict syntax (which to a limited extent varies from author to author and from discipline to discipline) and encodes information that would be difficult to write in any other way.

Mathematical language can be difficult to understand for beginners. Words such as or and only have more precise meaningsthan in everyday speech. Moreover, words such as open and field have been given specialized mathematical meanings. Technical terms such as homeomorphism and integrable have precise meanings in mathematics. Additionally, shorthand phrases such as iff for "if and only if" belong to mathematical jargon. There is a reason for special notation and technical vocabulary: mathematics requires more precision than everyday speech. Mathematicians refer to this precision of language and logic as "rigor".

Mathematical proof is fundamentally a matter of rigor. Mathematicians want their theorems to follow from axioms by means of systematic reasoning. This is to avoid mistaken "theorems", based on fallible intuitions, of which many instances have occurred in the history of the subject.[47] The level of rigor expected in mathematics has varied over time: the Greeks expected detailed arguments, but at the time of Isaac Newton the methods employed were less rigorous. Problems inherent in the definitions used by Newton would lead to a resurgence of careful analysis and formal proof in the 19th century. Misunderstanding the rigor is a cause for some of the common misconceptions of mathematics. Today, mathematicians continue to argue among themselves about computer-assisted proofs. Since large computations are hard to verify, such proofs may not be sufficiently rigorous.[48]

Axioms in traditional thought were "self-evident truths", but that conception is problematic.[49] At a formal level, an axiom is just a string of symbols, which has an intrinsic meaning only in the context of all derivable formulas of an axiomatic system. It was the goal of Hilbert's program to put all of mathematics on a firm axiomatic basis, but according to Gödel's incompleteness theorem every (sufficiently powerful) axiomatic system has undecidable formulas; and so a final axiomatization of mathematics is impossible. Nonetheless mathematics is often imagined to be (as far as its formal content) nothing but set theory in some axiomatization, in the sense that every mathematical statement or proof could be cast into formulas within set theory.[50]

Fields of mathematics

See also: Areas of mathematics and Glossary of areas of mathematics

An abacus, a simple calculating tool used since ancient times

Mathematics can, broadly speaking, be subdivided into the study of quantity, structure, space, and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these main concerns, there are also subdivisions dedicated to exploring links from the heart of mathematics to other fields: to logic, to set theory (foundations), to the empirical mathematics of the various sciences (applied mathematics), and more recently to the rigorous study of uncertainty. While some areas might seem unrelated, the Langlands program has found connections between areas previously thought unconnected, such as Galois groups, Riemann surfaces and number theory.

Foundations and philosophy

In order to clarify the foundations of mathematics, the fields of mathematical logic and set theory were developed. Mathematical logic includes the mathematical study of logic and the applications of formal logic to other areas of mathematics; set theory is the branch of mathematics that studies sets or collections of objects. Category theory, which deals in an abstract way with mathematical structures and relationships between them, is still in development. The phrase "crisis of foundations" describes the search for a rigorous foundation for mathematics that took place from approximately 1900 to 1930.[51] Some disagreement about the foundations of mathematics continues to the present day. The crisis of foundations was stimulated by a number of controversies at the time, including the controversy over Cantor's set theory and the Brouwer–Hilbert controversy.

Mathematical logic is concerned with setting mathematics within a rigorous axiomatic framework, and studying the implications of such a framework. As such, it is home to Gödel's incompleteness theorems which (informally) imply that any effective formal system that contains basic arithmetic, if sound (meaning that all theorems that can be proven are true), is necessarily incomplete (meaning that there are true theorems which cannot be proved in that system). Whatever finite collection of number-theoretical axioms is taken as a foundation, Gödel showed how to construct a formal statement that is a true number-theoretical fact, but which does not follow from those axioms. Therefore, no formal system is a complete axiomatization of full number theory. Modern logic is divided into recursion theory, model theory, and proof theory, and is closely linked to theoretical computer science,[citation needed] as well as to category theory.


Theoretical computer science includes computability theory, computational complexity theory, and information theory. Computability theory examines the limitations of various theoretical models of the computer, including the most well-known model, the Turing machine. Complexity theory is the study of tractability by computer; some problems, although theoretically solvable by computer, are so expensive in terms of time or space that solving them is likely to remain practically unfeasible, even with the rapid advancement of computer hardware. A famous problem is the "P = NP?" problem, one of the Millennium Prize Problems.[52] Finally, information theory is concerned with the amount of data that can be stored on a given medium, and hence deals with concepts such as compression and entropy.
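Entropy, the quantity at the center of information theory, is easy to compute for a concrete source. A minimal sketch (the function name is illustrative, not from any library):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per symbol: a lower bound on the average
    number of bits any lossless code needs per symbol of this source."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant string like b"aaaa" has entropy 0 (perfectly compressible), an even mix of two symbols has entropy 1 bit per symbol, and uniformly distributed bytes reach 8 bits per symbol, leaving nothing for a compressor to exploit.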

Mathematical logic · Set theory · Category theory · Theory of computation

Pure mathematics

Quantity

The study of quantity starts with numbers, first the familiar natural numbers and integers ("whole numbers") and arithmetical operations on them, which are characterized in arithmetic. The deeper properties of integers are studied in number theory, from which come such popular results as Fermat's Last Theorem. The twin prime conjecture and Goldbach's conjecture are two unsolved problems in number theory.
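Both conjectures are easy to test by brute force for small cases, which is part of their popular appeal. The sketch below checks Goldbach instances; it verifies examples only and of course proves nothing about the conjecture itself (the function names are my own):

```python
# Goldbach's conjecture: every even integer n >= 4 is a sum of two primes.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n: int):
    """Return a pair of primes summing to even n, or None if none is found."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None
```

For instance, `goldbach_pair(28)` finds (5, 23), and every even number up to any bound tried so far has such a pair.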

As the number system is further developed, the integers are recognized as a subset of the rational numbers ("fractions"). These, in turn, are contained within the real numbers, which are used to represent continuous quantities. Real numbers are generalized to complex numbers. These are the first steps of a hierarchy of numbers that goes on to include quaternions and octonions. Consideration of the natural numbers also leads to the transfinite numbers, which formalize the concept of "infinity". Another area of study is size, which leads to the cardinal numbers and then to another conception of infinity: the aleph numbers, which allow meaningful comparison of the size of infinitely large sets.

Natural numbers · Integers · Rational numbers · Real numbers · Complex numbers

Structure

Many mathematical objects, such as sets of numbers and functions, exhibit internal structure as a consequence of operations or relations that are defined on the set. Mathematics then studies properties of those sets that can be expressed in terms of that structure; for instance number theory studies properties of the set of integers that can be expressed in terms of arithmetic operations. Moreover, it frequently happens that different such structured sets (or structures) exhibit similar properties, which makes it possible, by a further step of abstraction, to state axioms for a class of structures, and then study at once the whole class of structures satisfying these axioms. Thus one can study groups, rings, fields and other abstract systems; together such studies (for structures defined by algebraic operations) constitute the domain of abstract algebra.
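For small finite structures, this axiomatic viewpoint can be made concrete: the group axioms can simply be checked exhaustively. A sketch (the helper name is my own, not standard library code):

```python
import itertools

def is_group(elements, op, identity) -> bool:
    """Brute-force check of the group axioms on a small finite set."""
    elems = list(elements)
    pairs = list(itertools.product(elems, elems))
    # Closure: op must map pairs back into the set.
    if any(op(a, b) not in elements for a, b in pairs):
        return False
    # Associativity: (a op b) op c == a op (b op c) for all triples.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for (a, b), c in itertools.product(pairs, elems)):
        return False
    # The identity must behave neutrally on both sides.
    if any(op(a, identity) != a or op(identity, a) != a for a in elems):
        return False
    # Every element must have an inverse.
    return all(any(op(a, b) == identity for b in elems) for a in elems)
```

Addition mod 6 on {0, …, 5} passes all four axioms, while multiplication mod 6 on the same set fails (0 has no inverse), which is why abstract algebra treats them as different kinds of structure.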

By its great generality, abstract algebra can often be applied to seemingly unrelated problems; for instance a number of ancient problems concerning compass and straightedge constructions were finally solved using Galois theory, which involves field theory and group theory. Another example of an algebraic theory is linear algebra, which is the general study of vector spaces, whose elements called vectors have both quantity and direction, and can be used to model (relations between) points in space. This is one example of the phenomenon that the originally unrelated areas of geometry and algebra have very strong interactions in modern mathematics. Combinatorics studies ways of enumerating the number of objects that fit a given structure.


Combinatorics · Number theory · Group theory · Graph theory · Order theory · Algebra

Space

The study of space originates with geometry – in particular, Euclidean geometry. Trigonometry is the branch of mathematics that deals with relationships between the sides and the angles of triangles and with the trigonometric functions; it combines space and numbers, and encompasses the well-known Pythagorean theorem. The modern study of space generalizes these ideas to include higher-dimensional geometry, non-Euclidean geometries (which play a central role in general relativity) and topology. Quantity and space both play a role in analytic geometry, differential geometry, and algebraic geometry. Convex and discrete geometry were developed to solve problems in number theory and functional analysis but now are pursued with an eye on applications in optimization and computer science. Within differential geometry are the concepts of fiber bundles and calculus on manifolds, in particular, vector and tensor calculus. Within algebraic geometry is the description of geometric objects as solution sets of polynomial equations, combining the concepts of quantity and space, and also the study of topological groups, which combine structure and space. Lie groups are used to study space, structure, and change. Topology in all its many ramifications may have been the greatest growth area in 20th-century mathematics; it includes point-set topology, set-theoretic topology, algebraic topology and differential topology. In particular, instances of modern-day topology are metrizability theory, axiomatic set theory, homotopy theory, and Morse theory. Topology also includes the now solved Poincaré conjecture, and the still unsolved areas of the Hodge conjecture. Other results in geometry and topology, including the four color theorem and Kepler conjecture, have been proved only with the help of computers.


Geometry · Trigonometry · Differential geometry · Topology · Fractal geometry · Measure theory

Change

Understanding and describing change is a common theme in the natural sciences, and calculus was developed as a powerful tool to investigate it. Functions arise here, as a central concept describing a changing quantity. The rigorous study of real numbers and functions of a real variable is known as real analysis, with complex analysis the equivalent field for the complex numbers. Functional analysis focuses attention on (typically infinite-dimensional) spaces of functions. One of many applications of functional analysis is quantum mechanics. Many problems lead naturally to relationships between a quantity and its rate of change, and these are studied as differential equations. Many phenomena in nature can be described by dynamical systems; chaos theory makes precise the ways in which many of these systems exhibit unpredictable yet still deterministic behavior.
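The logistic map is a standard one-line example of such a system: iterating x → r·x·(1 − x) at r = 4 is completely deterministic, yet two orbits that start almost identically become uncorrelated within a few dozen steps. A small sketch of that sensitive dependence:

```python
# Two orbits of the chaotic logistic map x -> r*x*(1-x) at r = 4,
# started only 1e-10 apart: the rule is deterministic, but the tiny
# initial difference is roughly doubled at every step.
def logistic_orbit(x0: float, r: float = 4.0, steps: int = 50) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)
```

After about thirty iterations the two orbits differ by an amount comparable to the whole interval [0, 1], which is why long-range prediction fails even though nothing random is involved.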

Calculus · Vector calculus · Differential equations · Dynamical systems · Chaos theory · Complex analysis

Applied mathematics

Applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry. Thus, "applied mathematics" is a mathematical science with specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems; as a profession focused on practical problems, applied mathematics focuses on the "formulation, study, and use of mathematical models" in science, engineering, and other areas of mathematical practice.

In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where mathematics is developed primarily for its own sake. Thus, the activity of applied mathematics is vitally connected with research in pure mathematics.

Statistics and other decision sciences

Applied mathematics has significant overlap with the discipline of statistics, whose theory is formulated mathematically, especially with probability theory. Statisticians (working as part of a research project) "create data that makes sense" with random sampling and with randomized experiments;[53] the design of a statistical sample or experiment specifies the analysis of the data (before the data becomes available). When reconsidering data from experiments and samples or when analyzing data from observational studies, statisticians "make sense of the data" using the art of modelling and the theory of inference – with model selection and estimation; the estimated models and consequential predictions should be tested on new data.[54]

Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints: For example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence.[55] Because of its use of optimization, the mathematical theory of statistics shares concerns with other decision sciences, such as operations research, control theory, and mathematical economics.[56]
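The survey-design example can be sketched as a one-line optimization. Under the usual normal approximation, the cheapest sample meeting a given margin of error is the smallest n with z·σ/√n ≤ margin; the function below is an illustrative sketch under that assumption, not a complete survey-design procedure:

```python
import math

def sample_size(sigma: float, margin: float, z: float = 1.96) -> int:
    """Smallest sample size n with z * sigma / sqrt(n) <= margin,
    i.e. the cheapest sample meeting the precision constraint
    (z = 1.96 corresponds to roughly 95% confidence)."""
    return math.ceil((z * sigma / margin) ** 2)
```

With a population standard deviation of 15 and a desired margin of 3 units at 95% confidence, 97 observations suffice; tightening the margin or raising the confidence level raises the cost quadratically.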


Computational mathematics

Computational mathematics proposes and studies methods for solving mathematical problems that are typically too large for human numerical capacity. Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis includes the study of approximation and discretization broadly with special concern for rounding errors. Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic matrix and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation.
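Rounding error is visible even in trivial computations: summing 0.1 ten times in binary floating point does not give exactly 1.0, because 0.1 has no finite binary representation. Compensated (Kahan) summation, a classic numerical-analysis technique, tracks the low-order bits that plain addition rounds away; a minimal sketch:

```python
def kahan_sum(xs) -> float:
    """Compensated summation (Kahan's algorithm): recover the low-order
    bits that each floating-point addition would otherwise discard."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - c            # subtract the error carried from last step
        t = total + y        # low-order bits of y may be lost here
        c = (t - total) - y  # recover exactly what was lost
        total = t
    return total
```

The naive `sum([0.1] * 10)` comes out slightly below 1.0, while the compensated version lands on 1.0 to within one unit in the last place; this is the kind of discrepancy that rounding-error analysis quantifies.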

Mathematical physics · Fluid dynamics · Numerical analysis · Optimization · Probability theory · Statistics · Cryptography · Mathematical finance · Game theory · Mathematical biology · Mathematical chemistry · Mathematical economics · Control theory

Mathematical awards

Arguably the most prestigious award in mathematics is the Fields Medal,[57][58] established in 1936 and now awarded every four years. The Fields Medal is often considered a mathematical equivalent to the Nobel Prize.

The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement, and another major international award, the Abel Prize, was introduced in 2003. The Chern Medal was introduced in 2010 to recognize lifetime achievement. These accolades are awarded in recognition of a particular body of work, which may be innovational, or provide a solution to an outstanding problem in an established field.

A famous list of 23 open problems, called "Hilbert's problems", was compiled in 1900 by German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least nine of the problems have now been solved. A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. A solution to each of these problems carries a $1 million reward, and only one (the Riemann hypothesis) is duplicated in Hilbert's problems.

Computer science is the scientific and practical approach to computation and its applications. It is the systematic study of the feasibility, structure, expression, and mechanization of the methodical procedures (or algorithms) that underlie the acquisition, representation, processing, storage, communication of, and access to information, whether such information is encoded as bits in a computer memory or transcribed in genes and protein structures in a biological cell.[1] An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems.[2]

Its subfields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory (which explores the fundamental properties of computational and intractable problems), are highly abstract, while fields such as computer graphics emphasize real-world visual applications. Still other fields focus on the challenges in implementing computation. For example, programming language theory considers various approaches to the description of computation, while the study of computer programming itself investigates various aspects of the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers and computations useful, usable, and universally accessible to humans.

Contents

1 History
  1.1 Contributions
2 Philosophy
  2.1 Name of the field
3 Areas of computer science
  3.1 Theoretical computer science
    3.1.1 Theory of computation
    3.1.2 Information and coding theory
    3.1.3 Algorithms and data structures
    3.1.4 Programming language theory
    3.1.5 Formal methods
  3.2 Applied computer science
    3.2.1 Artificial intelligence
    3.2.2 Computer architecture and engineering
    3.2.3 Computer performance analysis
    3.2.4 Computer graphics and visualization
    3.2.5 Computer security and cryptography
    3.2.6 Computational science
    3.2.7 Computer networks
    3.2.8 Concurrent, parallel and distributed systems
    3.2.9 Databases
    3.2.10 Software engineering
4 The great insights of computer science
5 Academia
6 Education
7 See also
8 Notes
9 References
10 Further reading
11 External links

History

Main article: History of computer science


Charles Babbage is credited with inventing the first mechanical computer.

Ada Lovelace is credited with writing the first algorithm intended for processing on a computer.

The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. The ancient Sanskrit treatise Shulba Sutras, or "Rules of the Chord", is a book of algorithms written in 800 BCE for constructing geometric objects like altars using a peg and chord, an early precursor of the modern field of computational geometry.

Blaise Pascal designed and constructed the first working mechanical calculator, Pascal's calculator, in 1642.[3] In 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the 'Stepped Reckoner'.[4] He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry[note 1] when he released his simplified arithmometer, which was the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his difference engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine.[5] He started developing this machine in 1834, and "in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom"[6] making it infinitely programmable.[note 2] In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first computer program.[7] Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business,[8] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".[9]
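Lovelace's note targeted the Bernoulli numbers. A modern sketch computing the same sequence, using the standard recurrence sum over k of C(n+1, k)·B_k = 0 rather than her exact formulation on the Analytical Engine, might look like:

```python
from fractions import Fraction
from math import comb

def bernoulli(m: int) -> list:
    """First m+1 Bernoulli numbers B_0..B_m (convention B_1 = -1/2),
    via the recurrence B_n = -(1/(n+1)) * sum_{k<n} C(n+1, k) * B_k."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B
```

For example, `bernoulli(4)` yields [1, -1/2, 1/6, 0, -1/30]; exact rational arithmetic (here via `fractions.Fraction`) matters, since the numbers quickly develop large numerators and denominators.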

During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.[10] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[11][12] The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.[13] Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.


Although many initially believed it was impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population.[14][15] It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704[16] and later the IBM 709[17] computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating ... if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again".[14] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.[15]

Time has seen significant improvements in the usability and effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals, to a near-ubiquitous user base. Initially, computers were quite costly, and some degree of human aid was needed for efficient use, in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage.

Contributions

The German military used the Enigma machine (shown here) during World War II for communications they wanted kept secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[18]

Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society; in fact, along with electronics, it is a founding science of the current epoch of human history called the Information Age and a driver of the Information Revolution, seen as the third major leap in human technological progress after the Industrial Revolution (1750–1850 CE) and the Agricultural Revolution (8000–5000 BCE).

These contributions include:

The start of the "digital revolution", which includes the current Information Age and the Internet.[19]

A formal definition of computation and computability, and proof that there are computationally unsolvable and intractable problems.[20]

The concept of a programming language, a tool for the precise expression of methodological information at various levels of abstraction.[21]

In cryptography, breaking the Enigma code was an important factor contributing to the Allied victory in World War II.[18]

Scientific computing enabled practical evaluation of processes and situations of great complexity, as well as experimentation entirely by software. It also enabled advanced study of the mind, and mapping of the human genome became possible with the Human Genome Project.[19] Distributed computing projects such as Folding@home explore protein folding.

Algorithmic trading has increased the efficiency and liquidity of financial markets by using artificial intelligence, machine learning, and other statistical and numerical techniques on a large scale.[22] High frequency algorithmic trading can also exacerbate volatility.[23]

Computer graphics and computer-generated imagery have become ubiquitous in modern entertainment, particularly in television, cinema, advertising, animation and video games. Even films that feature no explicit CGI are usually "filmed" now on digital cameras, or edited or postprocessed using a digital video editor.[citation needed]


Simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.[citation needed]

Artificial intelligence is becoming increasingly important as it gets more efficient and complex. There are many applications of AI, some of which can be seen at home, such as robotic vacuum cleaners. It is also present in video games and on the modern battlefield in drones, anti-missile systems, and squad support robots.

Philosophy

Main article: Philosophy of computer science

A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics.[24] Peter Denning's working group argued that they are theory, abstraction (modeling), and design.[25] Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).[26]

Name of the field

Although first proposed in 1956,[15] the term "computer science" appears in a 1959 article in Communications of the ACM,[27] in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921,[28] justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.[27] His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such programs, starting with Purdue in 1962.[29] Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed.[30] Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy,[31] to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a distinct field of data analysis, including statistics and databases.

Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM – turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[32] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[33] The term computics has also been suggested.[34] In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italy, The Netherlands), informática (Spain, Portugal), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh).[35]

A folkloric quotation, often attributed to (but almost certainly not first formulated by) Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes."[note 3] The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, biology, statistics, and logic.

Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[11] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.[15]

The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined.[36] David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[37]

The academic, political, and funding aspects of computer science tend to depend on whether a department formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.

Areas of computer science
Further information: Outline of computer science

As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[38][39] CSAB, formerly called Computing Sciences Accreditation Board – which is made up of representatives of the Association for Computing Machinery (ACM) and the IEEE Computer Society (IEEE-CS)[40] – identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and telecommunications, database systems, parallel computation, distributed computation, computer-human interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[38]

Theoretical computer science

Main article: Theoretical computer science

The broader field of theoretical computer science encompasses both the classical theory of computation and a wide range of other topics that focus on the more abstract, logical, and mathematical aspects of computing.

Theory of computation

Main article: Theory of computation

According to Peter J. Denning, the fundamental question underlying computer science is, "What can be (efficiently) automated?"[11] The study of the theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.

The famous "P=NP?" problem, one of the Millennium Prize Problems,[41] is an open problem in the theory of computation.

Subfields: automata theory, computability theory, computational complexity theory, cryptography, quantum computing theory.

Information and coding theory

Main articles: Information theory and Coding theory

Information theory is related to the quantification of information. This was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.[42] Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
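Shannon's central quantity, entropy, gives the average number of bits needed per symbol of a source. A minimal sketch in Python (the probability distributions below are invented for illustration):

```python
import math

def entropy(probs):
    # Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero p.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per toss; a biased coin carries less,
# which is what makes compressing its output worthwhile.
print(entropy([0.5, 0.5]))               # 1.0
print(round(entropy([0.9, 0.1]), 3))     # ≈ 0.469
```

Lower entropy means more redundancy, and hence more room for the data compression that coding theory studies.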

Algorithms and data structures

Algorithms and data structures is the study of commonly used computational methods and their computational efficiency.
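The notion of computational efficiency can be made concrete by comparing two ways of finding an element: scanning every item (linear, O(n)) versus halving a sorted range each step (binary, O(log n)). A small illustrative sketch:

```python
def linear_search(items, target):
    # O(n): may inspect every element before finding (or missing) the target.
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(items, target):
    # O(log n): requires sorted input; halves the search range each step.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 1000, 2))  # sorted even numbers
print(linear_search(data, 498), binary_search(data, 498))  # 249 249
```

Both return the same answer; the difference that algorithm analysis measures is how the number of steps grows with the size of the input.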

Subfields: analysis of algorithms, algorithms, data structures, combinatorial optimization, computational geometry.

Programming language theory

Main article: Programming language theory

Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering and linguistics. It is an active research area, with numerous dedicated academic journals.

Subfields: type theory, compiler design, programming languages.

Formal methods

Main article: Formal methods

Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types, to problems in software and hardware specification and verification.
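A very lightweight taste of the idea (runtime assertions checked exhaustively, not a true formal proof): the sketch below states a loop invariant and a postcondition for a small function and verifies them over a small input domain. The function and the bound of 100 are invented for illustration.

```python
def sum_upto(n):
    """Returns 0 + 1 + ... + (n - 1); specified by the closed form n*(n-1)/2."""
    total, i = 0, 0
    while i < n:
        # Loop invariant: total always equals the sum of 0..i-1.
        assert total == i * (i - 1) // 2, "loop invariant violated"
        total += i
        i += 1
    # Postcondition: the result matches the specification.
    assert total == n * (n - 1) // 2, "postcondition violated"
    return total

# Exhaustively check the specification over a small domain.
for n in range(100):
    sum_upto(n)
print("verified for n < 100")
```

Real formal methods tools (model checkers, proof assistants) establish such properties for all inputs rather than a finite sample, which is what makes them suitable for life-critical systems.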

Applied computer science

Applied computer science aims at identifying certain computer science concepts that can be used directly in solving real-world problems.

Artificial intelligence

Main article: Artificial intelligence

This branch of computer science aims to or is required to synthesise goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence (AI) research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting-point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the "Turing test" is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.

Subfields: machine learning, computer vision, image processing, pattern recognition, data mining, evolutionary computation, knowledge representation, natural language processing, robotics.

Computer architecture and engineering

Main articles: Computer architecture and Computer engineering

Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory.[43] The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.

Subfields: digital logic, microarchitecture, multiprocessing, ubiquitous computing, systems architecture, operating systems.

Computer performance analysis

Main article: Computer performance

Computer performance analysis is the study of work flowing through computers with the general goals of improving throughput, controlling response time, using resources efficiently, eliminating bottlenecks, and predicting performance under anticipated peak loads.[44]
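The quantities named above are linked by Little's law, L = λW: the average number of requests in the system equals throughput times average response time. A toy calculation (the rates below are made up, and the sketch assumes arrivals equal throughput in steady state):

```python
# Little's law: avg requests in system L = arrival rate λ × avg time in system W.
arrival_rate = 50.0    # requests per second (assumed equal to throughput)
response_time = 0.2    # average seconds per request

in_flight = arrival_rate * response_time
print(in_flight)  # 10.0 requests in the system on average
```

Relations like this let an analyst predict, for example, how response time must fall to keep the same number of requests in flight at a higher peak load.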

Computer graphics and visualization

Main article: Computer graphics (computer science)

Computer graphics is the study of digital visual contents, and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.

Computer security and cryptography

Main articles: Computer security and Cryptography

Computer security is a branch of computer technology whose objective includes protection of information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of hiding (encryption) and therefore deciphering (decryption) information. Modern cryptography is largely related to computer science, for many encryption and decryption algorithms are based on their computational complexity.
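The encryption/decryption pairing can be made concrete with byte-wise XOR against a random single-use key, the principle behind the one-time pad. A minimal sketch (the message is invented; real systems use vetted ciphers, not hand-rolled ones):

```python
import secrets

def xor_bytes(data, key):
    # XOR each data byte with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # random key as long as the message
ciphertext = xor_bytes(message, key)      # encryption
recovered = xor_bytes(ciphertext, key)    # decryption: XOR is its own inverse
print(recovered)  # b'attack at dawn'
```

Without the key, every plaintext of the same length is equally consistent with the ciphertext, which is why the one-time pad is information-theoretically secure when the key is random and never reused.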

Computational science

Computational science (or scientific computing) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. In practical use, it is typically the application of computer simulation and other forms of computation to problems in various scientific disciplines.

Subfields: numerical analysis, computational physics, computational chemistry, bioinformatics.

Computer networks

Main article: Computer network

This branch of computer science aims to manage networks between computers worldwide.

Concurrent, parallel and distributed systems

Main articles: Concurrency (computer science) and Distributed computing

Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi and the Parallel Random Access Machine model. A distributed system extends the idea of concurrency onto multiple computers connected through a network. Computers within the same distributed system have their own private memory, and information is often exchanged among themselves to achieve a common goal.
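The core hazard of interacting computations is unsynchronized access to shared state. A minimal sketch using Python's standard threading module, with a lock serializing updates to a shared counter (the thread and iteration counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Without the lock, the read-modify-write of `counter += 1` can interleave between threads and lose updates, which is exactly the kind of behavior models such as Petri nets and process calculi are designed to reason about.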

Databases

Main article: Database

A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
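The ideas of a database model and a query language can be sketched with Python's built-in sqlite3 module; the table and rows here are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("SICP", 1985), ("TAOCP", 1968), ("CLRS", 1990)])

# Declarative query: say what you want, not how to scan for it.
rows = conn.execute(
    "SELECT title FROM books WHERE year < 1990 ORDER BY year").fetchall()
print(rows)  # [('TAOCP',), ('SICP',)]
```

The database management system decides how to store, index and retrieve the data; the query only describes the desired result.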

Software engineering

Main article: Software engineering
See also: Computer programming

Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software; it doesn't just deal with the creation or manufacture of new software, but also its internal maintenance and arrangement. Both computer applications software engineers and computer systems software engineers are projected to be among the fastest growing occupations from 2008 to 2018.

"Electrical and computer engineering" redirects here. For contents about computer engineering, see Computer engineering.

Electrical engineers design complex power systems and electronic circuits.

Electrical engineering is a field of engineering that generally deals with the study and application of electricity, electronics, and electromagnetism. This field first became an identifiable occupation in the latter half of the 19th century after commercialization of the electric telegraph, the telephone, and electric power distribution and use. Subsequently, broadcasting and recording media made electronics part of daily life. The invention of the transistor, and later the integrated circuit, brought down the cost of electronics to the point where they can be used in almost any household object.

Electrical engineering has now subdivided into a wide range of subfields including electronics, digital computers, power engineering, telecommunications, control systems, radio-frequency engineering, signal processing, instrumentation, and microelectronics. The subject of electronic engineering is often treated as its own subfield, but it intersects with all the other subfields, including the power electronics of power engineering.

Electrical engineers typically hold a degree in electrical engineering or electronic engineering. Practicing engineers may have professional certification and be members of a professional body. Such bodies include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET).

Electrical engineers work in a very wide range of industries and the skills required are likewise variable. These range from basic circuit theory to the management skills required of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to a top-end analyzer to sophisticated design and manufacturing software.

Contents

1 History
 o 1.1 19th century
 o 1.2 More modern developments
 o 1.3 Solid-state transistors
2 Subdisciplines
 o 2.1 Power
 o 2.2 Control
 o 2.3 Electronics
 o 2.4 Microelectronics
 o 2.5 Signal processing
 o 2.6 Telecommunications
 o 2.7 Instrumentation
 o 2.8 Computers
 o 2.9 Related disciplines
3 Education
4 Practicing engineers
5 Tools and work
6 See also
7 Notes
8 References
9 Further reading
10 External links

History

Main article: History of electrical engineering

Electricity has been a subject of scientific interest since at least the early 17th century. The first electrical engineer was probably William Gilbert, who designed the versorium: a device that detected the presence of statically charged objects. He was also the first to draw a clear distinction between magnetism and static electricity and is credited with establishing the term electricity.[1] In 1775 Alessandro Volta's scientific experimentation produced the electrophorus, a device that generated a static electric charge, and by 1800 Volta had developed the voltaic pile, a forerunner of the electric battery.[2]

19th century

The discoveries of Michael Faraday formed the foundation of electric motor technology

However, it was not until the 19th century that research into the subject started to intensify. Notable developments in this century include the work of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor, of Michael Faraday, the discoverer of electromagnetic induction in 1831, and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in his treatise Electricity and Magnetism.[3]

Beginning in the 1830s, efforts were made to apply electricity to practical use in the telegraph. By the end of the 19th century the world had been forever changed by the rapid communication made possible by the engineering development of land-lines, submarine cables, and, from about 1890, wireless telegraphy.

Practical applications and advances in such fields created an increasing need for standardized units of measure. They led to the international standardization of the units volt, ampere, coulomb, ohm, farad, and henry. This was achieved at an international conference in Chicago in 1893.[4] The publication of these standards formed the basis of future advances in standardisation in various industries, and in many countries the definitions were immediately recognised in relevant legislation.[5]

During these years, the study of electricity was largely considered to be a subfield of physics. It was not until about 1885 that universities and institutes of technology such as the Massachusetts Institute of Technology (MIT) and Cornell University started to offer bachelor's degrees in electrical engineering. The Darmstadt University of Technology founded the first department of electrical engineering in the world in 1882. In that same year, under Professor Charles Cross, MIT began offering the first option of electrical engineering within its physics department.[6] In 1883, Darmstadt University of Technology and Cornell University introduced the world's first bachelor's degree courses of study in electrical engineering, and in 1885 University College London founded the first chair of electrical engineering in Great Britain.[7] The University of Missouri established the first department of electrical engineering in the United States in 1886.[8] Several other schools soon followed suit, including Cornell and the Georgia School of Technology in Atlanta, Georgia.

Pioneers of electric power (image captions): Thomas Edison, electric light and (DC) power supply networks; Károly Zipernowsky, Ottó Bláthy and Miksa Déri, the ZBD transformer; William Stanley, Jr., transformers; Galileo Ferraris, electrical theory and the induction motor; Nikola Tesla, practical polyphase (AC) and induction motor designs; Mikhail Dolivo-Dobrovolsky, standard 3-phase (AC) systems; Charles Proteus Steinmetz, AC mathematical theories for engineers; Oliver Heaviside, theoretical models for electric circuits.

During these decades use of electrical engineering increased dramatically. In 1882, Thomas Edison switched on the world's first large-scale electric power network that provided 110 volts — direct current (DC) — to 59 customers on Manhattan Island in New York City. In 1884, Sir Charles Parsons invented the steam turbine, allowing for more efficient electric power generation. Alternating current, with its ability to transmit power more efficiently over long distances via the use of transformers, developed rapidly in the 1880s and 1890s with transformer designs by Károly Zipernowsky, Ottó Bláthy and Miksa Déri (later called ZBD transformers), Lucien Gaulard, John Dixon Gibbs and William Stanley, Jr. Practical AC motor designs including induction motors were independently invented by Galileo Ferraris and Nikola Tesla and further developed into a practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown.[9] Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering.[10][11] The spread in the use of AC set off in the United States what has been called the War of Currents between a George Westinghouse-backed AC system and a Thomas Edison-backed DC power system, with AC being adopted as the overall standard.[12]

More modern developments

Guglielmo Marconi, known for his pioneering work on long-distance radio transmission

During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation, including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them. In 1895 Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these "Hertzian waves" into a purpose-built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted the wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland, a distance of 2,100 miles (3,400 km).[13]

In 1897, Karl Ferdinand Braun introduced the cathode ray tube as part of an oscilloscope, a crucial enabling technology for electronic television.[14] John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode.[15]

In 1920 Albert Hull developed the magnetron which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer.[16][17] In 1934 the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936.[18]

In 1941 Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. In 1943 Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer.[19] In 1946 the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives, including the Apollo program which culminated in landing astronauts on the Moon.[20]

Solid-state transistors

The invention of the transistor in late 1947 by William B. Shockley, John Bardeen, and Walter Brattain of the Bell Telephone Laboratories opened the door for more compact devices and led to the development of the integrated circuit in 1958 by Jack Kilby and independently in 1959 by Robert Noyce.[21] Starting in 1968, Ted Hoff and a team at the Intel Corporation invented the first commercial microprocessor, which foreshadowed the personal computer. The Intel 4004 was a four-bit processor released in 1971, but in 1973 the Intel 8080, an eight-bit processor, made the first personal computer, the Altair 8800, possible.[22]

Subdisciplines

Electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes certain fields, such as electronic engineering and computer engineering, are considered separate disciplines in their own right.

Power

Main article: Power engineering

Power pole

Power engineering deals with the generation, transmission and distribution of electricity as well as the design of a range of related devices.[23] These include transformers, electric generators, electric motors, high voltage engineering, and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it.[24] Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems. The future includes satellite-controlled power systems, with feedback in real time to prevent power surges and blackouts.

Control

Main article: Control engineering

Control systems play a critical role in space flight.

Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner.[25] To implement such controllers, electrical engineers may use electrical circuits, digital signal processors, microcontrollers and PLCs (Programmable Logic Controllers). Control engineering has a wide range of applications, from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.[26] It also plays an important role in industrial automation.

Control engineers often utilize feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.[27]
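The cruise-control feedback loop can be sketched with a toy proportional controller: measure the deviation from a set-point and apply a correction proportional to it. The gain, drag and set-point below are invented numbers for a grossly simplified vehicle model, not a real controller design.

```python
setpoint = 25.0   # desired speed, m/s
speed = 20.0      # current speed, m/s
gain = 0.5        # hypothetical proportional gain
drag = 0.1        # speed lost to drag each time step (crude model)

for _ in range(100):
    error = setpoint - speed        # feedback: measured deviation
    speed += gain * error - drag    # throttle action proportional to the error

print(round(speed, 2))  # settles just below 25 (steady-state error = drag/gain)
```

A pure proportional controller leaves a small steady-state error (here drag/gain = 0.2 m/s); control theory studies such behavior and motivates the integral and derivative terms of full PID controllers.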

Electronics

Main article: Electronic engineering

Electronic components

Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes and transistors to achieve a particular functionality.[24] The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit. Another example (of a pneumatic signal conditioner) is shown in the adjacent photograph.

Prior to the Second World War, the subject was commonly known as radio engineering and was basically restricted to aspects of communications and radar, commercial radio and early television.[24] Later, in the postwar years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering.

Before the invention of the integrated circuit in 1959,[28] electronic circuits were constructed from discrete components that could be manipulated by humans. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors,[29] into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today.

Microelectronics

Main article: Microelectronics

Microprocessor

Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as general electronic components.[30] The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors, etc.) can be created at a microscopic level. Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below-100 nm processing having been standard since about 2002.[31]

Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and materials science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.[32]

Signal processing

Main article: Signal processing

A Bayer filter on a CCD requires signal processing to get a red, green, and blue value at each pixel.

Signal processing deals with the analysis and manipulation of signals.[33] Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals.[34]
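One of the simplest digital signal-processing operations is a moving-average filter, a crude low-pass filter that smooths out sample-to-sample noise. A minimal sketch (the noisy samples below are invented):

```python
def moving_average(signal, width=3):
    # Simple FIR low-pass: replace each sample with the mean of a short window.
    out = []
    for i in range(len(signal) - width + 1):
        out.append(sum(signal[i:i + width]) / width)
    return out

noisy = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0]
print([round(x, 2) for x in moving_average(noisy)])  # [1.0, 1.03, 0.93, 1.0]
```

Production DSP uses carefully designed filter coefficients rather than a flat window, but the structure, a weighted sum over a sliding window of samples, is the same.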

Signal processing is a very mathematically oriented and intensive area forming the core of digital signal processing, and it is rapidly expanding with new applications in every field of electrical engineering, such as communications, control, radar, audio engineering, broadcast engineering, power electronics and biomedical engineering, as many existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems.

DSP processor ICs are found in every type of modern electronic system and product, including SDTV and HDTV sets,[35] radios and mobile communication devices, Hi-Fi audio equipment, Dolby noise reduction algorithms, GSM mobile phones, mp3 multimedia players, camcorders and digital cameras, automobile control systems, noise-cancelling headphones, digital spectrum analyzers, intelligent missile guidance, radar, GPS-based cruise control systems, and all kinds of image processing, video processing, audio processing and speech processing systems.[36]

Telecommunications

Main article: Telecommunications engineering

Satellite dishes are a crucial component in the analysis of satellite information.

Telecommunications engineering focuses on the transmission of information across a channel such as a coax cable, optical fiber or free space.[37] Transmissions across free space require information to be encoded in a carrier wave to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation.[38] The choice of modulation affects the cost and performance of a system, and these two factors must be balanced carefully by the engineer.
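Amplitude modulation can be sketched in a few lines: the carrier's amplitude is made to follow the message signal. The frequencies, modulation depth and sample rate below are invented toy values, far from real broadcast parameters.

```python
import math

def am_modulate(t, carrier_hz=1000.0, msg_hz=50.0, depth=0.5):
    # AM: the envelope (1 + depth * message) scales the carrier wave.
    message = math.sin(2 * math.pi * msg_hz * t)
    return (1 + depth * message) * math.sin(2 * math.pi * carrier_hz * t)

# One second of signal sampled at 8 kHz.
samples = [am_modulate(n / 8000.0) for n in range(8000)]
print(round(max(samples), 2))  # peak amplitude approaches 1 + depth = 1.5
```

The modulation depth trades intelligibility against transmitter power and distortion, a small instance of the cost-versus-performance balance mentioned above.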

Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption, as this is closely related to their signal strength.[39][40] If the signal strength of a transmitter is insufficient, the signal's information will be corrupted by noise.

Instrumentation

Main article: Instrumentation engineering

Flight instruments provide pilots with the tools to control aircraft analytically.

Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow and temperature.[41] The design of such instrumentation requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.[42]

Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant.[43] For this reason, instrumentation engineering is often viewed as the counterpart of control engineering.

Computers

Main article: Computer engineering

Supercomputers are used in fields as diverse as computational biology and geographic information systems.

Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware, the design of PDAs, tablets and supercomputers, or the use of computers to control an industrial plant.[44] Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline.[45] Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of devices including video game consoles and DVD players.

Related disciplines

The Bird VIP Infant ventilator

Mechatronics is an engineering discipline which deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems,[46] heating, ventilation and air-conditioning systems,[47] and various subsystems of aircraft and automobiles.[48]

The term mechatronics is typically used to refer to macroscopic systems but futurists have predicted the emergence of very small electromechanical devices. Already such small devices, known as Microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy,[49] in digital projectors to create sharper images and in inkjet printers to create nozzles for high definition printing. In the future it is hoped the devices will help build tiny implantable medical devices and improve optical communication.[50]

Biomedical engineering is another related discipline, concerned with the design of medical equipment. This includes fixed equipment such as ventilators, MRI scanners [51] and electrocardiograph monitors as well as mobile equipment such as cochlear implants, artificial pacemakers and artificial hearts.

Aerospace engineering and robotics are also related disciplines; recent examples include electric propulsion and ion propulsion.

Education

Main article: Education and training of electrical and electronics engineers

Oscilloscope

Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electrical engineering technology,[52] or electrical and electronic engineering.[53][54] The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering.[55] Initially such topics cover most, if not all, of the subdisciplines of electrical engineering. At some schools, the students can then choose to emphasize one or more subdisciplines towards the end of their courses of study.

Typical electrical engineering diagram used as a troubleshooting tool

At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered.[56]

Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (M.Eng./M.Sc.), a Master of Engineering Management, a Doctor of Philosophy (Ph.D.) in Engineering, an Engineering Doctorate (Eng.D.), or an Engineer's degree. The master's and engineer's degrees may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia. In the United Kingdom and some other European countries, Master of Engineering is often considered to be an undergraduate degree of slightly longer duration than the Bachelor of Engineering rather than postgraduate.[57]

Practicing engineers

Belgian electrical engineers inspecting the rotor of a 40,000 kilowatt turbine of the General Electric Company in New York City

In most countries, a bachelor's degree in engineering represents the first step towards professional certification and the degree program itself is certified by a professional body.[58] After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered Engineer or Incorporated Engineer (in India, Pakistan, the United Kingdom, Ireland and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).

The IEEE corporate office is on the 17th floor of 3 Park Avenue in New York City

The advantages of certification vary depending upon location. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients".[59] This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act.[60] In other countries, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion.[61] In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails, he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations such as building codes and legislation pertaining to environmental law.

Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually.[62] The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe.[63][64] Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field and a habit of continued learning are therefore essential to maintaining proficiency. MIET (Member of the Institution of Engineering and Technology) is recognised in Europe as an electrical and computer (technology) engineer.[65]

In Australia, Canada and the United States electrical engineers make up around 0.25% of the labor force (see note).

Tools and work

From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunication systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances or the electrical control of industrial machinery.[66]

Satellite communications is typical of what electrical engineers work on.

Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electrical systems. Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others.

The Shadow robot hand system

Although most electrical engineers will understand basic circuit theory (that is, the interactions of elements such as resistors, capacitors, diodes, transistors and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunication systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy and the ability to understand the technical language and concepts that relate to electrical engineering.[67]

A laser bouncing down an acrylic rod, illustrating the total internal reflection of light in a multi-mode optical fiber.

A wide range of instrumentation is used by electrical engineers. For simple control circuits and alarms, a basic multimeter measuring voltage, current and resistance may suffice. Where time-varying signals need to be studied, the oscilloscope is also a ubiquitous instrument. In RF engineering and high frequency telecommunications, spectrum analyzers and network analyzers are used. In some disciplines safety can be a particular concern with instrumentation. For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids.[68] Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low voltage equivalents, safety and calibration issues make them very different.[69] Many disciplines of electrical engineering use tests specific to their discipline. Audio electronics engineers use audio test sets consisting of a signal generator and a meter, principally to measure level but also other parameters such as harmonic distortion and noise. Likewise, information technology engineers have their own test sets, often specific to a particular data format, and the same is true of television broadcasting.

Radome at the Misawa Air Base Misawa Security Operations Center, Misawa, Japan

For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules.[70] Many senior engineers manage a team of technicians or other engineers and for this reason project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important.

The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine lab environment of a fabrication plant, the offices of a consulting firm or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers and other engineers.[71]

Electrical engineering has an intimate relationship with the physical sciences. For instance the physicist Lord Kelvin played a major role in the engineering of the first transatlantic telegraph cable.[72] Conversely, the engineer Oliver Heaviside produced major work on the mathematics of transmission on telegraph cables.[73] Electrical engineers are often required on major science projects. For instance, large particle accelerators such as CERN need electrical engineers to deal with many aspects of the project: from the power distribution, to the instrumentation, to the manufacture and installation of the superconducting electromagnets.[74][75]

"Cash machine" redirects here. For the Hard-Fi song, see Cash Machine.

An NCR Personas 75-Series interior, multi-function ATM in the United States

Smaller indoor ATMs dispense money inside convenience stores and other busy areas, such as this off-premises Wincor Nixdorf mono-function ATM in Sweden.

An automated teller machine or automatic teller machine[1][2][3] (ATM, American, Australian, Malaysian, Singaporean, Indian, Maldivian, Hiberno, Philippine and Sri Lankan English), also known as an automated banking machine (ABM, Canadian English), cash machine, cashpoint, cashline, minibank, or colloquially hole in the wall (British and South African English), is an electronic telecommunications device that enables the customers of a financial institution to perform financial transactions, particularly cash withdrawal, without the need for a human cashier, clerk or bank teller.

On most modern ATMs, the customer is identified by inserting a plastic ATM card with a magnetic stripe or a plastic smart card with a chip that contains a unique card number and some security information such as an expiration date or CVVC (CVV). Authentication is provided by the customer entering a personal identification number (PIN).
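
The PIN step can be sketched conceptually. In a real ATM the PIN never reaches application software in the clear: it is formatted into an encrypted PIN block (ISO 9564) and verified inside a hardware security module. The salted-hash comparison below is a stand-in for that process, not the actual scheme; the key and card number are hypothetical:

```python
import hashlib
import hmac

# Conceptual sketch only: real ATMs verify PINs inside tamper-resistant
# hardware, never by comparing values in ordinary software like this.
def derive_pin_verification_value(pin: str, card_number: str, key: bytes) -> str:
    """Derive a value binding the PIN to the card (illustrative)."""
    return hmac.new(key, (card_number + pin).encode(), hashlib.sha256).hexdigest()

def verify_pin(entered_pin: str, card_number: str, key: bytes, stored: str) -> bool:
    candidate = derive_pin_verification_value(entered_pin, card_number, key)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

key = b"issuer-secret-key"  # hypothetical issuer key
stored = derive_pin_verification_value("1234", "4000123412341234", key)
print(verify_pin("1234", "4000123412341234", key, stored))  # True
print(verify_pin("9999", "4000123412341234", key, stored))  # False
```

The essential idea carried over from the real system is that the issuer stores and compares a derived verification value, never the PIN itself.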

Using an ATM, customers can access their bank deposit or credit accounts in order to make a variety of transactions such as withdrawing cash, checking balances, or crediting mobile phones. If the currency being withdrawn from the ATM is different from that in which the bank account is denominated, the money will be converted at an official exchange rate. Thus, ATMs often provide the best possible exchange rates for foreign travellers, and are widely used for this purpose.[4] However, foreign currency deposited at modern ATMs is typically not processed, or may be rejected, as it is not expected by the machine.

Contents

1 History
  1.1 Docutel United States 1969
  1.2 Continued Improvements
2 Location
3 Financial networks
4 Global use
5 Hardware
6 Software
7 Security
  7.1 Physical
  7.2 Transactional secrecy and integrity
  7.3 Customer identity integrity
  7.4 Device operation integrity
  7.5 Customer security
  7.6 Uses
8 Reliability
9 Fraud
  9.1 Card fraud
10 Related devices
11 In popular culture
12 See also
13 References
14 Further reading
15 External links

History

An old Nixdorf ATM

The idea of out-of-hours cash distribution developed from bankers' needs in Asia (Japan), Europe (Sweden and the United Kingdom) and North America (the United States).[5][6] Little is known of the Japanese device. In the US patent record, Luther George Simjian has been credited with developing a "prior art device": specifically, his 132nd patent (US3079603), first filed on 30 June 1960 (and granted 26 February 1963). The roll-out of this machine, called Bankograph, was delayed by a couple of years, due in part to Simjian's Reflectone Electronics Inc. being acquired by Universal Match Corporation.[7] An experimental Bankograph was installed in New York City in 1961 by the City Bank of New York, but removed after six months due to the lack of customer acceptance. The Bankograph was an automated envelope deposit machine (accepting coins, cash and cheques) and did not have cash dispensing features.[8][9]

Actor Reg Varney using the world's first cash machine in Enfield Town, north London on 27 June 1967

It is widely accepted that the first ATM was put into use by Barclays Bank in its Enfield Town branch in north London, United Kingdom, on 27 June 1967.[10] This machine was inaugurated by English comedy actor Reg Varney.[11] This instance of the invention is credited to John Shepherd-Barron of printing firm De La Rue,[12] who was awarded an OBE in the 2005 New Year Honours.[13] This design used paper cheques issued by a teller or cashier, marked with carbon-14 for machine readability and security, which in a later model were matched with a personal identification number (PIN).[12][14] Shepherd-Barron stated: "It struck me there must be a way I could get my own money, anywhere in the world or the UK. I hit upon the idea of a chocolate bar dispenser, but replacing chocolate with cash."[12]

The Barclays-De La Rue machine (called De La Rue Automatic Cash System or DACS)[15] beat the Swedish savings banks' and a company called Metior's machine (a device called Bankomat) by a mere nine days, and Westminster Bank's Smith Industries-Chubb system (called Chubb MD2) by a month.[16] The online version of the Swedish machine is listed to have been operational on 6 May 1968, while claiming to be the first online cash machine in the world (ahead of a similar claim by IBM and Lloyds Bank in 1971).[17] The collaboration of a small start-up called Speytec and Midland Bank developed a fourth machine which was marketed after 1969 in Europe and the US by the Burroughs Corporation. The patent for this device (GB1329964) was filed in September 1969 (and granted in 1973) by John David Edwards, Leonard Perkins, John Henry Donald, Peter Lee Chappell, Sean Benjamin Newcombe & Malcom David Roe.

ATM of Sberbank in Tolyatti, Russia

Both the DACS and MD2 accepted only a single-use token or voucher, which was retained by the machine, while the Speytec worked with a card with a magnetic stripe on the back. They used principles including carbon-14 and low-coercivity magnetism in order to make fraud more difficult. The idea of a PIN stored on the card was developed by a British engineer working on the MD2 named James Goodfellow in 1965 (patent GB1197183 filed on 2 May 1966 with Anthony Davies). The essence of this system was that it enabled the verification of the customer with the debited account without human intervention. This patent is also the earliest instance of a complete "currency dispenser system" in the patent record. This patent was filed on 5 March 1968 in the US (US 3543904) and granted on 1 December 1970. It had a profound influence on the industry as a whole. Not only did future entrants into the cash dispenser market such as NCR Corporation and IBM licence Goodfellow's PIN system, but a number of later patents reference this patent as "Prior Art Device".[18]

On 9 January 1969, the Madrid edition of the ABC newspaper carried an article about the new Bancomat, a teller machine installed in downtown Madrid, Spain, by Banesto, dispensing 1,000-peseta bills (1 to 5 maximum). Each user had to enter a personal security key using a combination of the ten numeric buttons.[19]

In March of the same year, an ad with the instructions to use the Bancomat was published in the same newspaper.[20] The Bancomat was the first cash machine installed in Spain, and one of the first in Europe.

1969 ABC news report on the introduction of ATMs in Sydney, Australia. People could only receive AUS $25 at a time, and the bank card was sent back to the user at a later date.

Docutel United States 1969

After looking firsthand at the experiences in Europe, in 1968 the networked ATM was pioneered in the US, in Dallas, Texas, by Donald Wetzel, who was a department head at an automated baggage-handling company called Docutel. Recognised by the United States Patent Office for having invented the ATM are Kenneth S. Goldstein and John D. White, under US Patent # 3,662,343. Recognised by the United States Patent Office for having invented the ATM network are Fred J. Gentile and Jack Wu Chang, under US Patent # 3,833,885. On September 2, 1969, Chemical Bank installed the first ATM in the U.S. at its branch in Rockville Centre, New York. The first ATMs were designed to dispense a fixed amount of cash when a user inserted a specially coded card.[21] A Chemical Bank advertisement boasted "On Sept. 2 our bank will open at 9:00 and never close again."[22] Chemical's ATM, initially known as a Docuteller, was designed by Donald Wetzel and his company Docutel. Chemical executives were initially hesitant about the electronic banking transition given the high cost of the early machines. Additionally, executives were concerned that customers would resist having machines handling their money.[23]

In 1995, the Smithsonian National Museum of American History recognised Docutel and Wetzel as the inventors of the networked ATM.[24]

Continued Improvements

The first modern ATM was an IBM 2984 and came into use at Lloyds Bank, Brentwood High Street, Essex, England in December 1972. The IBM 2984 was designed at the request of Lloyds Bank. The 2984 Cash Issuing Terminal was the first true ATM, similar in function to today's machines and named by Lloyds Bank: Cashpoint; Cashpoint is still a registered trademark of Lloyds TSB in the UK. All were online and issued a variable amount which was immediately deducted from the account. A small number of 2984s were supplied to a US bank. A couple of well known historical models of ATMs include the IBM 3614, IBM 3624 and 473x series, Diebold 10xx and TABS 9000 series, NCR 1780 and earlier NCR 770 series.

The first switching system to enable shared automated teller machines between banks went into production operation on February 3, 1979 in Denver, Colorado, in an effort by Colorado National Bank of Denver and Kranzley and Company of Cherry Hill, New Jersey.[25]

The newest ATM at Royal Bank of Scotland allows customers to withdraw cash up to £100 without a card by inputting a six-digit code requested through their smartphones.[26]

Location

An ATM Encrypting PIN Pad (EPP) with German markings

ATM in Vatican City with menu in Latin

ATMs are placed not only near or inside the premises of banks, but also in locations such as shopping centers/malls, airports, grocery stores, petrol/gas stations, restaurants, or anywhere frequented by large numbers of people. There are two types of ATM installations: on- and off-premises. On-premises ATMs are typically more advanced, multi-function machines that complement a bank branch's capabilities, and are thus more expensive. Off-premises machines are deployed by financial institutions and Independent Sales Organisations (ISOs) where there is a simple need for cash, so they are generally cheaper single-function devices. In Canada, ATMs (also known there as ABMs) not operated by a financial institution are known as "white-label ABMs".

In the U.S., Canada and some Gulf countries, banks often have drive-thru lanes providing access to ATMs using an automobile.

Many ATMs have a sign above them, indicating the name of the bank or organisation owning the ATM and possibly including the list of ATM networks to which that machine is connected.

ATMs can also be found in railway stations and metro stations. In recent times, countries like India and some countries in Africa are installing ATMs in rural areas which are solar powered and do not require air conditioning.[27]

Financial networks

An ATM in the Netherlands. The logos of a number of interbank networks this ATM is connected to are shown.

Most ATMs are connected to interbank networks, enabling people to withdraw and deposit money from machines not belonging to the bank where they have their accounts, or in the countries where their accounts are held (enabling cash withdrawals in local currency). Some examples of interbank networks include NYCE, PULSE, PLUS, Cirrus, AFFN, Interac, Interswitch, STAR, LINK, MegaLink and BancNet.

ATMs rely on authorisation of a financial transaction by the card issuer or other authorising institution on a communications network. This is often performed through an ISO 8583 messaging system.
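
The shape of an ISO 8583 message can be sketched as follows: a message type indicator (MTI), a bitmap marking which numbered data elements are present, then the elements themselves. The field numbers below follow the standard's layout, but the encoding is simplified (real variable-length fields carry length prefixes, for example), so this is illustrative rather than wire-accurate:

```python
# Simplified ISO 8583-style message assembly (illustrative, not wire-accurate).
def build_bitmap(fields: dict) -> str:
    """64-bit primary bitmap as hex: bit n is set when field n is present."""
    bits = 0
    for n in fields:
        bits |= 1 << (64 - n)
    return f"{bits:016X}"

# MTI 0200 = financial transaction request (e.g. a cash withdrawal)
fields = {
    2: "4000123412341234",   # primary account number (hypothetical)
    3: "010000",             # processing code: cash withdrawal
    4: "000000005000",       # amount: 50.00 in minor units
    49: "840",               # currency code: USD
}
message = "0200" + build_bitmap(fields) + "".join(fields[n] for n in sorted(fields))
print(message)
```

The bitmap is what lets the receiving host parse a variable set of fields deterministically: it reads the MTI, then the bitmap, then only the fields the bitmap declares.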

Many banks charge ATM usage fees. In some cases, these fees are charged solely to users who are not customers of the bank where the ATM is installed; in other cases, they apply to all users.

In order to allow a more diverse range of devices to attach to their networks, some interbank networks have passed rules expanding the definition of an ATM to be a terminal that either has the vault within its footprint or utilises the vault or cash drawer within the merchant establishment, which allows for the use of a scrip cash dispenser.

A Diebold 1063ix with a dial-up modem visible at the base

ATMs typically connect directly to their host or ATM Controller on either ADSL or dial-up modem over a telephone line, or directly on a leased line. Leased lines are preferable to plain old telephone service (POTS) lines because they require less time to establish a connection. Less-trafficked machines will usually rely on a dial-up modem on a POTS line rather than using a leased line, since a leased line may be comparatively more expensive to operate compared to a POTS line. That dilemma may be solved as high-speed Internet VPN connections become more ubiquitous. Common lower-level layer communication protocols used by ATMs to communicate back to the bank include SNA over SDLC, TC500 over Async, X.25, and TCP/IP over Ethernet.

In addition to methods employed for transaction security and secrecy, all communications traffic between the ATM and the Transaction Processor may also be encrypted using methods such as SSL.[28]

Global use

ATMs at the railway station in Poznań

There are no hard international or government-compiled numbers totaling the complete number of ATMs in use worldwide. Estimates developed by ATMIA place the number of ATMs in use currently at over 2.2 million, or approximately 1 ATM per 3,000 people in the world.[29]

To simplify the analysis of ATM usage around the world, financial institutions generally divide the world into seven regions, due to the penetration rates, usage statistics, and features deployed. Four regions (USA, Canada, Europe, and Japan) have high numbers of ATMs per million people.[30][31] Despite the large number of ATMs, there is additional demand for machines in the Asia/Pacific area as well as in Latin America.[32][33] ATMs have yet to reach high numbers in the Near East and Africa.[34]

One of the world's most northerly installed ATMs is located at Longyearbyen, Svalbard, Norway.

The world's most southerly installed ATM is located at McMurdo Station, in New Zealand's Ross Dependency in Antarctica, and has been there since 1997.[35] There are two ATMs at McMurdo, owned by Wells Fargo,[36] though only one is active at any time; they are serviced once every two years by NCR.[37]

According to international statistics, the highest installed ATM in the world is located at Nathu La Pass, in India, installed by the Indian Axis Bank at 4,023 metres (13,199 ft).[38] According to Mainland Chinese media and CPC statistics, the highest installed ATM in the world is located in Nagchu County, Tibet, China, at 4,500 metres, allegedly installed by the Agricultural Bank of China.[39][unreliable source?][40][unreliable source?]

Israel has the world's lowest installed ATM at Ein Bokek at the Dead Sea, installed independently by a grocery store at 421 metres below sea level.[41]

While ATMs are ubiquitous on modern cruise ships, ATMs can also be found on some US Navy ships.[42]

Welcome message displayed on the world's most northerly ATM located in the post office at Longyearbyen

Hardware

A block diagram of an ATM

An ATM is typically made up of the following devices:

- CPU (to control the user interface and transaction devices)
- Magnetic or chip card reader (to identify the customer)
- PIN pad EPP4 (similar in layout to a touch-tone or calculator keypad), manufactured as part of a secure enclosure
- Secure cryptoprocessor, generally within a secure enclosure
- Display (used by the customer for performing the transaction)
- Function key buttons (usually close to the display) or a touchscreen (used to select the various aspects of the transaction)
- Record printer (to provide the customer with a record of the transaction)
- Vault (to store the parts of the machinery requiring restricted access)
- Housing (for aesthetics and to attach signage to)
- Sensors and indicators

Due to heavier computing demands and the falling price of personal computer–like architectures, ATMs have moved away from custom hardware architectures using microcontrollers or application-specific integrated circuits and have adopted the hardware architecture of a personal computer, such as USB connections for peripherals, Ethernet and IP communications, and use personal computer operating systems.

Business owners often lease ATM terminals from ATM service providers; however, based on the economies of scale, the price of equipment has dropped to the point where many business owners are simply paying for ATMs using a credit card.

New ADA voice and text-to-speech guidelines imposed in 2010, but required by March 2012,[43] have forced many ATM owners to either upgrade non-compliant machines or, if they are not upgradable, dispose of them and purchase new compliant equipment. This has created an avenue for hackers and thieves to obtain ATM hardware at junkyards from improperly disposed decommissioned ATMs.[44]

Two Loomis employees refilling an ATM at the Downtown Seattle REI

The vault of an ATM is within the footprint of the device itself and is where items of value are kept. Scrip cash dispensers do not incorporate a vault.

Mechanisms found inside the vault may include:

- Dispensing mechanism (to provide cash or other items of value)
- Deposit mechanism, including a check processing module and bulk note acceptor (to allow the customer to make deposits)
- Security sensors (magnetic, thermal, seismic, gas)
- Locks (to ensure controlled access to the contents of the vault)
- Journaling systems; many are electronic (a sealed flash memory device based on in-house standards) or a solid-state device (an actual printer) which accrues all records of activity including access timestamps, number of notes dispensed, etc. This is considered sensitive data and is secured in similar fashion to the cash, as it is a similar liability.

ATM vaults are supplied by manufacturers in several grades. Factors influencing vault grade selection include cost, weight, regulatory requirements, ATM type, operator risk avoidance practices and internal volume requirements.[45] Industry standard vault configurations include Underwriters Laboratories UL-291 "Business Hours" and Level 1 Safes,[46] RAL TL-30 derivatives,[47] and CEN EN 1143-1 - CEN III and CEN IV.[48][49]

ATM manufacturers recommend that an ATM vault be attached to the floor to prevent theft,[50] though there is a record of a theft conducted by tunnelling into an ATM floor.[citation needed]

Software

With the migration to commodity personal computer hardware, standard commercial "off-the-shelf" operating systems and programming environments can be used inside ATMs. Typical platforms previously used in ATM development include RMX or OS/2.

A Wincor Nixdorf ATM running Windows 2000.

Today the vast majority of ATMs worldwide use a Microsoft Windows operating system, primarily Windows XP Professional or Windows XP Embedded.[citation needed] A small number of deployments may still be running older versions of Windows OS such as Windows NT, Windows CE, or Windows 2000.

There is a computer industry security view that general-purpose desktop operating systems pose greater risks as operating systems for cash dispensing machines than other types of operating systems, such as (secure) real-time operating systems (RTOS). RISKS Digest has many articles about cash machine operating system vulnerabilities.[51]

Linux is also finding some reception in the ATM marketplace. An example of this is Banrisul, the largest bank in the south of Brazil, which has replaced the MS-DOS operating systems in its ATMs with Linux. Banco do Brasil is also migrating ATMs to Linux. Indian-based Vortex Engineering is manufacturing ATMs which operate only with Linux. Common application layer transaction protocols, such as Diebold 91x (911 or 912) and NCR NDC or NDC+, provide emulation of older generations of hardware on newer platforms, with incremental extensions made over time to address new capabilities, although companies like NCR continuously improve these protocols, issuing newer versions (e.g. NCR's AANDC v3.x.y, where x.y are subversions). Most major ATM manufacturers provide software packages that implement these protocols. Newer protocols such as IFX have yet to find wide acceptance by transaction processors.[52]

With the move to a more standardised software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. WOSA/XFS, now known as CEN XFS (or simply XFS), provides a common API for accessing and manipulating the various devices of an ATM. J/XFS is a Java implementation of the CEN XFS API.

While the perceived benefit of XFS is similar to Java's "write once, run anywhere" mantra, different ATM hardware vendors often have different interpretations of the XFS standard. The result of these differences in interpretation is that ATM applications typically use middleware to even out the differences between various platforms.

With the onset of Windows operating systems and XFS on ATMs, software applications have the ability to become more intelligent. This has created a new breed of ATM applications commonly referred to as programmable applications. These types of applications allow for an entirely new host of applications in which the ATM terminal can do more than only communicate with the ATM switch. It is now empowered to connect to other content servers and video banking systems.

Notable ATM software that operates on XFS platforms includes Triton PRISM, Diebold Agilis EmPower, NCR APTRA Edge, Absolute Systems AbsoluteINTERACT, KAL Kalignite Software Platform, Phoenix Interactive VISTAatm, Wincor Nixdorf ProTopas, Euronet EFTS and Intertech inter-ATM.

With the move of ATMs to industry-standard computing environments, concern has risen about the integrity of the ATM's software stack.[53]

Security

Main article: Security of automated teller machines

Security, as it relates to ATMs, has several dimensions. ATMs also provide a practical demonstration of a number of security systems and concepts operating together and how various security concerns are dealt with.

Physical


A Wincor Nixdorf Procash 2100xe Frontload that was opened with an angle grinder

An automated teller machine in Dezful, in the southwest of Iran

Early ATM security focused on making the ATMs invulnerable to physical attack; they were effectively safes with dispenser mechanisms. A number of attacks on ATMs resulted, with thieves attempting to steal entire ATMs by ram-raiding.[54] Since the late 1990s, criminal groups operating in Japan have improved on ram-raiding by stealing a truck loaded with heavy construction machinery and using it to demolish or uproot an entire ATM and any housing in order to steal its cash.[55]

Another attack method, plofkraak, is to seal all openings of the ATM with silicone and fill the vault with a combustible gas, or to place an explosive inside, attached to, or near the ATM. The gas or explosive is ignited and the vault is opened or distorted by the force of the resulting explosion, allowing the criminals to break in.[56] This type of theft has occurred in the Netherlands, Belgium, France, Denmark, Germany and Australia.[57][58] These attacks can be prevented by gas explosion prevention devices, also known as gas suppression systems. These systems use explosive gas detection sensors to detect explosive gas and to neutralise it by releasing a special explosion suppression chemical which changes the composition of the explosive gas and renders it ineffective.

Several attacks in the UK (at least one of which was successful) have involved digging a concealed tunnel under the ATM and cutting through the reinforced base to remove the money.[59]

Modern ATM physical security, as with other modern money-handling security, concentrates on denying a thief the use of the money inside the machine, by using different types of Intelligent Banknote Neutralisation Systems.

A common method is to simply rob the staff filling the machine with money. To avoid this, the schedule for filling them is kept secret, varying and random. The money is often kept in cassettes, which will dye the money if incorrectly opened.

Transactional secrecy and integrity

A Triton brand ATM with a dip style card reader and a triple DES keypad

The security of ATM transactions relies mostly on the integrity of the secure cryptoprocessor: the ATM often uses general commodity components that sometimes are not considered to be "trusted systems".

Encryption of personal information, required by law in many jurisdictions, is used to prevent fraud. Sensitive data in ATM transactions are usually encrypted with DES, but transaction processors now usually require the use of Triple DES.[60] Remote Key Loading techniques may be used to ensure the secrecy of the initialisation of the encryption keys in the ATM. Message Authentication Codes (MACs) or partial MACs may also be used to ensure messages have not been tampered with while in transit between the ATM and the financial network.
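As a sketch of the MAC idea only (real ATM networks use DES/Triple DES-based MACs defined by banking standards, not this construction), the Python standard library's hmac module shows how a shared secret lets the receiver detect tampering in transit. The key and message contents here are illustrative, not from any ATM protocol:

```python
import hashlib
import hmac

# Shared secret between terminal and host (illustrative value)
KEY = b"shared-terminal-key"

def mac(message: bytes) -> bytes:
    # HMAC-SHA256 stands in for the DES-based MACs used in practice
    return hmac.new(KEY, message, hashlib.sha256).digest()

request = b"WITHDRAW account=123 amount=40.00"
tag = mac(request)

# Receiver recomputes the MAC and compares in constant time
assert hmac.compare_digest(tag, mac(request))

# Any change to the message produces a different MAC
tampered = b"WITHDRAW account=123 amount=400.00"
assert not hmac.compare_digest(tag, mac(tampered))
print("MAC verified; tampering detected")
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` can leak information about the correct tag through timing.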

Customer identity integrity

A BTMU ATM with a palm scanner (to the right of the screen)

There have also been a number of incidents of fraud by man-in-the-middle attacks, where criminals have attached fake keypads or card readers to existing machines. These have then been used to record customers' PINs and bank card information in order to gain unauthorised access to their accounts. Various ATM manufacturers have put in place countermeasures to protect the equipment they manufacture from these threats.[61][62]

Alternative methods to verify cardholder identities have been tested and deployed in some countries, such as finger and palm vein patterns,[63] iris, and facial recognition technologies. Cheaper mass-produced equipment that detects the presence of foreign objects on the front of ATMs has been developed and is being installed in machines globally; current tests have shown 99% detection success for all types of skimming devices.[64]

Device operation integrity


ATMs that are exposed to the outside must be vandal- and weather-resistant

Openings on the customer-side of ATMs are often covered by mechanical shutters to prevent tampering with the mechanisms when they are not in use. Alarm sensors are placed inside the ATM and in ATM servicing areas to alert their operators when doors have been opened by unauthorised personnel.

To protect against hackers, ATMs have a built-in firewall. Once the firewall has detected malicious attempts to break into the machine remotely, the firewall locks down the machine.[65]

Rules are usually set by the government or ATM operating body that dictate what happens when integrity systems fail. Depending on the jurisdiction, a bank may or may not be liable when an attempt is made to dispense a customer's money from an ATM and the money either gets outside of the ATM's vault, or was exposed in a non-secure fashion, or they are unable to determine the state of the money after a failed transaction.[66]

Customers have often commented that it is difficult to recover money lost in this way, and recovery is often complicated by the policies regarding suspicious activities typical of the criminal element.[67]

Customer security


Dunbar Armored ATM Techs watching over ATMs that have been installed in a van

In some countries, multiple security cameras and security guards are a common feature.[68] In the United States, the New York State Comptroller's Office has advised the New York State Department of Banking to have more thorough safety inspections of ATMs in high crime areas.[69]

Consultants of ATM operators assert that the issue of customer security should receive more focus from the banking industry;[70] it has been suggested that efforts are now more concentrated on the preventive measure of deterrent legislation than on the problem of ongoing forced withdrawals.[71]

At least as far back as July 30, 1986, consultants of the industry have advised the adoption of an emergency PIN system for ATMs, where the user is able to send a silent alarm in response to a threat.[72] Legislative efforts to require an emergency PIN system have appeared in Illinois,[73] Kansas[74] and Georgia,[75] but none have succeeded yet. In January 2009, Senate Bill 1355 was proposed in the Illinois Senate to revisit the issue of the reverse emergency PIN system.[76] The bill is again supported by the police and opposed by the banking lobby.[77]

In 1998, three towns outside Cleveland, Ohio, in response to an ATM crime wave, adopted ATM Consumer Security Legislation requiring that an emergency telephone number switch be installed at all outside ATMs within their jurisdiction. In the wake of an ATM murder in Sharon Hill, Pennsylvania, the City Council of Sharon Hill passed an ATM Consumer Security Bill as well. As of July 2009, ATM Consumer Security Legislation was pending in New York, New Jersey, and Washington D.C.


In China and elsewhere, many efforts to promote security have been made. On-premises ATMs are often located inside the bank's lobby, which may be accessible 24 hours a day. These lobbies have extensive security camera coverage, a courtesy telephone for consulting with the bank staff, and a security guard on the premises. Bank lobbies that are not guarded 24 hours a day may also have secure doors that can only be opened from outside by swiping the bank card against a wall-mounted scanner, allowing the bank to identify which card enters the building. Most ATMs will also display on-screen safety warnings and may also be fitted with convex mirrors above the display, allowing the user to see what is happening behind them.

As of 2013, the only available estimate of the extent of ATM-connected homicides is that they range from 500 to 1,000 per year in the US, covering only cases where the victim had an ATM card and the card was used by the killer after the known time of death.[78]

Uses

Two NCR Personas 84 ATMs at a bank in Jersey dispensing two types of pound sterling banknotes: Bank of England on the left, and States of Jersey on the right

Although ATMs were originally developed as just cash dispensers, they have evolved to include many other bank-related functions:

Paying routine bills, fees, and taxes (utilities, phone bills, social security, legal fees, taxes, etc.)

Printing bank statements
Updating passbooks
Cash advances
Cheque Processing Module
Paying (in full or partially) the credit balance on a card linked to a specific current account


Transferring money between linked accounts (such as transferring between checking and savings accounts)

Deposit currency recognition, acceptance, and recycling[79][80]

In some countries, especially those which benefit from a fully integrated cross-bank ATM network (e.g. Multibanco in Portugal), ATMs include many functions which are not directly related to the management of one's own bank account, such as:

Gold vending ATM in New York City

Loading monetary value into stored value cards
Adding pre-paid cell phone / mobile phone credit
Purchasing:
o Postage stamps
o Lottery tickets
o Train tickets
o Concert tickets
o Movie tickets
o Shopping mall gift certificates
o Gold[81]

Donating to charities[82]

Increasingly banks are seeking to use the ATM as a sales device to deliver pre-approved loans and targeted advertising, using products such as ITM (the Intelligent Teller Machine) from Aptra Relate from NCR.[83] ATMs can also act as an advertising channel for other companies.[84]


A South Korean ATM with mobile bank port and bar code reader

However, several different technologies on ATMs have not yet reached worldwide acceptance, such as:

Videoconferencing with human tellers, known as video tellers[85]

Biometrics, where authorisation of transactions is based on the scanning of a customer's fingerprint, iris, face, etc.[86][87][88]

Cheque/Cash Acceptance, where the ATM accepts and recognises cheques and/or currency without using envelopes.[89] This is expected to grow in importance in the US through Check 21 legislation.

Bar code scanning[90]
On-demand printing of "items of value" (such as movie tickets, traveler's cheques, etc.)
Dispensing additional media (such as phone cards)
Co-ordination of ATMs with mobile phones[91]

Integration with non-banking equipment[92][93]

Games and promotional features[94]

CRM at the ATM

In Canada, ATMs are called guichets automatiques in French and sometimes "bank machines" in English. The Interac shared cash network does not allow for the selling of goods from ATMs due to specific security requirements for PIN entry when buying goods.[95] CIBC machines in Canada are able to top up the minutes on certain pay-as-you-go phones.


Reliability

An ATM running Microsoft Windows that has crashed due to a peripheral component failure

Before an ATM is placed in a public place, it typically has undergone extensive testing with both test money and the backend computer systems that allow it to perform transactions. Banking customers also have come to expect high reliability in their ATMs,[96] which provides incentives to ATM providers to minimise machine and network failures. Financial consequences of incorrect machine operation also provide high degrees of incentive to minimise malfunctions.[97]

ATMs and the supporting electronic financial networks are generally very reliable, with industry benchmarks typically producing 98.25% customer availability for ATMs[98] and up to 99.999% availability for the host systems that manage the networks of ATMs. If ATM networks do go out of service, customers could be left without the ability to make transactions until their bank next opens.
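To put those availability figures in concrete terms, a quick back-of-the-envelope calculation converts them into expected downtime per year:

```python
HOURS_PER_YEAR = 365 * 24  # 8760 hours

for name, availability in [("ATM (98.25%)", 0.9825),
                           ("host system (99.999%)", 0.99999)]:
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{name}: about {downtime_hours:.1f} hours of downtime per year")
```

The 98.25% figure works out to roughly 153 hours of unavailability per machine per year, while a 99.999% ("five nines") host is down only a few minutes per year.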

That said, not all errors are to the detriment of customers; there have been cases of machines giving out money without debiting the account, or giving out higher-value notes as a result of banknotes of the incorrect denomination being loaded in the money cassettes.[99] The consequences of receiving too much money may be influenced by the card holder agreement in place between the customer and the bank.[100][101]

Errors that can occur may be mechanical (such as card transport mechanisms, keypads, hard disk failures, or envelope deposit mechanisms); software (such as the operating system, device drivers, or applications); communications; or purely down to operator error.

To aid in reliability, some ATMs print each transaction to a roll-paper journal that is stored inside the ATM, which allows both the users of the ATMs and the related financial institutions to settle things based on the records in the journal in case there is a dispute. In some cases, transactions are posted to an electronic journal to remove the cost of supplying journal paper to the ATM and for more convenient searching of data.

Improper money checking can cause the possibility of a customer receiving counterfeit banknotes from an ATM. While bank personnel are generally better trained at spotting and removing counterfeit cash,[102][103] the ATM money supplies used by banks provide no guarantee of proper banknotes, as the Federal Criminal Police Office of Germany has confirmed that there are regularly incidents of false banknotes having been dispensed through bank ATMs.[104] Some ATMs may be stocked and wholly owned by outside companies, which can further complicate this problem. Bill validation technology can be used by ATM providers to help ensure the authenticity of the cash before it is stocked in an ATM; ATMs that have cash recycling capabilities include this capability.[105]

Fraud

ATM lineup


Some ATMs may put up warning messages to customers to be vigilant of possible tampering.

Banknotes from a cash machine robbery made unusable with red paint.

As with any device containing objects of value, ATMs and the systems they depend on to function are the targets of fraud. Fraud against ATMs and people's attempts to use them takes several forms.

The first known instance of a fake ATM was installed at a shopping mall in Manchester, Connecticut in 1993. By modifying the inner workings of a Fujitsu model 7020 ATM, a criminal gang known as The Bucklands Boys were able to steal information from cards inserted into the machine by customers.[106]

WAVY-TV reported an incident in Virginia Beach in September 2006 where a hacker who had probably obtained a factory-default administrator password for a gas station's white label ATM caused the unit to assume it was loaded with US$5 bills instead of $20s, enabling himself, and many subsequent customers, to walk away with four times the money they wanted to withdraw.[107] This type of scam was featured on the TV series The Real Hustle.

ATM behavior can change during what is called "stand-in" time, where the bank's cash dispensing network is unable to access databases that contain account information (possibly for database maintenance). In order to give customers access to cash, customers may be allowed to withdraw cash up to a certain amount that may be less than their usual daily withdrawal limit, but may still exceed the amount of available money in their accounts, which could result in fraud if the customers intentionally withdraw more money than what they had in their accounts.[108]

Card fraud

In an attempt to prevent criminals from shoulder surfing the customer's personal identification number (PIN), some banks draw privacy areas on the floor.

For a low-tech form of fraud, the easiest is to simply steal a customer's card along with its PIN. A later variant of this approach is to trap the card inside the ATM's card reader with a device often referred to as a Lebanese loop. When the customer gets frustrated by not getting the card back and walks away from the machine, the criminal is able to remove the card and withdraw cash from the customer's account, using the card and its PIN.

This type of ATM fraud has spread globally. Although somewhat replaced in terms of volume by ATM skimming incidents, a re-emergence of card trapping has been noticed in regions such as Europe, where EMV chip and PIN cards have increased in circulation.[109]

Another simple form of fraud involves attempting to get the customer's bank to issue a new card and its PIN and stealing them from their mail.[110]

By contrast, a newer high-tech method of operating, sometimes called card skimming or card cloning, involves the installation of a magnetic card reader over the real ATM's card slot and the use of a wireless surveillance camera, a modified digital camera, or a false PIN keypad to observe the user's PIN. Card data is then cloned onto a duplicate card and the criminal attempts a standard cash withdrawal. The availability of low-cost commodity wireless cameras, keypads, card readers, and card writers has made it a relatively simple form of fraud, with comparatively low risk to the fraudsters.[111]

In an attempt to stop these practices, countermeasures against card cloning have been developed by the banking industry, in particular by the use of smart cards which cannot easily be copied or spoofed by unauthenticated devices, and by attempting to make the outside of their ATMs tamper evident. Older chip-card security systems include the French Carte Bleue, Visa Cash, Mondex, Blue from American Express[112] and EMV '96 or EMV 3.11. The most actively developed form of smart card security in the industry today is known as EMV 2000 or EMV 4.x.

EMV is widely used in the UK (Chip and PIN) and other parts of Europe, but when it is not available in a specific area, ATMs must fall back to using the easy-to-copy magnetic strip to perform transactions. This fallback behaviour can be exploited.[113] However, the fall-back option has been removed on the ATMs of some UK banks, meaning that if the chip is not read, the transaction will be declined.

Card cloning and skimming can be detected by the implementation of magnetic card reader heads and firmware that can read a signature embedded in all magnetic strips during the card production process. This signature, known as a "MagnePrint" or "BluPrint", can be used in conjunction with common two-factor authentication schemes used in ATM, debit/retail point-of-sale and prepaid card applications.

The concept and various methods of copying the contents of an ATM card's magnetic strip onto a duplicate card to access other people's financial information was well known in the hacking communities by late 1990.[114]

In 1996, Andrew Stone, a computer security consultant from Hampshire in the UK, was convicted of stealing more than £1 million by pointing high-definition video cameras at ATMs from a considerable distance, recording the card numbers, expiry dates, etc. from the embossed detail on the ATM cards, along with video footage of the PINs being entered. After getting all the information from the videotapes, he was able to produce clone cards which not only allowed him to withdraw the full daily limit for each account, but also allowed him to sidestep withdrawal limits by using multiple copied cards. In court, it was shown that he could withdraw as much as £10,000 per hour by using this method. Stone was sentenced to five years and six months in prison.[115]

In February 2009, a group of criminals used counterfeit ATM cards to steal $9 million from 130 ATMs in 49 cities around the world, all within a period of 30 minutes.[116]

Related devices

A talking ATM is a type of ATM that provides audible instructions so that people who cannot read an ATM screen can independently use the machine, effectively eliminating the need for assistance from an external, potentially malevolent source. All audible information is delivered privately through a standard headphone jack on the face of the machine. Alternatively, some banks such as Nordea and Swedbank use a built-in external speaker which may be invoked by pressing the talk button on the keypad.[117] Information is delivered to the customer either through pre-recorded sound files or via text-to-speech speech synthesis.

A postal interactive kiosk may also share many of the same components as an ATM (including a vault), but only dispenses items related to postage.[118][119]

A scrip cash dispenser may share many of the same components as an ATM, but lacks the ability to dispense physical cash and consequently requires no vault. Instead, the customer requests a withdrawal transaction from the machine, which prints a receipt. The customer then takes this receipt to a nearby sales clerk, who then exchanges it for cash from the till.[120]

A teller assist unit (TAU) may also share many of the same components as an ATM (including a vault), but they are distinct in that they are designed to be operated solely by trained personnel and not by the general public, they do not integrate directly into interbank networks, and they are usually controlled by a computer that is not directly integrated into the overall construction of the unit.

A web ATM is an online interface for ATM card banking that uses a smart card reader. All the usual ATM functions are available except for withdrawing cash. Most banks in Taiwan provide these online ATM services.[121][122]

In popular culture

One of the banking innovations that Arthur Hailey mentioned in his 1975 bestselling novel The Moneychangers is Docutel, an automated teller machine (Hailey 1975, 308),[123] based on real technology that was issued a patent in 1974 in the United States.

In the novel, Jill Peacock, a journalist, interviewed First Mercantile American Bank executive VP Alexander Vandervoort in a suburban shopping plaza where the bank had installed the first two stainless-steel Docutel automatic tellers. Vandervoort, whose clothes looked like they were from the "fashion section of Esquire", was not at all like the classical solemn, cautious banker in a double-breasted, dark blue suit. Peacock compared him to the new ATMs, which embodied modern banking.[123]

In GTA Online, players can


The sign-in form for the English Wikipedia, which requests a username and password

A password is a word or string of characters used for user authentication to prove identity or access approval to gain access to a resource (for example, an access code is a type of password), which should be kept secret from those not allowed access.

The use of passwords is known to be ancient. Sentries would challenge those wishing to enter an area or approaching it to supply a password or watchword, and would only allow a person or group to pass if they knew the password. In modern times, user names and passwords are commonly used by people during a log-in process that controls access to protected computer operating systems, mobile phones, cable TV decoders, automated teller machines (ATMs), etc. A typical computer user has passwords for many purposes: logging into accounts, retrieving e-mail, accessing applications, databases, networks, web sites, and even reading the morning newspaper online.

Despite the name, there is no need for passwords to be actual words; indeed, passwords which are not actual words may be harder to guess, a desirable property. Some passwords are formed from multiple words and may more accurately be called a passphrase. The terms passcode and passkey are sometimes used when the secret information is purely numeric, such as the personal identification number (PIN) commonly used for ATM access. Passwords are generally short enough to be easily memorized and typed.

Most organizations specify a password policy that sets requirements for the composition and usage of passwords, typically dictating minimum length, required categories (e.g. upper and lower case, numbers, and special characters), and prohibited elements (e.g. one's own name, date of birth, address, or telephone number). Some governments have national authentication frameworks[1] that define requirements for user authentication to government services, including requirements for passwords.
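A password policy of the kind described can be enforced mechanically. The sketch below checks a candidate against a few illustrative rules (the minimum length, required character classes, and prohibited substrings are made up for the example, not taken from any particular framework):

```python
import string

# Illustrative policy parameters, not from any specific standard
MIN_LENGTH = 8
PROHIBITED = ["password", "1234"]

def check_policy(candidate: str) -> list[str]:
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if not any(c in string.ascii_lowercase for c in candidate):
        problems.append("no lowercase letter")
    if not any(c in string.ascii_uppercase for c in candidate):
        problems.append("no uppercase letter")
    if not any(c in string.digits for c in candidate):
        problems.append("no digit")
    for bad in PROHIBITED:
        if bad in candidate.lower():
            problems.append(f"contains prohibited element {bad!r}")
    return problems

print(check_policy("password1234"))  # several violations
print(check_policy("TyIgtIoFJ6!"))   # passes: []
```

Real policies often add further checks (no reuse of recent passwords, no parts of the username), but the shape is the same: a list of independent predicates over the candidate string.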

Contents


1 Choosing a secure and memorable password
2 Factors in the security of a password system
o 2.1 Rate at which an attacker can try guessed passwords
o 2.2 Limits on the number of password guesses
o 2.3 Form of stored passwords
o 2.4 Methods of verifying a password over a network
  2.4.1 Simple transmission of the password
  2.4.2 Transmission through encrypted channels
  2.4.3 Hash-based challenge-response methods
  2.4.4 Zero-knowledge password proofs
o 2.5 Procedures for changing passwords
o 2.6 Password longevity
o 2.7 Number of users per password
o 2.8 Password security architecture
o 2.9 Password reuse
o 2.10 Writing down passwords on paper
o 2.11 After death
3 Password cracking
o 3.1 Incidents
4 Alternatives to passwords for authentication
5 "The Password is dead"
6 Website password systems
7 History of passwords
8 See also
9 References
10 External links

Choosing a secure and memorable password

The easier a password is for the owner to remember, generally the easier it will be for an attacker to guess.[2] However, passwords which are difficult to remember may also reduce the security of a system because (a) users might need to write down or electronically store the password, (b) users will need frequent password resets and (c) users are more likely to re-use the same password. Similarly, the more stringent the requirements for password strength, e.g. "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system.[3] Others argue longer passwords provide more security (e.g., entropy) than shorter passwords with a wide variety of characters.[4]

In The Memorability and Security of Passwords,[5] Jeff Yan et al. examine the effect of advice given to users about a good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords. Combining two or more unrelated words is another good method, but a single dictionary word is not. Having a personally designed algorithm for generating obscure passwords is another good method.

However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalises one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1', substitutions which are well known to attackers. Similarly, typing the password one keyboard row higher is a common trick known to attackers.[6]

A method to memorize a complex password is to remember a sentence like 'This year I go to Italy on Friday July 6!' and use the first characters as the actual password. In this case 'TyIgtIoFJ6!'.
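The sentence-to-password rule above is easy to express in code. This sketch keeps the first character of each word and preserves any trailing punctuation from the last word, reproducing the example from the text:

```python
def sentence_to_password(sentence: str) -> str:
    """Build a password from the first character of each word in a sentence."""
    words = sentence.split()
    password = "".join(w[0] for w in words)
    # Keep trailing punctuation from the last word (e.g. the '!')
    if not words[-1][-1].isalnum():
        password += words[-1][-1]
    return password

print(sentence_to_password("This year I go to Italy on Friday July 6!"))
# TyIgtIoFJ6!
```

The resulting string looks random to an attacker but is reconstructible from a sentence the user already remembers.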

In 2013, Google released a list of the most common password types, all of which are considered insecure because they are too easy to guess (especially after researching an individual on social media):[7]

The name of a pet, child, family member, or significant other
Anniversary dates and birthdays
Birthplace
Name of a favorite holiday
Something related to a favorite sports team
The word "password"


Factors in the security of a password system

The security of a password-protected system depends on several factors. The overall system must, of course, be designed for sound security, with protection against computer viruses, man-in-the-middle attacks and the like. Physical security issues are also a concern, from deterring shoulder surfing to more sophisticated physical threats such as video cameras and keyboard sniffers. And, of course, passwords should be chosen so that they are hard for an attacker to guess and hard for an attacker to discover using any (and all) of the available automatic attack schemes. See password strength and computer security.

Nowadays, it is a common practice for computer systems to hide passwords as they are typed. The purpose of this measure is to prevent bystanders from reading the password. However, some argue that this practice may lead to mistakes and stress, encouraging users to choose weak passwords. As an alternative, users should have the option to show or hide passwords as they type them.[8]

Effective access control provisions may force extreme measures on criminals seeking to acquire a password or biometric token.[9] Less extreme measures include extortion, rubber hose cryptanalysis, and side channel attacks.

Here are some specific password management issues that must be considered when thinking about, choosing, and handling a password.

Rate at which an attacker can try guessed passwords

The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g., three) of failed password entry attempts. In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple passwords, if they have been well chosen and are not easily guessed.[10]


Many systems store a cryptographic hash of the password. If an attacker gets access to the file of hashed passwords, guessing can be done off-line, rapidly testing candidate passwords against the true password's hash value. In the example of a web server, an online attacker can guess only at the rate at which the server will respond, while an off-line attacker (who gains access to the file) can guess at a rate limited only by the hardware that is brought to bear.

Passwords that are used to generate cryptographic keys (e.g., for disk encryption or Wi-Fi security) can also be subjected to high rate guessing. Lists of common passwords are widely available and can make password attacks very efficient. (See Password cracking.) Security in such situations depends on using passwords or passphrases of adequate complexity, making such an attack computationally infeasible for the attacker. Some systems, such as PGP and Wi-Fi WPA, apply a computation-intensive hash to the password to slow such attacks. See key stretching.
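The key-stretching approach mentioned above can be sketched with Python's standard-library `hashlib.pbkdf2_hmac`. This is a minimal illustration, not the text's own method: the function name `derive_key`, the 600,000-iteration count, and the sample passphrase are all assumptions chosen for the example.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # PBKDF2 makes each candidate password cost `iterations` hash
    # computations, sharply lowering an attacker's feasible guess rate.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)  # random per-password salt, stored with the key
key = derive_key("correct horse battery staple", salt)
```

The same derivation run twice with the same salt yields the same key, so the stored salt lets the system re-derive and compare on each use.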

Limits on the number of password guesses

An alternative to limiting the rate at which an attacker can make guesses on a password is to limit the total number of guesses that can be made. The password can be disabled, requiring a reset, after a small number of consecutive bad guesses (say 5); and the user may be required to change the password after a larger cumulative number of bad guesses (say 30), to prevent an attacker from making an arbitrarily large number of bad guesses by interspersing them between good guesses made by the legitimate password owner.[11] The username associated with the password can be changed to counter a denial of service attack.
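The two-threshold policy just described (a consecutive-failure lockout plus a cumulative total) might be sketched as follows. The class name and thresholds are hypothetical, chosen to match the "say 5" and "say 30" figures in the text.

```python
class GuessLimiter:
    """Disable an account after a small number of consecutive bad
    guesses, and after a larger cumulative number of bad guesses."""

    def __init__(self, consecutive_limit: int = 5, cumulative_limit: int = 30):
        self.consecutive_limit = consecutive_limit
        self.cumulative_limit = cumulative_limit
        self.consecutive = 0
        self.cumulative = 0
        self.locked = False

    def record_failure(self) -> None:
        self.consecutive += 1
        self.cumulative += 1
        if (self.consecutive >= self.consecutive_limit
                or self.cumulative >= self.cumulative_limit):
            self.locked = True

    def record_success(self) -> None:
        # A good guess resets the consecutive counter but not the
        # cumulative one, so bad guesses interspersed between good
        # ones still accumulate toward the larger limit.
        self.consecutive = 0
```

The cumulative counter is what defeats the interspersing attack the text mentions: resetting only the consecutive count on success is deliberate.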

Form of stored passwords

Some computer systems store user passwords as plaintext, against which to compare user log on attempts. If an attacker gains access to such an internal password store, all passwords—and so all user accounts—will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well.

More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. The most secure don't store passwords at all, but a one-way derivation, such as a polynomial, modulus, or an advanced hash function.[4] Roger Needham invented the now-common approach of storing only a "hashed" form of the plaintext password. When a user types in a password on such a system, the password handling software runs it through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, in many implementations, another value known as a salt. A salt prevents attackers from easily building a list of hash values for common passwords and prevents password cracking efforts from scaling across all users.[12] MD5 and SHA1 are frequently used cryptographic hash functions, but they are not recommended for password hashing unless they are used as part of a larger construction such as in PBKDF2.[13]
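The salt-and-hash scheme described above can be sketched in standard-library Python. PBKDF2 here stands in for the "larger construction" the text mentions; the function names and iteration count are assumptions for illustration, not a production recommendation.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt means identical passwords produce
    # different stored hashes, defeating precomputed hash lists.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored)
```

Only the salt and digest are stored; the plaintext password never needs to be kept, which is the property the paragraph is describing.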

The stored data—sometimes called the "password verifier" or the "password hash"—is often stored in Modular Crypt Format or RFC 2307 hash format, sometimes in the /etc/passwd file or the /etc/shadow file.[14]

The main storage formats for passwords are plaintext, hashed, hashed and salted, and reversibly encrypted.[15] If an attacker gains access to the password file then if it is plaintext no cracking is necessary. If it is hashed but not salted then it is subject to rainbow table attacks (which are more efficient than cracking). If it is reversibly encrypted then if the attacker gets the decryption key along with the file no cracking is necessary, while if he fails to get the key cracking is not possible (guesses cannot be verified without the key). Thus, of the common storage formats for passwords only when passwords have been salted and hashed is cracking both necessary and possible.[15]

If a cryptographic hash function is well designed, it is computationally infeasible to reverse the function to recover a plaintext password. An attacker can, however, use widely available tools to attempt to guess the passwords. These tools work by hashing possible passwords and comparing the result of each guess to the actual password hashes. If the attacker finds a match, they know that their guess is the actual password for the associated user. Password cracking tools can operate by brute force (i.e. trying every possible combination of characters) or by hashing every word from a list; large lists of possible passwords in many languages are widely available on the Internet.[4] The existence of password cracking tools allows attackers to easily recover poorly chosen passwords. In particular, attackers can quickly recover passwords that are short, dictionary words, simple variations on dictionary words, or that use easily guessable patterns.[16] A modified version of the DES algorithm was used as the basis for the password hashing algorithm in early Unix systems.[17] The crypt algorithm used a 12-bit salt value so that each user's hash was unique and iterated the DES algorithm 25 times in order to make the hash function slower, both measures intended to frustrate automated guessing attacks.[17] The user's password was used as a key to encrypt a fixed value. More recent Unix or Unix-like systems (e.g., Linux or the various BSD systems) use more secure password hashing algorithms such as PBKDF2, bcrypt, and scrypt, which have large salts and an adjustable cost or number of iterations.[18] A poorly designed hash function can make attacks feasible even if a strong password is chosen. See LM hash for a widely deployed, and insecure, example.[19]
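The hash-and-compare loop that cracking tools use can be shown in a few lines. The "stolen" hashes below are derived in place purely for illustration; in a real attack they would come from a compromised password file of unsalted hashes, and the wordlist values are invented examples.

```python
import hashlib

# Stand-in for a stolen file of unsalted SHA-256 password hashes.
stolen_hashes = {
    hashlib.sha256(p.encode()).hexdigest() for p in ("letmein", "dragon")
}

# A tiny wordlist; real lists contain millions of candidates.
wordlist = ["password", "123456", "letmein", "qwerty", "dragon"]

# Hash each candidate and check for a match — one hash per guess.
recovered = [w for w in wordlist
             if hashlib.sha256(w.encode()).hexdigest() in stolen_hashes]
```

Because every guess costs only a single fast hash when no salt or key stretching is used, common or short passwords fall almost immediately.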

Methods of verifying a password over a network

Simple transmission of the password

Passwords are vulnerable to interception (i.e., "snooping") while being transmitted to the authenticating machine or person. If the password is carried as electrical signals on unsecured physical wiring between the user access point and the central system controlling the password database, it is subject to snooping by wiretapping methods. If it is carried as packetized data over the Internet, anyone able to watch the packets containing the logon information can snoop with a very low probability of detection.

Email is sometimes used to distribute passwords but this is generally an insecure method. Since most email is sent as plaintext, a message containing a password is readable without effort during transport by any eavesdropper. Further, the message will be stored as plaintext on at least two computers: the sender's and the recipient's. If it passes through intermediate systems during its travels, it will probably be stored there as well, at least for some time, and may be copied to backup, cache or history files on any of these systems.

Using client-side encryption will only protect transmission from the mail handling system server to the client machine. Previous or subsequent relays of the email will not be protected and the email will probably be stored on multiple computers, certainly on the originating and receiving computers, most often in cleartext.

Transmission through encrypted channels

The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, using cryptographic protection. The most widely used is the Transport Layer Security (TLS, previously called SSL) feature built into most current Internet browsers. Most browsers alert the user to a TLS/SSL-protected exchange with a server by displaying a closed lock icon, or some other sign, when TLS is in use. There are several other techniques in use; see cryptography.

Hash-based challenge-response methods

Unfortunately, there is a conflict between stored hashed passwords and hash-based challenge-response authentication; the latter requires a client to prove to a server that they know what the shared secret (i.e., password) is, and to do this, the server must be able to obtain the shared secret from its stored form. On many systems (including Unix-type systems) doing remote authentication, the shared secret usually becomes the hashed form and has the serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as a shared secret, an attacker does not need the original password to authenticate remotely; they only need the hash.
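A hash-based challenge-response exchange, and the limitation just described, can be sketched with HMAC. This is a generic illustration, not any particular protocol: the secret, the password, and the function names are assumptions.

```python
import hashlib
import hmac
import os

# The server stores a hash of the password; in hash-based
# challenge-response schemes that stored hash itself becomes
# the shared secret.
shared_secret = hashlib.sha256(b"hunter2").digest()

def respond(secret: bytes, challenge: bytes) -> bytes:
    # The client proves knowledge of the secret without ever
    # sending the secret itself over the wire.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)                       # fresh server nonce
client_response = respond(shared_secret, challenge)
server_expected = respond(shared_secret, challenge)
# Note: anyone who steals the stored hash can compute the same
# response — the "pass the hash" limitation described above.
```

The final comment is the point of the paragraph: because the stored hash is the secret, stealing the hash is as good as stealing the password for remote authentication.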

Zero-knowledge password proofs

Rather than transmitting a password, or transmitting the hash of the password, password-authenticated key agreement systems can perform a zero-knowledge password proof, which proves knowledge of the password without exposing it.

Moving a step further, augmented systems for password-authenticated key agreement (e.g., AMP, B-SPEKE, PAK-Z, SRP-6) avoid both the conflict and limitation of hash-based methods. An augmented system allows a client to prove knowledge of the password to a server, where the server knows only a (not exactly) hashed password, and where the unhashed password is required to gain access.

Procedures for changing passwords

Usually, a system must provide a way to change a password, either because a user believes the current password has been (or might have been) compromised, or as a precautionary measure. If a new password is passed to the system in unencrypted form, security can be lost (e.g., via wiretapping) before the new password can even be installed in the password database. And, of course, if the new password is given to a compromised employee, little is gained. Some web sites include the user-selected password in an unencrypted confirmation e-mail message, with the obvious increased vulnerability.

Identity management systems are increasingly used to automate the issuance of replacements for lost passwords, a feature called self-service password reset. The user's identity is verified by asking questions and comparing the answers to ones previously stored (i.e., when the account was opened).

Some password reset questions ask for personal information that could be found on social media, such as mother's maiden name. As a result, some security experts recommend either making up one's own questions or giving false answers.[20]

Password longevity

"Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g., quarterly, monthly, or even more often). Such policies usually provoke user protest and foot-dragging at best and hostility at worst. There is often an increase in the number of people who note down the password and leave it where it can easily be found, as well as in helpdesk calls to reset a forgotten password. Users may use simpler passwords or develop variation patterns on a consistent theme to keep their passwords memorable.[21] Because of these issues, there is some debate[22] as to whether password aging is effective. The intended benefit is mainly that a stolen password will be made ineffective if it is reset; however, in many cases, particularly with administrative or "root" accounts, once an attacker has gained access, they can make alterations to the operating system that will allow them future access even after the initial password they used expires. (See rootkit.)

The other, less frequently cited and possibly more valid, reason is that in the event of a long brute-force attack, the password will be invalid by the time it has been cracked. Specifically, in an environment where it is considered important to know the probability of a fraudulent login in order to accept the risk, one can ensure that the total number of possible passwords multiplied by the time taken to try each one (assuming the greatest conceivable computing resources) is much greater than the password lifetime. However, there is no documented evidence that the policy of requiring periodic changes in passwords increases system security.
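The condition just stated can be checked with simple arithmetic. Every number below is an assumption chosen for illustration (alphabet, length, attacker speed, and aging interval), not a figure from the text.

```python
# Illustrative check: keyspace x time-per-guess must far exceed
# the password lifetime for aging to render cracking moot.
alphabet = 62                        # upper + lower case letters + digits
length = 12                          # assumed password length
keyspace = alphabet ** length

guesses_per_second = 1e12            # assumed attacker hardware
seconds_to_exhaust = keyspace / guesses_per_second

lifetime_seconds = 90 * 24 * 3600    # a 90-day aging policy
aging_helps = seconds_to_exhaust > lifetime_seconds
```

With these assumed numbers the keyspace takes on the order of a century to exhaust, so a 90-day expiry adds little; shortening the password to 10 characters flips the comparison, which is exactly the sensitivity the paragraph is gesturing at.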

Password aging may be required because of the nature of the IT systems the password allows access to; if personal data is involved, the EU Data Protection Directive is in force. Implementing such a policy, however, requires careful consideration of the relevant human factors. Humans memorize by association, so it is impossible to simply replace one memory with another. Two psychological phenomena interfere with password substitution. "Primacy" describes the tendency for an earlier memory to be retained more strongly than a later one. "Interference" is the tendency of two memories with the same association to conflict. Because of these effects, most users must resort to a simple password containing a number that can be incremented each time the password is changed.

Number of users per password

Sometimes a single password controls access to a device, for example, for a network router, or password-protected mobile phone. However, in the case of a computer system, a password is usually stored for each user account, thus making all access traceable (save, of course, in the case of users sharing passwords). A would-be user on most systems must supply a username as well as a password, almost always at account set up time, and periodically thereafter. If the user supplies a password matching the one stored for the supplied username, he or she is permitted further access into the computer system. This is also the case for a cash machine, except that the 'user name' is typically the account number stored on the bank customer's card, and the PIN is usually quite short (4 to 6 digits).

Allotting separate passwords to each user of a system is preferable to having a single password shared by legitimate users of the system, certainly from a security viewpoint. This is partly because users are more willing to tell another person (who may not be authorized) a shared password than one exclusively for their use. Single passwords are also much less convenient to change because many people need to be told at the same time, and they make removal of a particular user's access more difficult, as for instance on graduation or resignation. Per-user passwords are also essential if users are to be held accountable for their activities, such as making financial transactions or viewing medical records.

Password security architecture

Common techniques used to improve the security of computer systems protected by a password include:

Not displaying the password on the display screen as it is being entered or obscuring it as it is typed by using asterisks (*) or bullets (•).

Allowing passwords of adequate length. (Some legacy operating systems, including early versions[which?] of Unix and Windows, limited passwords to an 8-character maximum,[23][24][25] reducing security.)

Requiring users to re-enter their password after a period of inactivity (a semi log-off policy).

Enforcing a password policy to increase password strength and security.

o Requiring periodic password changes.

o Assigning randomly chosen passwords.

o Requiring minimum password lengths.[13]

o Some systems require characters from various character classes in a password—for example, "must have at least one uppercase and at least one lowercase letter". However, all-lowercase passwords are more secure per keystroke than mixed capitalization passwords.[26]

o Providing an alternative to keyboard entry (e.g., spoken passwords, or biometric passwords).

o Requiring more than one authentication system, such as 2-factor authentication (something a user has and something the user knows).

Using encrypted tunnels or password-authenticated key agreement to prevent access to transmitted passwords via network attacks.

Limiting the number of allowed failures within a given time period (to prevent repeated password guessing). After the limit is reached, further attempts will fail (including correct password attempts) until the beginning of the next time period. However, this is vulnerable to a form of denial-of-service attack.

Introducing a delay between password submission attempts to slow down automated password guessing programs.

Some of the more stringent policy enforcement measures can pose a risk of alienating users, possibly decreasing security as a result.

Password reuse

It is common practice amongst computer users to reuse the same password on multiple sites. This presents a substantial security risk, since an attacker need only compromise a single site in order to gain access to other sites the victim uses. This problem is exacerbated by also reusing usernames, and by websites requiring email logins, as it makes it easier for an attacker to track a single user across multiple sites. Password reuse can be avoided or minimised by using mnemonic techniques, writing passwords down on paper, or using a password manager.[27]

It has been argued by Redmond researchers Dinei Florencio and Cormac Herley, together with Paul C. van Oorschot of Carleton University, Canada, that password reuse is inevitable, and that users should reuse passwords for low-security websites (which contain little personal data and no financial information, for example) and instead focus their efforts on remembering long, complex passwords for a few important accounts, such as bank accounts.[28] Similar arguments were made by Forbes cybersecurity columnist Joseph Steinberg, who also argued that people should not change passwords as often as many "experts" advise, due to the same limitations in human memory.[21]

Writing down passwords on paper

Historically, many security experts asked people to memorize their passwords: "Never write down a password". More recently, many security experts such as Bruce Schneier recommend that people use passwords that are too complicated to memorize, write them down on paper, and keep them in a wallet.[29][30][31][32][33][34][35]

Password manager software can also store passwords relatively safely, in an encrypted file sealed with a single password.

After death

According to a survey by the University of London, one in ten people are now leaving their passwords in their wills to pass on this important information when they die. One third of people, according to the poll, agree that their password protected data is important enough to pass on in their will.[36]

Password cracking

Main article: Password cracking

Attempting to crack passwords by trying as many possibilities as time and money permit is a brute force attack. A related method, rather more efficient in most cases, is a dictionary attack. In a dictionary attack, all words in one or more dictionaries are tested. Lists of common passwords are also typically tested.

Password strength is the likelihood that a password cannot be guessed or discovered, and varies with the attack algorithm used. Cryptologists and computer scientists often refer to the strength or 'hardness' in terms of entropy.[4]
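For a password drawn uniformly at random from a fixed symbol set, the entropy measure mentioned above reduces to a one-line formula. The function name and the two example alphabets are assumptions for illustration.

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # A password chosen uniformly at random from `alphabet_size`
    # symbols carries length * log2(alphabet_size) bits of entropy.
    return length * math.log2(alphabet_size)

lower_8 = entropy_bits(26, 8)   # 8 random lowercase letters: ~37.6 bits
mixed_8 = entropy_bits(94, 8)   # 8 random printable-ASCII chars: ~52.4 bits
```

Note this measures only randomly generated passwords; a human-chosen password of the same length typically has far less entropy than the formula suggests.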

Passwords easily discovered are termed weak or vulnerable; passwords very difficult or impossible to discover are considered strong. There are several programs available for password attack (or even auditing and recovery by systems personnel) such as L0phtCrack, John the Ripper, and Cain, some of which use password design vulnerabilities (as found in the Microsoft LANManager system) to increase efficiency. These programs are sometimes used by system administrators to detect weak passwords proposed by users.

Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords are readily guessed automatically. For example, Columbia University found 22% of user passwords could be recovered with little effort.[37]

According to Bruce Schneier, examining data from a 2006 phishing attack, 55% of MySpace passwords would be crackable in 8 hours using a commercially available Password Recovery Toolkit capable of testing 200,000 passwords per second in 2006.[38] He also reported that the single most common password was password1, confirming yet again the general lack of informed care in choosing passwords among users. (He nevertheless maintained, based on these data, that the general quality of passwords has improved over the years—for example, average length was up to eight characters from under seven in previous surveys, and less than 4% were dictionary words.[39])

Incidents

On July 16, 1998, CERT reported an incident where an attacker had found 186,126 encrypted passwords. At the time the attacker was discovered, 47,642 passwords had already been cracked.[40]

In September 2001, after the deaths of 960 New York employees in the September 11 attacks, financial services firm Cantor Fitzgerald, through Microsoft, broke the passwords of deceased employees to gain access to files needed for servicing client accounts.[41] Technicians used brute-force attacks, and interviewers contacted families to gather personalized information that might reduce the search time for weaker passwords.[41]

In December 2009, a major password breach of the Rockyou.com website occurred that led to the release of 32 million passwords. The hacker then leaked the full list of the 32 million passwords (with no other identifiable information) to the Internet. Passwords were stored in cleartext in the database and were extracted through a SQL injection vulnerability. The Imperva Application Defense Center (ADC) did an analysis on the strength of the passwords.[42]

In June 2011, NATO (North Atlantic Treaty Organization) experienced a security breach that led to the public release of first and last names, usernames, and passwords for more than 11,000 registered users of their e-bookshop. The data was leaked as part of Operation AntiSec, a movement that includes Anonymous, LulzSec, as well as other hacking groups and individuals. The aim of AntiSec is to expose personal, sensitive, and restricted information to the world, using any means necessary.[43]

On July 11, 2011, Booz Allen Hamilton, a consulting firm that does work for the Pentagon, had their servers hacked by Anonymous and leaked the same day. "The leak, dubbed 'Military Meltdown Monday,' includes 90,000 logins of military personnel—including personnel from USCENTCOM, SOCOM, the Marine corps, various Air Force facilities, Homeland Security, State Department staff, and what looks like private sector contractors."[44] These leaked passwords wound up being hashed in SHA1, and were later decrypted and analyzed by the ADC team at Imperva, revealing that even military personnel look for shortcuts and ways around the password requirements.[45]

Alternatives to passwords for authentication

The numerous ways in which permanent or semi-permanent passwords can be compromised have prompted the development of other techniques. Unfortunately, some are inadequate in practice, and in any case few have become universally available for users seeking a more secure alternative.[citation needed]

A 2012 paper[46] examines why passwords have proved so hard to supplant (despite numerous predictions that they would soon be a thing of the past[47]); in examining thirty representative proposed replacements with respect to security, usability and deployability, the authors conclude that "none even retains the full set of benefits that legacy passwords already provide."

Single-use passwords. Having passwords which are only valid once makes many potential attacks ineffective. Most users find single-use passwords extremely inconvenient. They have, however, been widely implemented in personal online banking, where they are known as Transaction Authentication Numbers (TANs). As most home users only perform a small number of transactions each week, the single-use issue has not led to intolerable customer dissatisfaction in this case.

Time-synchronized one-time passwords are similar in some ways to single-use passwords, but the value to be entered is displayed on a small (generally pocketable) item and changes every minute or so.

PassWindow one-time passwords are used as single-use passwords, but the dynamic characters to be entered are visible only when a user superimposes a unique printed visual key over a server generated challenge image shown on the user's screen.

Access controls based on public key cryptography, e.g., SSH. The necessary keys are usually too large to memorize (but see proposal Passmaze)[48] and must be stored on a local computer, security token or portable memory device, such as a USB flash drive or even floppy disk.

Biometric methods promise authentication based on unalterable personal characteristics, but currently (2008) have high error rates and require additional hardware to scan, for example, fingerprints, irises, etc. They have proven easy to spoof in some famous incidents testing commercially available systems, for example, the gummie fingerprint spoof demonstration,[49] and, because these characteristics are unalterable, they cannot be changed if compromised; this is a highly important consideration in access control as a compromised access token is necessarily insecure.

Single sign-on technology is claimed to eliminate the need for having multiple passwords. Such schemes do not relieve users and administrators from choosing reasonable single passwords, nor system designers or administrators from ensuring that private access control information passed among systems enabling single sign-on is secure against attack. As yet, no satisfactory standard has been developed.

Envaulting technology is a password-free way to secure data on removable storage devices such as USB flash drives. Instead of user passwords, access control is based on the user's access to a network resource.

Non-text-based passwords, such as graphical passwords or mouse-movement-based passwords.[50] Graphical passwords are an alternative means of authentication for log-in intended to be used in place of conventional passwords; they use images, graphics or colours instead of letters, digits or special characters. One system requires users to select a series of faces as a password, utilizing the human brain's ability to recall faces easily.[51] In some implementations the user is required to pick from a series of images in the correct sequence in order to gain access.[52] Another graphical password solution creates a one-time password using a randomly generated grid of images. Each time the user is required to authenticate, they look for the images that fit their pre-chosen categories and enter the randomly generated alphanumeric character that appears in the image to form the one-time password.[53][54] So far, graphical passwords are promising, but are not widely used. Studies on this subject have been made to determine their usability in the real world. While some believe that graphical passwords would be harder to crack, others suggest that people will be just as likely to pick common images or sequences as they are to pick common passwords.[citation needed]

2D Key (2-Dimensional Key)[55] is a 2D matrix-like key input method offering the key styles of multiline passphrase, crossword, and ASCII/Unicode art, with optional textual semantic noises, to create large passwords/keys beyond 128 bits. It aims to realize MePKC (Memorizable Public-Key Cryptography)[56] using a fully memorizable private key on top of current private-key management technologies such as encrypted private key, split private key, and roaming private key.

Cognitive passwords use question and answer cue/response pairs to verify identity.

"The Password is dead"

That "the password is dead" is a recurring idea in computer security. It often accompanies arguments that the replacement of passwords by a more secure means of authentication is both necessary and imminent. This claim has been made by numerous people at least since 2004. Notably, Bill Gates, speaking at the 2004 RSA Conference, predicted the demise of passwords, saying "they just don't meet the challenge for anything you really want to secure."[47] In 2011 IBM predicted that, within five years, "You will never need a password again."[57] Matt Honan, a journalist at Wired who was the victim of a hacking incident, wrote in 2012 that "The age of the password has come to an end."[58] Heather Adkins, manager of Information Security at Google, said in 2013 that "passwords are done at Google."[59] Eric Grosse, VP of security engineering at Google, stated that "passwords and simple bearer tokens, such as cookies, are no longer sufficient to keep users safe."[60] Christopher Mims, writing in the Wall Street Journal, said the password "is finally dying" and predicted their replacement by device-based authentication.[61] Avivah Litan of Gartner said in 2014, "Passwords were dead a few years ago. Now they are more than dead."[62] The reasons given often include reference to the usability as well as security problems of passwords.

The claim that "the password is dead" is often used by advocates of alternatives to passwords, such as biometrics, two-factor authentication or single sign-on. Many initiatives have been launched with the explicit goal of eliminating passwords. These include Microsoft's Cardspace, the Higgins project, the Liberty Alliance, NSTIC, the FIDO Alliance and various Identity 2.0 proposals. Jeremy Grant, head of the NSTIC initiative (the US Dept. of Commerce National Strategy for Trusted Identities in Cyberspace), declared "Passwords are a disaster from a security perspective, we want to shoot them dead."[63] The FIDO Alliance promises a "passwordless experience" in its 2015 specification document.[64]

In spite of these predictions and efforts to replace them, passwords still appear to be the dominant form of authentication on the web. In "The Persistence of Passwords," Cormac Herley and Paul van Oorschot suggest that every effort should be made to end the "spectacularly incorrect assumption" that passwords are dead.[65] They argue that "no other single technology matches their combination of cost, immediacy and convenience" and that "passwords are themselves the best fit for many of the scenarios in which they are currently used."

Website password systems

Passwords are used on websites to authenticate users and are usually maintained on the Web server, meaning the browser on a remote system sends a password to the server (by HTTP POST), the server checks the password and sends back the relevant content (or an access-denied message). This process eliminates the possibility of local reverse engineering, as the code used to authenticate the password does not reside on the local machine.

Transmission of the password, via the browser, in plaintext means it can be intercepted along its journey to the server. Many web authentication systems use SSL to establish an encrypted session between the browser and the server; this is usually the underlying meaning of claims to have a "secure website". This is done automatically by the browser and increases the integrity of the session, assuming neither end has been compromised and that the SSL/TLS implementations used are high-quality ones.

History of passwords

Passwords or watchwords have been used since ancient times. Polybius describes the system for the distribution of watchwords in the Roman military as follows:

The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street,a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword — that is a wooden tablet with the word inscribed on it – takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter areobliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.[66]

Passwords in military use evolved to include not just a password, but a password and a counterpassword; for example, in the opening days of the Battle of Normandy, paratroopers of the U.S. 101st Airborne Division used a password — flash — which was presented as a challenge, and answered with the correct response — thunder. The challenge and response were changed every three days. American paratroopers also famously used a device known as a "cricket" on D-Day in place of a password system as a temporarily unique method of identification; one metallic click given by the device in lieu of a password was to be met by two clicks in reply.[67]

Passwords have been used with computers since the earliest days of computing. MIT's CTSS, one of the first time-sharing systems, was introduced in 1961. It had a LOGIN command that requested a user password. "After typing PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password with privacy."[68] In the early 1970s, Robert Morris developed a system of storing login passwords in a hashed form as part of the Unix operating system. The system was based on a simulated Hagelin rotor crypto machine, and first appeared in 6th Edition Unix in 1974. A later version of his algorithm, known as crypt(3), used a 12-bit salt and invoked a modified form of the DES algorithm 25 times to reduce the risk of pre-computed dictionary attacks.[69]
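The two ideas behind crypt(3) — a random salt to defeat precomputed dictionaries, and repeated iteration to make each guess slower — can be sketched with a modern primitive. This is not the crypt(3) algorithm itself (which used a modified DES); SHA-256 and the function below stand in for illustration only.

```python
import hashlib
import os

def slow_salted_hash(password: str, salt: bytes, rounds: int = 25) -> bytes:
    # Illustrative analogue of crypt(3)'s design: the salt makes each
    # user's hash unique even for identical passwords, and iterating the
    # primitive (DES run 25 times in the original; SHA-256 here) slows
    # down every guess an attacker must try.
    state = salt + password.encode()
    for _ in range(rounds):
        state = hashlib.sha256(state).digest()
    return state

salt = os.urandom(2)  # crypt(3) used a 12-bit salt; 2 bytes here for illustration
stored = slow_salted_hash("secret", salt)

# Verification recomputes the hash with the same stored salt:
assert slow_salted_hash("secret", salt) == stored
assert slow_salted_hash("Secret", salt) != stored
```

With a 12-bit salt there are 4,096 possible variants of each password's hash, so an attacker's precomputed table must be 4,096 times larger to cover them all.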

use ATMs to store their collected money into their bank account.

E-commerce (also written as e-Commerce, eCommerce or similar variants), short for electronic commerce, is trading in products or services using computer networks, such as the Internet. Electronic commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. Modern electronic commerce typically uses the World Wide Web for at least one part of the transaction's life cycle, although it may also use other technologies such as e-mail.

E-commerce businesses may employ some or all of the following:

Online shopping web sites for retail sales direct to consumers
Providing or participating in online marketplaces, which process third-party business-to-consumer or consumer-to-consumer sales
Business-to-business buying and selling
Gathering and using demographic data through web contacts and social media
Business-to-business electronic data interchange
Marketing to prospective and established customers by e-mail or fax (for example, with newsletters)
Engaging in pretail for launching new products and services

Contents


1 Timeline
2 Business applications
3 Governmental regulation
4 Forms
5 Global trends
6 Impact on markets and retailers
7 Impact on supply chain management
8 The social impact of e-commerce
9 Distribution channels
10 Examples of new e-commerce systems
11 See also
12 References
13 Further reading
14 External links

Timeline
A timeline for the development of e-commerce:

1971 or 1972: The ARPANET is used to arrange a sale between students at the Stanford Artificial Intelligence Laboratory and the Massachusetts Institute of Technology, later described as "the seminal act of e-commerce" in John Markoff's book What the Dormouse Said.[1]

1979: Michael Aldrich demonstrates the first online shopping system.[2]

1981: Thomson Holidays UK is the first business-to-business online shopping system to be installed.[3]

1982: Minitel was introduced nationwide in France by France Télécom and used for online ordering.

1983: The California State Assembly holds its first hearing on "electronic commerce" in Volcano, California.[4] Testifying are CPUC, MCI Mail, Prodigy, CompuServe, Volcano Telephone, and Pacific Telesis. (Not permitted to testify is Quantum Technology, later to become AOL.)

1984: Gateshead SIS/Tesco is the first B2C online shopping system,[5] and Mrs Snowball, 72, is the first online home shopper.[6]

1984: In April 1984, CompuServe launches the Electronic Mall in the USA and Canada. It is the first comprehensive electronic commerce service.[7]

Page 258: Tugas tik di kelas xi ips 3

1990: Tim Berners-Lee writes the first web browser, WorldWideWeb, using a NeXT computer.[8]

1992: Book Stacks Unlimited in Cleveland opens a commercial sales website (www.books.com) selling books online with credit card processing.

1992: St. Martin's Press publishes J.H. Snider and Terra Ziporyn's Future Shop: How New Technologies Will Change the Way We Shop and What We Buy.[9]

1993: Paget Press releases edition No. 3[10] of the first app store, The Electronic AppWrapper.[11]

1994: Netscape releases the Navigator browser in October under the code name Mozilla. Netscape 1.0 is introduced in late 1994 with SSL encryption that made transactions secure.

1994: Ipswitch IMail Server becomes the first software available online for sale and immediate download via a partnership between Ipswitch, Inc. and OpenMarket.

1994: "Ten Summoner's Tales" by Sting becomes the first secure online purchase.[12]

1995: The US National Science Foundation lifts its former strict prohibition of commercial enterprise on the Internet.[13]

1995: On Thursday 27 April 1995, the purchase of a book by Paul Stanfield, Product Manager for CompuServe UK, from WH Smith's shop within CompuServe's UK Shopping Centre is the UK's first secure transaction on a national online shopping service. The shopping service at launch featured W H Smith, Tesco, Virgin Megastores/Our Price, Great Universal Stores (GUS), Interflora, Dixons Retail, Past Times, PC World (retailer) and Innovations.

1995: Jeff Bezos launches Amazon.com, and the first commercial-free 24-hour, internet-only radio stations, Radio HK and NetRadio, start broadcasting. eBay is founded by computer programmer Pierre Omidyar as AuctionWeb.

1996: IndiaMART B2B marketplace established in India.
1996: ECPlaza B2B marketplace established in Korea.
1996: The UK e-commerce platform Sellerdeck, formerly Actinic, is established.

1998: Electronic postal stamps can be purchased and downloaded for printing from the Web.[14]

Page 259: Tugas tik di kelas xi ips 3

1998: Cbazaar, formerly chennaibazaar.com, India's first B2C eCommerce portal, is launched by Rajesh Nahar and Ritesh Katariya.

1999: Alibaba Group is established in China. Business.com is sold for US $7.5 million to eCompanies; the domain had been purchased in 1997 for US $149,000. The peer-to-peer filesharing software Napster launches. ATG Stores launches to sell decorative items for the home online.

2000: The dot-com bust.
2001: Alibaba.com achieved profitability in December 2001.
2002: eBay acquires PayPal for $1.5 billion.[15] Niche retail companies Wayfair and NetShops are founded with the concept of selling products through several targeted domains, rather than a central portal.
2003: Amazon.com posts first yearly profit.
2003: Bossgoo B2B marketplace established in China.
2004: DHgate.com, China's first online b2b transaction platform, is established, forcing other b2b sites to move away from the "yellow pages" model.[16]

2007: Business.com acquired by R.H. Donnelley for $345 million.[17]

2009: Zappos.com acquired by Amazon.com for $928 million.[18] Retail Convergence, operator of private sale website RueLaLa.com, acquired by GSI Commerce for $180 million, plus up to $170 million in earn-out payments based on performance through 2012.[19]

2010: Groupon reportedly rejects a $6 billion offer from Google. Instead, the group-buying website goes ahead with an IPO on 4 November 2011. It was the largest IPO since Google.[20][21]

2011: Quidsi.com, parent company of Diapers.com, acquired by Amazon.com for $500 million in cash plus $45 million in debt and other obligations.[22] GSI Commerce, a company specializing in creating, developing and running online shopping sites for brick-and-mortar businesses, acquired by eBay for $2.4 billion.[23]

2014: Overstock.com processes over $1 million in Bitcoin sales.[24] India's e-commerce industry is estimated to have grown more than 30% from 2012, to $12.6 billion in 2013.[25] US eCommerce and Online Retail sales projected to reach $294 billion, an increase of 12 percent over 2013 and 9% of all retail sales.[26] Alibaba Group has the largest initial public offering ever, worth $25 billion.

Business applications

An example of an automated online assistant on a merchandising website.

Some common applications related to electronic commerce are:

Document automation in supply chain and logistics
Domestic and international payment systems
Enterprise content management
Group buying
Print on demand
Automated online assistant
Newsgroups
Online shopping and order tracking
Online banking
Online office suites
Shopping cart software
Teleconferencing
Electronic tickets
Social networking
Instant messaging
Pretail
Digital wallet

Governmental regulation


In the United States, some electronic commerce activities are regulated by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive.[27] Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information.[28] As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC.

The Ryan Haight Online Pharmacy Consumer Protection Act of 2008, which came into law in 2008, amends the Controlled Substances Act to address online pharmacies.[29]

Conflict of laws in cyberspace is a major hurdle for the harmonisation of legal frameworks for e-commerce around the world. To give uniformity to e-commerce law around the world, many countries adopted the UNCITRAL Model Law on Electronic Commerce (1996).[30]

Internationally, there is the International Consumer Protection and Enforcement Network (ICPEN), which was formed in 1991 from an informal network of government customer fair trade organisations. Its stated purpose was to find ways of co-operating on tackling consumer problems connected with cross-border transactions in both goods and services, and to help ensure exchanges of information among the participants for mutual benefit and understanding. From this came Econsumer.gov, an ICPEN initiative since April 2001. It is a portal for reporting complaints about online and related transactions with foreign companies.

There is also the Asia-Pacific Economic Cooperation (APEC), established in 1989 with the vision of achieving stability, security and prosperity for the region through free and open trade and investment. APEC has an Electronic Commerce Steering Group and also works on common privacy regulations throughout the APEC region.

In Australia, trade is covered under the Australian Treasury Guidelines for electronic commerce,[31] and the Australian Competition and Consumer Commission[32] regulates and offers advice on how to deal with businesses online,[33][34] including specific advice on what happens if things go wrong.[35]

In the United Kingdom, the Financial Services Authority (FSA)[36] was formerly the regulating authority for most aspects of the EU's Payment Services Directive (PSD), until its replacement in 2013 by the Prudential Regulation Authority and the Financial Conduct Authority.[37] The UK implemented the PSD through the Payment Services Regulations 2009 (PSRs), which came into effect on 1 November 2009. The PSRs affect firms providing payment services and their customers. These firms include banks, non-bank credit card issuers, non-bank merchant acquirers, e-money issuers, etc. The PSRs created a new class of regulated firms known as payment institutions (PIs), which are subject to prudential requirements. Article 87 of the PSD required the European Commission to report on the implementation and impact of the PSD by 1 November 2012.[38]

In India, the Information Technology Act 2000 governs the basic applicability of e-commerce.

In China, the Telecommunications Regulations of the People's Republic of China (promulgated on 25 September 2000) designated the Ministry of Industry and Information Technology (MIIT) as the government department regulating all telecommunications-related activities, including electronic commerce.[39] Released on the same day, the Administrative Measures on Internet Information Services were the first administrative regulation to address profit-generating activities conducted through the Internet, laying the foundation for future regulations governing e-commerce in China.[40] On 28 August 2004, the eleventh session of the tenth NPC Standing Committee adopted the Electronic Signature Law, which regulates data messages, electronic signature authentication and legal liability issues. It is considered the first law in China's e-commerce legislation. It was a milestone in the course of improving China's electronic commerce legislation, and it marks the entry of China's electronic commerce legislation into a stage of rapid development.[41]

Forms
Contemporary electronic commerce involves everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services to facilitate other types of electronic commerce.

On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce.

Aside from traditional e-commerce, the terms m-commerce (mobile commerce) and, around 2013, t-commerce[42] have also been used.

Global trends
In 2010, the United Kingdom had the biggest e-commerce market in the world when measured by the amount spent per capita.[43] The Czech Republic is the European country where e-commerce delivers the biggest contribution to enterprises' total revenue: almost a quarter (24%) of the country's total turnover is generated via the online channel.[44]

Among emerging economies, China's e-commerce presence continues to expand every year. With 384 million internet users, China's online shopping sales rose to $36.6 billion in 2009; one of the reasons behind the huge growth has been the improved trust level of shoppers, as Chinese retailers have been able to help consumers feel more comfortable shopping online.[45] China's cross-border e-commerce is also growing rapidly. E-commerce transactions between China and other countries increased 32% to 2.3 trillion yuan ($375.8 billion) in 2012 and accounted for 9.6% of China's total international trade.[46] In 2013, Alibaba had an e-commerce market share of 80% in China.[47]

Page 264: Tugas tik di kelas xi ips 3

Other BRIC countries are witnessing accelerated growth of eCommerce as well. Brazil's eCommerce is growing quickly, with retail eCommerce sales expected to grow at a healthy double-digit pace through 2014; by 2016, eMarketer expects retail ecommerce sales in Brazil to reach $17.3 billion.[48] India has an internet user base of about 243.2 million as of January 2014. Despite having the third-largest user base in the world, India's Internet penetration is low compared to markets like the United States, United Kingdom or France, but it is growing at a much faster rate, adding around 6 million new entrants every month. The industry consensus is that growth is at an inflection point. In India, cash on delivery is the most preferred payment method, accounting for 75% of e-retail activity.

E-Commerce has become an important tool for small and large businesses worldwide, not only to sell to customers, but also to engage them.[49][50]

In 2012, ecommerce sales topped $1 trillion for the first time in history.[51]

Mobile devices are playing an increasing role in the mix of eCommerce. Some estimates show that purchases made on mobile devices will make up 25% of the market by 2017.[52] According to the Cisco Visual Networking Index,[53] in 2014 the number of mobile devices will exceed the world's population.

In the past 10 years, e-commerce has been in a period of rapid development. Cross-border e-commerce brings Internet thinking to traditional import and export trade. It makes international trade more convenient, free and open for cooperation between countries, incorporating both developed and developing countries. In the short term, developing countries may be limited by their IT infrastructure, but in the long term they can overcome that barrier by developing their IT facilities and continuing to close the gap with developed countries.[54] At the moment, developing countries like China and India are developing e-commerce very rapidly; China's Alibaba raised the highest financing capital (£15 billion) ever for an e-commerce company. In addition, China is becoming the biggest e-commerce provider in the world.[55] The number of Internet users in China amounts to 600 million, double the number of US users.[56]

For traditional businesses, one research study stated that information technology and cross-border e-commerce are a good opportunity for the rapid development and growth of enterprises. Many companies have invested heavily in mobile applications. The DeLone and McLean Model states that three perspectives contribute to a successful e-business: information system quality, service quality and user satisfaction.[57] Freed from the limits of time and space, businesses have more opportunities to reach out to customers around the world and to cut out unnecessary intermediate links, thereby reducing costs; they can also benefit from one-on-one analysis of large customer datasets to achieve a high degree of personal customization in strategic planning, fully enhancing the core competitiveness of the company's products.[58]

Impact on markets and retailers
Economists have theorized that e-commerce ought to lead to intensified price competition, as it increases consumers' ability to gather information about products and prices. Research by four economists at the University of Chicago has found that the growth of online shopping has also affected industry structure in two areas that have seen significant growth in e-commerce: bookshops and travel agencies. Generally, larger firms are able to use economies of scale and offer lower prices. The lone exception to this pattern has been the very smallest category of bookseller, shops with between one and four employees, which appear to have withstood the trend.[59] Depending on the category, e-commerce may shift the switching costs—procedural, relational, and financial—experienced by customers.[60]

Individuals and businesses involved in e-commerce, whether buyers or sellers, rely on Internet-based technology to accomplish their transactions. E-commerce is recognized for its ability to allow businesses to communicate and to form transactions anytime and anyplace. Whether an individual is in the US or overseas, business can be conducted through the internet. The power of e-commerce allows geophysical barriers to disappear, making all consumers and businesses on earth potential customers and suppliers. Thus, switching barriers and switching costs may shift.[60] eBay is a good example of an e-commerce business: individuals and businesses are able to post their items and sell them around the globe.[61]

In e-commerce activities, supply chain and logistics are two of the most crucial factors to consider. Typically, cross-border logistics takes a few weeks per round trip, and this low efficiency of the supply chain service greatly reduces customer satisfaction.[62] Some researchers have stated that combining e-commerce competence and IT setup could enhance a company's overall business value.[63] Others have stated that e-commerce businesses need to consider establishing warehouse centers in foreign countries to create a highly efficient logistics system, which improves not only customer satisfaction but also customer loyalty. A recently published comprehensive meta-analysis shows that e-service quality is determined by many different factors, and influences customer satisfaction and repurchase intentions among customers.[64]

Some researchers have investigated whether a company wanting to enhance international customers' satisfaction needs to culturally adapt its website to each particular country, rather than relying solely on its home-country model. One study found, however, that a German company treated its international websites, such as its UK and US online marketing, the same as its local model.[65] A company can save money and make decisions quickly via an identical strategy in different countries; however, an opportunity cost can arise: if the local strategy does not match a new market, the company could lose potential customers.[66]

Impact on supply chain management
For a long time, companies were troubled by the gap between the benefits that supply chain technology offers and the solutions to deliver those benefits. The emergence of e-commerce, however, has provided a more practical and effective way of delivering the benefits of the new supply chain technologies.[67]

E-commerce has the capability to integrate all inter-company and intra-company functions, meaning that the three flows of the supply chain (physical flow, financial flow and information flow) can also be affected by e-commerce. The effects on physical flows improved the movement of products and inventory levels for companies; for information flows, e-commerce optimised information-processing capacity beyond what companies used to have; and for financial flows, e-commerce allows companies to have more efficient payment and settlement solutions.[67]

In addition, e-commerce has a more sophisticated level of impact on supply chains. Firstly, the performance gap will be eliminated, since companies can identify gaps between different levels of supply chains by electronic means. Secondly, as a result of the emergence of e-commerce, new capabilities such as implementing ERP systems have helped companies to manage operations with customers and suppliers, although these new capabilities are still not fully exploited. Thirdly, technology companies will keep investing in new e-commerce software solutions, as they expect a return on investment. Fourthly, e-commerce helps to solve many issues that companies may find difficult to cope with, such as political barriers or cross-country changes. Finally, e-commerce provides companies a more efficient and effective way to collaborate with each other within the supply chain.[67]

The social impact of e-commerce
As e-commerce and its unique appeal have gradually emerged, once-unheard-of vocabulary such as virtual enterprise, virtual bank, network marketing, online shopping, online payment and online advertising has become familiar to people. This reflects the huge impact e-commerce has on the economy and society.[68] For instance, B2B is a rapidly growing business worldwide that leads to lower costs, improves economic efficiency and also brings growth in employment.[69]


To understand how e-commerce has affected society and the economy, this article considers the issues below:

1. E-commerce has changed the relative importance of time, but as one of the pillar indicators of a country's economic state, the importance of time should not be ignored.

2. E-commerce offers consumers and enterprises the various information they need, making information totally transparent, which will force enterprises to no longer rely on space or advertising alone to raise their competitive edge.[70] Moreover, in theory, perfect competition between consumer sovereignty and industry will maximize social welfare.[71]

3. In past economic activity, large enterprises frequently had the advantage of information resources, at the expense of consumers. Nowadays, transparent and real-time information protects the rights of consumers, because consumers can use the internet to pick out the portfolio that benefits them. The competitiveness of enterprises will be much more visible than before; consequently, social welfare would be improved by the development of e-commerce.

4. The new economy led by e-commerce changes the humanistic spirit as well, above all employee loyalty.[72] In a competitive market, the employee's level of professionalism becomes crucial for an enterprise in its niche market. Enterprises must pay attention to how to build up their inner culture and a set of interactive mechanisms; this is the prime problem for them. Furthermore, though e-commerce decreases information costs and transaction costs, its development also requires people to be highly computer literate. Hence, a more humanistic attitude to work is another project for enterprises to develop. Life is the root of all, and high technology is merely an assistive tool to support our quality of life.

E-commerce is not a new industry so much as a new economic model. Most people agree that e-commerce will indeed be important and significant for the economy and society in the future, yet there was a somewhat clueless feeling about it at the beginning; this very problem proves that e-commerce is a sort of incorporeal revolution.[73] Generally speaking, as a type of business procedure, e-commerce is going to lead an unprecedented revolution in the world, with influence far exceeding commercial affairs themselves.[74] Beyond the areas mentioned above, in law, education, culture and policy, the impact of e-commerce will continue to rise. E-commerce is truly taking human beings into the information society.

Distribution channels

E-commerce has grown in importance as companies have adopted pure-click and brick-and-click channel systems. The channel systems adopted by companies can be distinguished as follows:

Pure-click or pure-play companies are those that have launched a website without any previous existence as a firm.

Bricks-and-clicks companies are those existing companies that have added an online site for e-commerce.

Click-to-brick companies are online retailers that later open physical locations to supplement their online efforts.[75]

Examples of new e-commerce systems
According to the research company eMarketer, "by 2017, 65.8 per cent of Britons will use smartphones" (cited by Williams, 2014).

Bringing the online experience into the real world also allows the development of the economy and the interaction between stores and customers. A great example of this new e-commerce system is what the Burberry store in London did in 2012. They refurbished the entire store with numerous big screens and photo studios, and also provided a stage for live acts. Moreover, on the digital screens across the store, fashion-show images and advertising campaigns are displayed (Williams, 2014). In this way, the experience of purchasing becomes more vivid and entertaining, while the online and offline components work together.

Another example is the Kiddicare smartphone app, with which consumers can compare prices. The app allows people to identify the location of sale products and to check whether the item they are looking for is in stock, or whether it can be ordered online without going to the 'real' store (Williams, 2014). In the United States, the Walmart app allows consumers to check product availability and prices both online and offline. Moreover, shoppers can also add items to their shopping list by scanning them, see their details and information, and check purchasers' ratings and reviews.


Computer

A computer is a general-purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically. Since a sequence of operations can be readily changed, the computer can solve more than one kind of problem.

Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices allow information to be retrieved from an external source, and the result of operations saved and retrieved.
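How a sequencing and control unit changes the order of operations in response to stored information can be sketched with a toy stored-program machine. The instruction set below is invented purely for illustration; real CPUs work on binary machine code, but the fetch-decode-execute cycle and the conditional jump capture the same idea.

```python
# A toy stored-program machine: the program is data held in memory, and
# a control unit (the program counter) alters the order of operations
# via a conditional jump. The mnemonics are invented for this sketch.
def run(program):
    acc = 0          # accumulator (the "processing element"'s register)
    pc = 0           # program counter (the "sequencing and control unit")
    while pc < len(program):
        op, arg = program[pc]   # fetch and decode
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ":       # jump if accumulator is non-zero
            if acc != 0:
                pc = arg        # control flow changes based on stored data
                continue
        elif op == "HALT":
            break
        pc += 1
    return acc

# Count down from 3 to 0: the JNZ repeats the ADD until acc reaches 0.
program = [
    ("LOAD", 3),
    ("ADD", -1),
    ("JNZ", 1),
    ("HALT", None),
]
print(run(program))  # 0
```

Because the loop's exit depends on the value in the accumulator, the same machine can solve different problems just by loading a different program, which is the sense in which "the computer can solve more than one kind of problem".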

Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[1]

Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are what most people consider as "computers." However, the embedded computers found in many devices, from MP3 players to fighter aircraft and from electronic toys to industrial robots, are the most numerous.

Contents
1 Etymology
2 History
  2.1 Pre-twentieth century
  2.2 First general-purpose computing device
  2.3 Later analog computers
  2.4 Digital computer development
    2.4.1 Electromechanical
    2.4.2 Vacuum tubes and digital electronic circuits
    2.4.3 Stored programs
    2.4.4 Transistors
    2.4.5 Integrated circuits
  2.5 Mobile computers become dominant
3 Programs
  3.1 Stored program architecture
  3.2 Machine code
  3.3 Programming language
    3.3.1 Low-level languages
    3.3.2 High-level languages/Third Generation Language
  3.4 Fourth Generation Languages
  3.5 Program design
  3.6 Bugs
4 Components
  4.1 Control unit
  4.2 Central processing unit (CPU)
  4.3 Arithmetic logic unit (ALU)
  4.4 Memory
  4.5 Input/output (I/O)
  4.6 Multitasking
  4.7 Multiprocessing
5 Networking and the Internet
  5.1 Computer architecture paradigms
6 Misconceptions
  6.1 Unconventional computing
7 Future
8 Further topics
  8.1 Artificial intelligence
9 Hardware
  9.1 History of computing hardware
  9.2 Other hardware topics
10 Software
11 Languages
  11.1 Firmware
  11.2 Liveware
12 Types of computers
  12.1 Based on uses
  12.2 Based on sizes
13 Input Devices
14 Output Devices
15 Professions and organizations
16 See also
17 Notes
18 References
19 External links


Etymology

The first known use of the word "computer" was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number." It referred to a person who carried out calculations, or computations. The word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.[3]

History

Main article: History of computing hardware

Pre-twentieth century

The Ishango bone

Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers.[4][5] The use of counting rods is one example.


Suanpan (the number represented on this abacus is 6,302,715,408)

The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.

The ancient Greek-designed Antikythera mechanism, dating from between 150 and 100 BC, is the world's oldest analog computer.

The Antikythera mechanism is believed to be the earliest mechanical analog "computer", according to Derek J. de Solla Price.[6] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.

Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.[7] The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is
often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer[8][9] and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235.[10] Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[11] an early fixed-wired knowledge processing machine[12] with a gear train and gear-wheels,[13] circa 1000 AD.

The sector, a calculating instrument used for solving problemsin proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.

The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.

A slide rule

The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. To save space and for ease of reading, these are typically circular devices rather than the classic linear slide rule shape. A popular example is the E6B.

In the 1770s Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be
produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.[14]

The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.

The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 Lord Kelvin had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[15] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.

First general-purpose computing device

A portion of Babbage's Difference engine.

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered
the "father of the computer",[16] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[17][18]

The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to difficulties not only of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

Later Analog computers

Sir William Thomson's third tide-predicting machine design, 1879–81


During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[19]

The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[15]

The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious.

By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remain in use in some specialized applications such as education (control systems) and aircraft (slide rule).

Digital computer development

The principle of the modern computer was first described by mathematician and pioneering computer scientist Alan Turing, who set out the idea in his seminal 1936 paper,[20] On Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.


He also introduced the notion of a 'Universal Machine' (now known as a Universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[21] Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
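To make the idea of a Turing machine concrete, the following is a minimal sketch in Python. The rule-table format and the bit-flipping example machine are illustrative choices made here, not anything taken from the source; a real universal machine would itself read such a rule table from its tape.

```python
# A Turing machine: a finite table of rules
# (state, symbol) -> (symbol to write, head move, next state)
# driving a read/write head over an unbounded tape.

def run_turing_machine(rules, tape, state="start", blank=" "):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back in positional order.
    return "".join(cells[i] for i in sorted(cells)).strip()

# Example machine: flip every bit, halt at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run_turing_machine(flip, "1011"))  # -> 0100
```

Despite its simplicity, this rule-table-plus-tape model is exactly the object Turing proved results about; adding more states and symbols suffices to express any algorithm.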

Electromechanical

By 1938 the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.

Replica of Zuse's Z3, the first fully automatic, digital (electromechanical) computer.

Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.[22]

In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully
automatic digital computer.[23][24] The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz.[25] Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.[26] The Z3 was probably a complete Turing machine.

Vacuum tubes and digital electronic circuits

Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[19] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[27] the first "automatic electronic digital computer".[28] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[29]

Colossus was the first electronic digital programmable computing device, and was used to break German ciphers during World War II.


During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[29] He spent eleven months from early February 1943 designing and building the first Colossus.[30] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[31] and attacked its first message on 5 February.[29]

Colossus was the world's first electronic digital programmable computer.[19] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1500 thermionic valves (tubes), but Mark II, with 2400 valves, was both 5 times faster and simpler to operate than Mark I, greatly speeding the decoding process.[32][33]

ENIAC was the first Turing-complete device, and performed ballistics trajectory calculations for the United States Army.

The US-built ENIAC[34] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic
machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches.

It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[35]

Stored programs

A section of the Manchester Small-Scale Experimental Machine, the first stored-program computer.

Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring the machine.[29] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report ‘Proposed Electronic Calculator’ was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[19]


Ferranti Mark 1, c. 1951.

The Manchester Small-Scale Experimental Machine, nicknamed Baby, was the world's first stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[36] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[37] Although the computer was considered "small and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer.[38] As soon as the SSEM had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1.

The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.[39] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[40] In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951[41] and ran the world's first regular routine office computer job.

Transistors


A bipolar junction transistor

The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, indefinite service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[42] Their first transistorised computer, the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[43] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[44][45]

Integrated circuits

The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[46]


The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[47] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[48] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated.”[49][50] Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[51] His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.

This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[52] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[53]

Mobile computers become dominant

With the continued miniaturization of computing resources, and advancements in portable battery life, portable computers grew in popularity in the 2000s.[54] The same developments that spurred the growth of laptop computers and other portable computers allowed manufacturers to integrate computing resources into cellular phones. These so-called smartphones and tablets run on a variety of operating systems and have become the dominant computing device on the market, with manufacturers reporting having shipped an estimated 237 million devices in 2Q 2013.[55]

Programs

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program)
can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language.

In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Stored program architecture

Main articles: Computer program and Computer programming

Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England

This section applies to most common RAM machine-based computers.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on
the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:

begin:
        addi $8, $0, 0        # initialize sum to 0
        addi $9, $0, 1        # set first number to add = 1
loop:
        slti $10, $9, 1001    # check if the number is still 1000 or less
        beq  $10, $0, finish  # if the number exceeds 1000 then exit
        add  $8, $8, $9       # update sum
        addi $9, $9, 1        # get next number
        j    loop             # repeat the summing process
finish:
        add  $2, $8, $0       # put sum in output register

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.
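For comparison, the same summation can be sketched in a high-level language such as Python, where the loop test and the jump back to the top of the loop are implicit in the while statement rather than spelled out as branch instructions:

```python
# Sum the integers from 1 to 1,000, mirroring the assembly loop.
total = 0       # corresponds to the sum register
number = 1      # corresponds to the counter register
while number <= 1000:   # the loop test (slti/beq in the assembly)
    total += number     # update sum
    number += 1         # get next number
print(total)  # -> 500500
```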

Machine code


In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program[citation needed], architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
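The stored-program idea can be illustrated with a toy machine simulator. The opcodes below (1 = LOAD, 2 = ADD, 3 = STORE, 0 = HALT) are invented for this sketch and belong to no real instruction set; the point is that the program is just numbers held in the same memory array as the data it manipulates:

```python
# A toy von Neumann machine: one accumulator, one program counter,
# and a single memory array holding both instructions and data.

def run(memory):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while True:
        opcode, operand = memory[pc], memory[pc + 1]
        pc += 2
        if opcode == 1:     # LOAD addr:  acc = memory[addr]
            acc = memory[operand]
        elif opcode == 2:   # ADD addr:   acc += memory[addr]
            acc += memory[operand]
        elif opcode == 3:   # STORE addr: memory[addr] = acc
            memory[operand] = acc
        elif opcode == 0:   # HALT
            return memory

# Program: memory[12] = memory[10] + memory[11]; cells 10-12 hold data.
memory = [1, 10,    # LOAD 10
          2, 11,    # ADD 11
          3, 12,    # STORE 12
          0, 0,     # HALT
          0, 0,     # (unused padding)
          7, 35, 0]  # data: 7, 35, and a slot for the result
print(run(memory)[12])  # -> 42
```

Because the program is ordinary numbers in memory, it could itself be loaded, added to, or stored by other instructions, which is exactly the property the paragraph above describes.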

While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[56] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
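An assembler's core job, translating mnemonics into numeric opcodes, can be sketched in a few lines. The mnemonics and the numbers assigned to them here are made up for illustration and do not correspond to any real machine's encoding:

```python
# A toy assembler: turn a mnemonic listing into a list of numbers
# (the machine-language form described above).
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "JUMP": 4, "HALT": 0}

def assemble(source):
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, *operand = line.split()
        machine_code.append(OPCODES[mnemonic])
        # Instructions without an operand (e.g. HALT) get a 0 filler.
        machine_code.append(int(operand[0]) if operand else 0)
    return machine_code

program = """
LOAD 10
ADD 11
STORE 12
HALT
"""
print(assemble(program))  # -> [1, 10, 2, 11, 3, 12, 0, 0]
```

A real assembler also resolves label names into addresses and handles many instruction formats, but the mnemonic-to-number translation shown here is the essence of the task.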


A 1970s punched card containing one line from a FORTRAN program. The card reads: “Z(1) = Y + W(1)” and is labeled “PROJ039” for identification purposes.

Programming language

Main article: Programming language

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.

Low-level languages

Main article: Low-level programming language

Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[57]

High-level languages/Third Generation Language

Main article: High-level programming language

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[58]


High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

Fourth Generation Languages

These 4G languages are less procedural than 3G languages. The benefit of a 4GL is that it provides ways to obtain information without requiring the direct help of a programmer. An example of a 4GL is SQL.
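The declarative character of SQL can be seen in a small example using Python's built-in sqlite3 module. The table and column names are invented for the demonstration; the point is that the query states *what* information is wanted and leaves *how* to retrieve it to the database engine:

```python
# SQL as a 4GL: no loops or search procedure are written by hand.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE students (name TEXT, score INTEGER)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [("Lina", 85), ("Taufik", 92), ("Wandi", 78)])

# Declarative query: describe the result, not the algorithm.
rows = conn.execute(
    "SELECT name FROM students WHERE score > 80 ORDER BY name"
).fetchall()
print([name for (name,) in rows])  # -> ['Lina', 'Taufik']
```

A 3GL solution would instead loop over the records and test each one explicitly; here that procedural detail is the engine's responsibility.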

Program design


Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Bugs

Main article: Software bug

The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer

Errors in computer programs are called “bugs.” They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to “hang,” becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[59]

Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term “bugs” in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[60]

Components

Main articles: Central processing unit and Microprocessor

Video demonstrating the standard components of a "slimline" computer

A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.

Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a “1”, and when off it represents a “0” (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
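
The idea of circuits arranged into gates can be sketched in Python, with functions standing in for gates (a toy model, not real hardware): two gates are enough to build a one-bit "half adder".

```python
# Toy logic gates operating on single bits (0 or 1).
def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    """One-bit addition built purely from gates: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a}+{b} -> sum={s} carry={c}")
```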

Control unit

Main articles: CPU design and Control unit

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system

The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[61] Control systems in advanced computers may change the order of execution of some instructions to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[62]

The control system's function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

1. Read the code for the next instruction from the cell indicated by the program counter.

2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.

3. Increment the program counter so it points to the next instruction.

4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.

5. Provide the necessary data to an ALU or register.

6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.

7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.

8. Jump back to step (1).
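
The cycle above can be sketched as a toy stored-program machine in Python. The instruction encoding and the tiny instruction set here are invented for illustration; real CPUs differ greatly in detail.

```python
# A toy stored-program machine illustrating steps 1 to 8 above.
# Invented encoding for this sketch: instruction = opcode*100 + address.
HALT, LOAD, ADD, STORE, JUMP = 0, 1, 2, 3, 4

memory = [0] * 32
# Program: acc = mem[20] + mem[21]; store the result in mem[22]; halt.
memory[0] = LOAD * 100 + 20
memory[1] = ADD * 100 + 21
memory[2] = STORE * 100 + 22
memory[3] = HALT * 100
memory[20], memory[21] = 7, 35

pc, acc = 0, 0                        # program counter and accumulator
while True:
    instr = memory[pc]                # 1. fetch from the cell at the PC
    opcode, addr = divmod(instr, 100) # 2. decode into opcode and address
    pc += 1                           # 3. increment the program counter
    if opcode == LOAD:
        acc = memory[addr]            # 4-5. read the data into a register
    elif opcode == ADD:
        acc += memory[addr]           # 6. ask the ALU to add
    elif opcode == STORE:
        memory[addr] = acc            # 7. write the result back to memory
    elif opcode == JUMP:
        pc = addr                     # a "jump" just rewrites the counter
    else:
        break                         # HALT
print(memory[22])  # → 42
```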

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.

Central processing unit (CPU)

The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

Arithmetic logic unit (ALU)

Main article: Arithmetic logic unit

The ALU is capable of performing two classes of operations: arithmetic and logic.[63]

The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”).
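
The claim that complex operations can be broken into simple steps can be sketched in Python: multiplication built from nothing but repeated addition (a deliberately naive illustration, slower than a hardware multiply but functionally equivalent for non-negative integers).

```python
def multiply_by_addition(a, b):
    """Multiply two non-negative integers on a machine whose ALU
    only knows how to add: the complex operation decomposed into
    simple steps, as described above."""
    total = 0
    for _ in range(b):   # add 'a' to the running total, 'b' times
        total += a
    return total

print(multiply_by_addition(6, 7))  # → 42
```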

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.

Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[64] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.

Memory

Main article: Computer data storage

Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357” or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595.” The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
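
The cell model can be sketched directly in Python, with a list standing in for memory (the size and the cell contents are arbitrary, chosen only to mirror the example in the text):

```python
# Memory modeled as a list of numbered cells, as described above.
memory = [0] * 4096

memory[1357] = 123                          # "put 123 into cell 1357"
memory[2468] = 877
memory[1595] = memory[1357] + memory[2468]  # "add cell 1357 to cell 2468
                                            #  and put the answer in 1595"
print(memory[1595])  # → 1000
```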

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
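
Python's int.to_bytes can illustrate these ranges, the two's complement encoding of negatives, and the use of several consecutive bytes for larger numbers:

```python
# One byte distinguishes 256 values.
assert 2 ** 8 == 256

# Unsigned: 0..255 fit in one byte.
print((255).to_bytes(1, "big"))                # b'\xff'
# Signed (two's complement): -128..127 fit in one byte.
print((-128).to_bytes(1, "big", signed=True))  # b'\x80'
print((-1).to_bytes(1, "big", signed=True))    # b'\xff'
# Larger numbers take several consecutive bytes (here, four):
print((100000).to_bytes(4, "big"))
```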

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties:

random-access memory or RAM
read-only memory or ROM

RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[65]

In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Input/output (I/O)

Main article: Input/output

Hard disk drives are common storage devices used with computers.

I/O is the means by which a computer exchanges information with the outside world.[66] Devices that provide input or output to the computer are called peripherals.[67] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Multitasking

Main article: Computer multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[68]

One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time,” then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program is allocated a “slice” of time in turn.[69]
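
This round-robin time-slicing can be sketched with Python generators, where a yield stands in for the interrupt that ends a time slice. This is a cooperative toy model; real systems pre-empt programs with hardware interrupts.

```python
from collections import deque

trace = []  # record of which program ran in which slice

def program(name, steps):
    """A 'program' that runs for a number of time slices, giving up
    the CPU after each one (the yield stands in for an interrupt)."""
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield

# Round-robin scheduler: switch rapidly between the running programs.
ready = deque([program("A", 2), program("B", 3)])
while ready:
    prog = ready.popleft()
    try:
        next(prog)           # run one time slice
        ready.append(prog)   # back of the queue for another turn
    except StopIteration:
        pass                 # this program has finished
print(trace)  # → ['A0', 'B0', 'A1', 'B1', 'B2']
```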

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.

Multiprocessing

Main article: Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[70] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called “embarrassingly parallel” tasks.

Networking and the Internet

Main articles: Computer networking and Internet

Visualization of a portion of the routes on the Internet

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[71]

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[72] The technologies that made the ARPANET possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. “Wireless” networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Computer architecture paradigms

There are many types of computer architectures:

Quantum computer vs Chemical computer
Scalar processor vs Vector processor
Non-Uniform Memory Access (NUMA) computers
Register machine vs Stack machine
Harvard architecture vs von Neumann architecture
Cellular architecture

Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[73]

Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle,capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

Misconceptions

Main articles: Human computer and Harvard Computers

Women as computers in NACA High Speed Flight Station "Computer Room"

A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word “computer” is synonymous with a personal electronic computer, the modern[74] definition of a computer is literally “A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”[75] Any device which processes information qualifies as a computer, especially if the processing is purposeful. Even a human is a computer, in this sense.

Unconventional computing

Main article: Unconventional computing

Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors. However, conceptually computational systems as flexible as a personal computer can be built out of almost anything. For example, a computer can be made out of billiard balls (a billiard ball computer), an often quoted example.[citation needed] More realistically, modern computers are made out of transistors made of photolithographed semiconductors.

Future

There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However, different designs of computers can give very different performance for particular problems; for example, quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.

Further topics

Glossary of computers

Artificial intelligence

A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code.Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning.

Hardware

Main articles: Computer hardware and Personal computer hardware

The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

History of computing hardware

Main article: History of computing hardware

First generation (mechanical/electromechanical)
Calculators: Pascal's calculator, Arithmometer, Difference engine, Quevedo's analytical machines
Programmable devices: Jacquard loom, Analytical engine, IBM ASCC/Harvard Mark I, Harvard Mark II, IBM SSEC, Z1, Z2, Z3

Second generation (vacuum tubes)
Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
Programmable devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22

Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)
Mainframes: IBM 7090, IBM 7080, IBM System/360, BUNCH
Minicomputer: PDP-8, PDP-11, IBM System/32, IBM System/36

Fourth generation (VLSI integrated circuits)
Minicomputer: VAX, IBM System i
4-bit microcomputer: Intel 4004, Intel 4040
8-bit microcomputer: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
16-bit microcomputer: Intel 8088, Zilog Z8000, WDC 65816/65802
32-bit microcomputer: Intel 80386, Pentium, Motorola 68000, ARM
64-bit microcomputer[76]: Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64, ARMv8-A
Embedded computer: Intel 8048, Intel 8051
Personal computer: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet PC, Wearable computer

Theoretical/experimental: Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics based computer

Other hardware topics

Peripheral device (input/output)
Input: Mouse, keyboard, joystick, image scanner, webcam, graphics tablet, microphone
Output: Monitor, printer, loudspeaker
Both: Floppy disk drive, hard disk drive, optical disc drive, teleprinter

Computer buses
Short range: RS-232, SCSI, PCI, USB
Long range (computer networking): Ethernet, ATM, FDDI

Software

Main article: Computer software

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called “firmware.”

Operating system / System software
Unix and BSD: UNIX System V, IBM AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
GNU/Linux: List of Linux distributions, Comparison of Linux distributions
Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows Me, Windows XP, Windows Vista, Windows 7, Windows 8
DOS: 86-DOS (QDOS), IBM PC DOS, MS-DOS, DR-DOS, FreeDOS
Mac OS: Mac OS classic, Mac OS X
Embedded and real-time: List of embedded operating systems
Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs

Library
Multimedia: DirectX, OpenGL, OpenAL
Programming library: C standard library, Standard Template Library

Data
Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
File format: HTML, XML, JPEG, MPEG, PNG

User interface
Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM, Aqua
Text-based user interface: Command-line interface, Text user interface

Application software
Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
Internet access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated development environment, Software performance analysis, Revision control, Software configuration management
Educational: Edutainment, Educational game, Serious game, Flight simulator
Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

Languages

There are thousands of different programming languages—some intended to be general purpose, others useful only for highly specialized applications.

Programming languages

Lists of programming languages

Timeline of programming languages, List of programming languages by category, Generational list of programming languages, List of programming languages, Non-English-based programming languages

Commonly used assembly languages

ARM, MIPS, x86

Commonly used high-level programming languages

Ada, BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal, Object Pascal

Commonly used scripting languages

Bourne script, JavaScript, Python, Ruby, PHP, Perl

Firmware

Firmware is technology that combines hardware and software, such as the BIOS chip inside a computer: the chip (hardware) is located on the motherboard and has the BIOS setup (software) stored in it.

Liveware

The users working on a system are sometimes termed liveware.

Types of computers

Computers can be classified based on their uses or sizes.

Based on uses

Analog computer
Digital computer
Hybrid computer

Based on sizes

Microcomputer
Personal computer
Minicomputer
Mainframe computer
Supercomputer

Input Devices

Input devices carry unprocessed data into the computer, which produces output after the data has been processed; the processing is mainly regulated by the CPU. Input devices may be hand operated or automated. Hand operated input devices include:

Concept keyboard
Trackball
Joystick
Digital camera
Microphone
Touch screen
Video digitizer
Scanner
Graphics tablet
Keyboard
Mouse

Output Devices

The means through which the computer gives output are known as output devices. Some of the output devices are:

Monitor
Printer
Projector
Sound card
Speaker
Video card

Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers involving computers.

Computer-related professions

Hardware-related

Electrical engineering, Electronic engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoengineering

Software-related

Computer science, Computer engineering, Desktop publishing, Human–computer interaction, Information technology, Information systems, Computational science, Software engineering, Video game industry, Web design

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.

Organizations

Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
Professional societies: ACM, AIS, IET, IFIP, BCS
Free/open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation

"Encrypt" redirects here. For the film, see Encrypt (film).

This article is about algorithms for encryption and decryption. For an overview of cryptographic technology in general, see Cryptography.

In cryptography, encryption is the process of encoding messages or information in such a way that only authorized parties can read it.[1] Encryption does not of itself prevent interception, but denies the message content to the interceptor.[2]:374 In an encryption scheme, the intended communication information or message, referred to as plaintext, is encrypted using an encryption algorithm, generating ciphertext that can only be read if decrypted.[2] For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is in principle possible to decrypt the message without possessing the key, but, for a well-designed encryption scheme, large computational resources and skill are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients, but not to unauthorized interceptors.

Types of encryption

Symmetric key encryption

In symmetric-key schemes,[3] the encryption and decryption keys are the same. Communicating parties must have the same key before they can achieve secure communication.
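
A minimal sketch of the symmetric idea, using a one-time-pad-style XOR. This is for illustration only — it is only secure if the key is truly random, kept secret, shared by both parties, and never reused.

```python
import secrets

def xor_bytes(data, key):
    """Combine each message byte with the matching key byte.
    XOR is its own inverse, so the same function both encrypts
    and decrypts."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # the shared secret key

ciphertext = xor_bytes(message, key)     # sender encrypts
recovered = xor_bytes(ciphertext, key)   # receiver decrypts with the SAME key
print(recovered)  # → b'attack at dawn'
```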

Public key encryption

Illustration of how a file or document is sent using Public key encryption.

In public-key encryption schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read.[4] Public-key encryption was first described in a secret document in 1973;[5] before then all encryption schemes were symmetric-key (also called private-key).[2]:478
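
The mechanics can be sketched with textbook RSA using tiny primes. This is hopelessly insecure and for illustration only; real keys are thousands of bits long and use padding schemes.

```python
# Textbook RSA with toy numbers (illustration only, not secure).
p, q = 61, 53
n = p * q                 # the modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                    # public exponent: published for anyone to use
d = pow(e, -1, phi)       # private exponent: kept by the receiver only

m = 42                    # the "message", a number smaller than n
c = pow(m, e, n)          # anyone can encrypt with the public key (e, n)
print(pow(c, d, n))       # only the holder of d can decrypt → 42
```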

A publicly available public key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code; it was purchased by Symantec in 2010 and is regularly updated.[6]

Uses of encryption

Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed utilized encryption for some of their data in transit, and 53% utilized encryption for some of their data in storage.[7] Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years there have been numerous reports of confidential data such as customers' personal records being exposed through loss or theft of laptops or backup drives. Encrypting such files at rest helps protect them should physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), are another somewhat different example of using encryption on data at rest.[8]

Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years.[9] Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users.[10]

Message verification

Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See, e.g., traffic analysis, TEMPEST, or Trojan horse.[11]
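
MAC verification can be sketched with Python's standard hmac module (the key and messages here are invented for illustration): the receiver recomputes the tag with the shared key, and any tampering with the message changes it.

```python
import hmac
import hashlib

key = b"shared secret key"
message = b"transfer 100 to account 42"

# Sender computes a tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # → True

# A tampered message produces a different tag, so verification fails.
forged = hmac.new(key, b"transfer 999 to account 13",
                  hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, forged))    # → False
```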

Digital signature and encryption must be applied to the ciphertext when it is created (typically on the same device used to compose the message) to avoid tampering; otherwise any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has not been tampered with.

For other uses, see Nonsense (disambiguation).

Nonsense is a communication, via speech, writing, or any other symbolic system, that lacks any coherent meaning. Sometimes in ordinary usage, nonsense is synonymous with absurdity or the ridiculous. Many poets, novelists and songwriters have used nonsense in their works, often creating entire works using it for reasons ranging from pure comic amusement or satire to illustrating a point about language or reasoning. In the philosophy of language and philosophy of science, nonsense is distinguished from sense or meaningfulness, and attempts have been made to come up with a coherent and consistent method of distinguishing sense from nonsense. It is also an important field of study in cryptography regarding separating a signal from noise.


Literary nonsense

A Book of Nonsense (ca. 1875 James Miller edition) by Edward Lear

Main article: Literary nonsense

The phrase "Colorless green ideas sleep furiously" was coined by Noam Chomsky as an example of nonsense. However, it can easily be confused with poetic symbolism. The individual words make sense and are arranged according to proper grammatical rules, yet the result is nonsense. The inspiration for this attempt at creating verbal nonsense came from the idea of contradiction and of seemingly irrelevant and/or incompatible characteristics, which conspire to make the phrase meaningless, yet open to interpretation. The phrase "the square root of Tuesday" (not a similar example; "the lemondrop sunshine" is more comparable) operates on the latter principle. This principle is behind the inscrutability of the kōan "What is the sound of one hand clapping?", where one hand would presumably be insufficient for clapping without the intervention of another.

James Joyce's final novel Finnegans Wake also uses nonsense: full of portmanteau and strong words, it appears to be pregnant with multiple layers of meaning, but in many passages it is difficult to say whether any one human's interpretation of a text could be the intended or unintended one.

Nonsense verse

Jabberwocky, a poem (of nonsense verse) found in Through the Looking-Glass, and What Alice Found There by Lewis Carroll (1871), is a nonsense poem written in the English language. The word jabberwocky is also occasionally used as a synonym of nonsense.

Nonsense verse is the verse form of literary nonsense, a genre that can manifest in many other ways. Its best-known exponent is Edward Lear, author of The Owl and the Pussycat and hundreds of limericks.

Nonsense verse is part of a long line of tradition predating Lear: the nursery rhyme Hey Diddle Diddle could also be termed a nonsense verse. There are also some works which appear to be nonsense verse, but actually are not, such as the popular 1940s song Mairzy Doats.

Lewis Carroll, seeking a nonsense riddle, once posed the question "How is a raven like a writing desk?" Someone answered him, "Because Poe wrote on both." However, there are other possible answers (e.g. both have inky quills).


Lines of nonsense frequently figure in the refrains of folksongs, where nonsense riddles and knock-knock jokes are often encountered.

Examples

The first verse of Jabberwocky by Lewis Carroll;

'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.

The first four lines of On the Ning Nang Nong by Spike Milligan;[1]

On the Ning Nang Nong
Where the cows go Bong!
and the monkeys all say BOO!
There's a Nong Nang Ning

The first verse of Spirk Troll-Derisive by James Whitcomb Riley;[2]

The Crankadox leaned o'er the edge of the moon,
And wistfully gazed on the sea
Where the Gryxabodill madly whistled a tune
To the air of "Ti-fol-de-ding-dee."

The first four lines of The Mayor of Scuttleton by Mary Mapes Dodge;[2]

The Mayor of Scuttleton burned his nose
Trying to warm his copper toes;
He lost his money and spoiled his will
By signing his name with an icicle quill;

The first four lines of Oh Freddled Gruntbuggly by Prostetnic Vogon Jeltz, a creation of Douglas Adams;

Oh freddled gruntbuggly,
Thy micturations are to me
As plurdled gabbleblotchits on a lurgid bee.
Groop I implore thee, my foonting turlingdromes


Philosophy of language and of science

Further information: Sense

In the philosophy of language and the philosophy of science, nonsense refers to a lack of sense or meaning. Different technical definitions of meaning delineate sense from nonsense.

Logical positivism

Further information: Logical positivism

Wittgenstein

Further information: Ludwig Wittgenstein

In Ludwig Wittgenstein's writings, the word "nonsense" carries a special technical meaning which differs significantly from the normal use of the word. In this sense, "nonsense" does not refer to meaningless gibberish, but rather to the lack of sense in the context of sense and reference. In this context, logical tautologies and purely mathematical propositions may be regarded as "nonsense". For example, "1+1=2" is a nonsensical proposition.[3] Wittgenstein wrote in Tractatus Logico-Philosophicus that some of the propositions contained in his own book should be regarded as nonsense.[4] Used in this way, "nonsense" does not necessarily carry negative connotations.

Starting from Wittgenstein, but from an original perspective, the Italian philosopher Leonardo Vittorio Arena, in his book Nonsense as the meaning, highlights this positive meaning of nonsense as a way to undermine every philosophical conception that does not take note of the absolute lack of meaning of the world and of life. Nonsense implies the destruction of all views or opinions, in the wake of the Indian Buddhist philosopher Nagarjuna. In the name of nonsense, the conception of duality and Aristotelian formal logic are finally refused.

Cryptography


The problem of distinguishing sense from nonsense is important in cryptography and other intelligence fields, which need to distinguish signal from noise. Cryptanalysts have devised algorithms to determine whether a given text is in fact nonsense or not. These algorithms typically analyze the presence of repetitions and redundancy in a text; in meaningful texts, certain frequently used words recur, for example, the, is and and in a text in the English language. A random scattering of letters, punctuation marks and spaces does not exhibit these regularities. Zipf's law attempts to state this analysis mathematically. By contrast, cryptographers typically seek to make their ciphertexts resemble random distributions, to avoid telltale repetitions and patterns which may give an opening for cryptanalysis.
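The frequency test described above can be sketched in a few lines: score a text by the fraction of its words that are very common English function words, so that meaningful prose scores high and a random scattering of letters scores near zero. The word list and scoring function are simplified illustrations, not a real cryptanalytic tool.

```python
import random
import string

# A tiny set of very common English function words.
COMMON = {"the", "is", "and", "of", "to", "a", "in", "that"}

def sense_score(text: str) -> float:
    """Fraction of whitespace-separated tokens that are common English words."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(string.punctuation) in COMMON for w in words) / len(words)

meaningful = "the cat is on the mat and the dog is in the garden"
rng = random.Random(0)
gibberish = " ".join(
    "".join(rng.choice(string.ascii_lowercase) for _ in range(5)) for _ in range(12)
)
# Meaningful English text contains far more common words than random letters.
assert sense_score(meaningful) > sense_score(gibberish)
```

Real detectors use richer statistics (letter n-gram frequencies, entropy estimates), but the principle is the same: natural language is redundant in measurable ways that random noise is not.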

It is harder for cryptographers to deal with the presence or absence of meaning in a text in which the level of redundancy and repetition is higher than found in natural languages (for example, in the mysterious text of the Voynich manuscript).

Teaching machines to talk nonsense

Scientists have attempted to teach machines to produce nonsense. The Markov chain technique is one method which has been used, via algorithmic and randomizing techniques, to generate texts that seem meaningful. Another method is sometimes called the Mad Libs method: it involves creating templates for various sentence structures and filling in the blanks with noun phrases or verb phrases; these phrase-generation procedures can be looped to add recursion, giving the output the appearance of greater complexity and sophistication. Racter was a computer program which generated nonsense texts by this method; however, Racter's book, The Policeman's Beard is Half Constructed, proved to have been the product of heavy human editing of the program's output.
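The Markov chain technique can be sketched as a word-level model: record which words follow which in a corpus, then walk the chain at random. Every adjacent pair in the output occurred somewhere in the training text, so the result is locally plausible yet globally meaningless. The tiny corpus and function names below are invented for illustration.

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def babble(chain: dict, start: str, length: int, seed: int = 0) -> str:
    """Random-walk the chain to produce locally plausible nonsense."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the word never appeared mid-corpus
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the owl and the pussycat went to sea the owl sang to the small guitar"
chain = build_chain(corpus)
text = babble(chain, "the", 8)
# Every adjacent pair in the output was seen in the corpus.
pairs = zip(text.split(), text.split()[1:])
assert all(b in chain[a] for a, b in pairs)
```

Longer contexts (pairs or triples of words as states) make the output read more fluently, at the cost of reproducing longer verbatim stretches of the corpus.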


The war drew in all the world's economic great powers,[6] assembled in two opposing alliances: the Allies (based on the Triple Entente of the United Kingdom, France and the Russian Empire) and the Central Powers of Germany and Austria-Hungary. Although Italy had also been a member of the Triple Alliance alongside Germany and Austria-Hungary, it did not join the Central Powers, as Austria-Hungary had taken the offensive against the terms of the alliance.[7] These alliances were reorganised and expanded as more nations entered the war: Italy, Japan and the United States joined the Allies, and the Ottoman Empire and Bulgaria the Central Powers. More than 70 million military personnel, including 60 million Europeans, were mobilised in one of the largest wars in history.[8][9] The trigger for war was the 28 June 1914 assassination of Archduke Franz Ferdinand of Austria, heir to the throne of Austria-Hungary, by Yugoslav nationalist Gavrilo Princip in Sarajevo. This set off a diplomatic crisis when Austria-Hungary delivered an ultimatum to the Kingdom of Serbia,[10][11] and entangled international alliances formed over the previous decades were invoked. Within weeks, the major powers were at war and the conflict soon spread around the world.

On 28 July, the Austro-Hungarians declared war on Serbia and subsequently invaded.[12][13] As Russia mobilised in support of Serbia, Germany invaded neutral Belgium and Luxembourg before moving towards France, leading Britain to declare war on Germany. After the German march on Paris was halted, what became known as the Western Front settled into a battle of attrition, with a trench line that would change little until 1917. Meanwhile, on the Eastern Front, the Russian army was successful against the Austro-Hungarians, but was stopped in its invasion of East Prussia by the Germans. In November 1914, the Ottoman Empire joined the Central Powers, opening fronts in the Caucasus, Mesopotamia and the Sinai. Italy joined the Allies in 1915 and Bulgaria joined the Central Powers in the same year, while Romania joined the Allies in 1916, and the United States joined the Allies in 1917.

The Russian government collapsed in March 1917, and a subsequent revolution in November brought the Russians to terms with the Central Powers via the Treaty of Brest-Litovsk, which constituted a massive German victory until nullified by the 1918 victory of the Western Allies. After a stunning spring 1918 German offensive along the Western Front, the Allies rallied and drove back the Germans in a series of successful offensives. On 4 November 1918, the Austro-Hungarian Empire agreed to an armistice, and Germany, which had its own trouble with revolutionaries, agreed to an armistice on 11 November 1918, ending the war in victory for the Allies.

By the end of the war, the German Empire, Russian Empire, Austro-Hungarian Empire and the Ottoman Empire had ceased to exist. National borders were redrawn, with several independent nations restored or created, and Germany's colonies were parceled out among the winners. During the Paris Peace Conference of 1919, the Big Four (Britain, France, the United States and Italy) imposed their terms in a series of treaties. The League of Nations was formed with the aim of preventing any repetition of such a conflict. This aim, however, failed, with weakened states, economic depression, renewed European nationalism, and the German feeling of humiliation contributing to the rise of Nazism. These conditions eventually contributed to World War II.


Etymology

From the time of its start until the approach of World War II, it was called simply the World War or the Great War and thereafter the First World War or World War I.[14][15]

In Canada, Maclean's Magazine in October 1914 said, "Some wars name themselves. This is the Great War."[16] During the Interwar period (1918–1939), the war was most often called the World War and the Great War in English-speaking countries.

The term "First World War" was first used in September 1914 by the German philosopher Ernst Haeckel, who claimed that "there is no doubt that the course and character of the feared 'European War' ... will become the first world war in the full sense of the word."[17] After the onset of the Second World War in 1939, the terms World War I or the First World War became standard, with British and Canadian historians favouring the First World War, and Americans World War I.

Background

Main article: Causes of World War I


Military alliances leading to World War I; Triple Entente in green; Central Powers in brown

Political and military alliances

In the 19th century, the major European powers had gone to great lengths to maintain a balance of power throughout Europe, resulting in the existence of a complex network of political and military alliances throughout the continent by 1900.[7] These had started in 1815, with the Holy Alliance between Prussia, Russia, and Austria. Then, in October 1873, German Chancellor Otto von Bismarck negotiated the League of the Three Emperors (German: Dreikaiserbund) between the monarchs of Austria-Hungary, Russia and Germany. This agreement failed because Austria-Hungary and Russia could not agree over Balkan policy, leaving Germany and Austria-Hungary in an alliance formed in 1879, called the Dual Alliance. This was seen as a method of countering Russian influence in the Balkans as the Ottoman Empire continued to weaken.[7] In 1882, this alliance was expanded to include Italy in what became the Triple Alliance.[18]

Bismarck had especially worked to hold Russia at Germany's side to avoid a two-front war with France and Russia. When Wilhelm II ascended to the throne as German Emperor (Kaiser), Bismarck was compelled to retire and his system of alliances was gradually de-emphasised. For example, the Kaiser refused to renew the Reinsurance Treaty with Russia in 1890. Two years later, the Franco-Russian Alliance was signed to counteract the force of the Triple Alliance. In 1904, Britain signed a series of agreements with France, the Entente Cordiale, and in 1907, Britain and Russia signed the Anglo-Russian Convention. While these agreements did not formally ally Britain with France or Russia, they made British entry into any future conflict involving France or Russia a possibility, and the system of interlocking bilateral agreements became known as the Triple Entente.[7]

Arms race

German industrial and economic power had grown greatly after unification and the foundation of the Empire in 1871 following the Franco-Prussian War. From the mid-1890s on, the government of Wilhelm II used this base to devote significant economic resources to building up the Kaiserliche Marine (Imperial German Navy), established by Admiral Alfred von Tirpitz, in rivalry with the British Royal Navy for world naval supremacy.[19] As a result, each nation strove to out-build the other in capital ships. With the launch of HMS Dreadnought in 1906, the British Empire expanded on its significant advantage over its German rival.[19] The arms race between Britain and Germany eventually extended to the rest of Europe, with all the major powers devoting their industrial base to producing the equipment and weapons necessary for a pan-European conflict.[20] Between 1908 and 1913, the military spending of the European powers increased by 50%.[21]

Sarajevo citizens reading a poster with the proclamation of the Austrian annexation in 1908.

Conflicts in the Balkans

Austria-Hungary precipitated the Bosnian crisis of 1908–1909 by officially annexing the former Ottoman territory of Bosnia and Herzegovina, which it had occupied since 1878. This angered the Kingdom of Serbia and its patron, the Pan-Slavic and Orthodox Russian Empire.[22] Russian political manoeuvring in the region destabilised peace accords, which were already fracturing in what was known as the "powder keg of Europe".[22] In 1912 and 1913, the First Balkan War was fought between the Balkan League and the fracturing Ottoman Empire. The resulting Treaty of London further shrank the Ottoman Empire, creating an independent Albanian state while enlarging the territorial holdings of Bulgaria, Serbia, Montenegro, and Greece. When Bulgaria attacked Serbia and Greece on 16 June 1913, it lost most of Macedonia to Serbia and Greece and Southern Dobruja to Romania in the 33-day Second Balkan War, further destabilising the region.[23]

Prelude

This picture is usually associated with the arrest of Gavrilo Princip, although some[24][25] believe it depicts Ferdinand Behr, a bystander.

Sarajevo assassination

Main article: Assassination of Archduke Franz Ferdinand

On 28 June 1914, Austrian Archduke Franz Ferdinand visited the Bosnian capital, Sarajevo. A group of six assassins (Cvjetko Popović, Gavrilo Princip, Muhamed Mehmedbašić, Nedeljko Čabrinović, Trifko Grabež, Vaso Čubrilović) from the nationalist group Mlada Bosna, supplied by the Black Hand, had gathered on the street where the Archduke's motorcade would pass. Čabrinović threw a grenade at the car, but missed. Some nearby were injured by the blast, but Franz Ferdinand's convoy carried on. The other assassins failed to act as the cars drove past them. About an hour later, when Franz Ferdinand was returning from a visit at the Sarajevo Hospital with those wounded in the assassination attempt, the convoy took a wrong turn into a street where, by coincidence, Princip stood. With a pistol, Princip shot and killed Franz Ferdinand and his wife Sophie. The reaction among the people in Austria was mild, almost indifferent. As historian Zbyněk Zeman later wrote, "the event almost failed to make any impression whatsoever. On Sunday and Monday (28 and 29 June), the crowds in Vienna listened to music and drank wine, as if nothing had happened."[26][27]

Crowds on the streets in the aftermath of the anti-Serb riots in Sarajevo, 29 June 1914.

Escalation of violence in Bosnia and Herzegovina

Main articles: Anti-Serb riots in Sarajevo and Schutzkorps

However, in Sarajevo itself, Austrian authorities encouraged violence against the Serb residents, which resulted in anti-Serb riots in Sarajevo, in which Croats and Bosnian Muslims killed two ethnic Serbs and damaged numerous Serb-owned buildings.[28][29] The events have been described as having the characteristics of a pogrom. Writer Ivo Andrić referred to the violence as the "Sarajevo frenzy of hate."[30] Violent actions against ethnic Serbs were organized not only in Sarajevo, but also in many other large Austro-Hungarian cities in modern-day Croatia, and Bosnia and Herzegovina.[31] Austro-Hungarian authorities in Bosnia and Herzegovina imprisoned and extradited approximately 5,500 prominent Serbs, 700 to 2,200 of whom died in prison. A further 460 Serbs were sentenced to death, and a predominantly Muslim special militia known as the Schutzkorps was established and carried out the persecution of Serbs.[32][33][34][35]

July Crisis

Main article: July Crisis


The assassination led to a month of diplomatic manoeuvring between Austria-Hungary, Germany, Russia, France, and Britain, called the July Crisis. Believing correctly that Serbian officials (especially the officers of the Black Hand) were involved in the plot to murder the Archduke, and wanting to finally end Serbian interference in Bosnia,[36] Austria-Hungary delivered to Serbia on 23 July the July Ultimatum, a series of ten demands that were made intentionally unacceptable, in an effort to provoke a war with Serbia.[37] The next day, after the Council of Ministers of Russia was held under the chairmanship of the Tsar at Krasnoe Selo, Russia ordered general mobilization for the Odessa, Kiev, Kazan and Moscow military districts and the fleets of the Baltic and the Black Sea. It also asked for other regions to accelerate preparations for general mobilization. Serbia decreed general mobilization on the 25th and that night declared that it accepted all the terms of the ultimatum, except the one demanding that Austrian investigators visit the country. Following this, Austria broke off diplomatic relations with Serbia, and the next day ordered a partial mobilization. Finally, on 28 July 1914, Austria-Hungary declared war on Serbia.

On 29 July, Russia, in support of its Serb protégé, unilaterally declared – outside of the conciliation procedure provided by the Franco-Russian military agreements – partial mobilization against Austria-Hungary. German Chancellor Bethmann-Hollweg was then allowed until the 31st for an appropriate response. On the 30th, Russia ordered general mobilization against Germany. In response, the following day, Germany declared a "state of danger of war." This also led to the general mobilization in Austria-Hungary on 4 August. Kaiser Wilhelm II asked his cousin, Tsar Nicholas II, to suspend the Russian general mobilization. When he refused, Germany issued an ultimatum demanding the halt of its mobilization and a commitment not to support Serbia. Another ultimatum was sent to France, asking her not to support Russia if it were to come to the defence of Serbia. On 1 August, after the Russian response, Germany mobilized and declared war on Russia.

The German government issued demands that France remain neutral, as they had to decide which deployment plan to implement, it being difficult if not impossible to change the deployment whilst it was underway. The modified German Schlieffen Plan, Aufmarsch II West, would deploy 80% of the army in the west, while Aufmarsch I Ost and Aufmarsch II Ost would deploy 60% in the west and 40% in the east, as this was the maximum that the East Prussian railway infrastructure could carry. The French did not respond, but sent a mixed message by ordering their troops to withdraw 10 km (6 mi) from the border to avoid any incidents while ordering the mobilisation of her reserves. Germany responded by mobilising its own reserves and implementing Aufmarsch II West. Germany attacked Luxembourg on 2 August and on 3 August declared war on France.[38] On 4 August, after Belgium refused to permit German troops to cross its borders into France, Germany declared war on Belgium as well.[38][39][40] Britain declared war on Germany at 19:00 UTC on 4 August 1914 (effective from 11 pm), following an "unsatisfactory reply" to the British ultimatum that Belgium must be kept neutral.[41]

Progress of the war

Opening hostilities

Confusion among the Central Powers

The strategy of the Central Powers suffered from miscommunication. Germany had promised to support Austria-Hungary's invasion of Serbia, but interpretations of what this meant differed. Previously tested deployment plans had been replaced early in 1914, but had never been tested in exercises. Austro-Hungarian leaders believed Germany would cover its northern flank against Russia.[42] Germany, however, envisioned Austria-Hungary directing most of its troops against Russia, while Germany dealt with France. This confusion forced the Austro-Hungarian Army to divide its forces between the Russian and Serbian fronts.

Serbian campaign


Serbian Army Blériot XI "Oluj", 1915.

Main article: Serbian Campaign (World War I)

Austria invaded and fought the Serbian army at the Battle of Cer and Battle of Kolubara beginning on 12 August. Over the next two weeks, Austrian attacks were thrown back with heavy losses, which marked the first major Allied victories of the war and dashed Austro-Hungarian hopes of a swift victory. As a result, Austria had to keep sizable forces on the Serbian front, weakening its efforts against Russia.[43] Serbia's defeat of the Austro-Hungarian invasion of 1914 counts among the major upset victories of the last century.[44]

German forces in Belgium and France

Main article: Western Front (World War I)

British hospital at the Western Front.

At the outbreak of World War I, 80% of the German army (consisting in the west of seven field armies) was deployed in the west according to the plan Aufmarsch II West. However, they were then assigned to execute the retired deployment plan Aufmarsch I West, also known as the Schlieffen Plan. This would march German armies through northern Belgium and into France, in an attempt to encircle the French army and then breach the 'second defensive area' of the fortresses of Verdun and Paris and the Marne river.[10]


Aufmarsch I West was one of four deployment plans available to the German General Staff in 1914, each plan favouring, but not specifying, a certain operation that was well known to the officers expected to carry it out under their own initiative with minimal oversight. Aufmarsch I West, designed for a one-front war with France, had been retired once it became clear that it was irrelevant to the wars Germany could expect to face; both Russia and Britain were expected to help France, and there was no possibility of Italian or Austro-Hungarian troops being available for operations against France. But despite its unsuitability, and the availability of more sensible and decisive options, it retained a certain allure due to its offensive nature and the pessimism of pre-war thinking, which expected offensive operations to be short-lived, costly in casualties and unlikely to be decisive. Accordingly, the Aufmarsch II West deployment was changed for the offensive of 1914, despite its unrealistic goals and the insufficient forces Germany had available for decisive success.[45]

The plan called for the right flank of the German advance to bypass the French armies concentrated on the Franco-German border, defeat the French forces closer to Luxembourg and Belgium and move south to Paris. Initially the Germans were successful, particularly in the Battle of the Frontiers (14–24 August). By 12 September, the French, with assistance from the British Expeditionary Force (BEF), halted the German advance east of Paris at the First Battle of the Marne (5–12 September), and pushed the German forces back some 50 km (31 mi). The French offensive into southern Alsace, launched on 20 August with the Battle of Mulhouse, had limited success.

German soldiers in a railway goods wagon on the way to the front in 1914. Early in the war, all sides expected the conflict to be a short one.

In the east, the Russians invaded with two armies. In response, Germany rapidly moved the 8th Field Army from its previous role as reserve for the invasion of France to East Prussia by rail across the German Empire. This army, led by General Paul von Hindenburg, defeated Russia in a series of battles collectively known as the First Battle of Tannenberg (17 August – 2 September). While the Russian invasion failed, it caused the diversion of German troops to the east, allowing the tactical Allied victory at the First Battle of the Marne. This meant that Germany failed to achieve its objective of avoiding a long two-front war. However, the German army had fought its way into a good defensive position inside France and effectively halved France's supply of coal. It had also killed or permanently crippled 230,000 more French and British troops than it itself had lost. Despite this, communications problems and questionable command decisions cost Germany the chance of a more decisive outcome.[46]

Asia and the Pacific

Military recruitment in Melbourne, Australia, 1914.

Main article: Asian and Pacific theatre of World War I

New Zealand occupied German Samoa (later Western Samoa) on 30 August 1914. On 11 September, the Australian Naval and Military Expeditionary Force landed on the island of Neu Pommern (later New Britain), which formed part of German New Guinea. On 28 October, the German cruiser SMS Emden sank the Russian cruiser Zhemchug in the Battle of Penang. Japan seized Germany's Micronesian colonies and, after the Siege of Tsingtao, the German coaling port of Qingdao on the Chinese Shandong peninsula. As Vienna refused to withdraw the Austro-Hungarian cruiser SMS Kaiserin Elisabeth from Tsingtao, Japan declared war not only on Germany, but also on Austria-Hungary; the ship participated in the defense of Tsingtao, where it was sunk in November 1914.[47] Within a few months, the Allied forces had seized all the German territories in the Pacific; only isolated commerce raiders and a few holdouts in New Guinea remained.[48][49]

African campaigns

Main article: African theatre of World War I


Some of the first clashes of the war involved British, French, and German colonial forces in Africa. On 6–7 August, French and British troops invaded the German protectorates of Togoland and Kamerun. On 10 August, German forces in South-West Africa attacked South Africa; sporadic and fierce fighting continued for the rest of the war. The German colonial forces in German East Africa, led by Colonel Paul von Lettow-Vorbeck, fought a guerrilla warfare campaign during World War I and only surrendered two weeks after the armistice took effect in Europe.[50]

Indian support for the Allies

Further information: Third Anglo-Afghan War and Hindu–German Conspiracy

Contrary to British fears of a revolt in India, the outbreak of the war saw an unprecedented outpouring of loyalty and goodwill towards Britain.[51][52] Indian political leaders from the Indian National Congress and other groups were eager to support the British war effort, since they believed that strong support for the war effort would further the cause of Indian Home Rule. The Indian Army in fact outnumbered the British Army at the beginning of the war; about 1.3 million Indian soldiers and labourers served in Europe, Africa, and the Middle East, while the central government and the princely states sent large supplies of food, money, and ammunition. In all, 140,000 men served on the Western Front and nearly 700,000 in the Middle East. Casualties of Indian soldiers totalled 47,746 killed and 65,126 wounded during World War I.[53] The suffering engendered by the war, as well as the failure of the British government to grant self-government to India after the end of hostilities, bred disillusionment and fuelled the campaign for full independence that would be led by Mohandas K. Gandhi and others.

Western Front

Main article: Western Front (World War I)

Trench warfare begins

Royal Irish Rifles in a communications trench, first day on the Somme, 1916.

Military tactics before World War I had failed to keep pace with advances in technology and had become obsolete. These advances had allowed the creation of strong defensive systems, which out-of-date military tactics could not break through for most of the war. Barbed wire was a significant hindrance to massed infantry advances, while artillery, vastly more lethal than in the 1870s, coupled with machine guns, made crossing open ground extremely difficult.[54] Commanders on both sides failed to develop tactics for breaching entrenched positions without heavy casualties. In time, however, technology began to produce new offensive weapons, such as gas warfare and the tank.[55]

Just after the First Battle of the Marne (5–12 September 1914), Entente and German forces repeatedly attempted manoeuvring to the north to outflank each other: this series of manoeuvres became known as the "Race to the Sea". When these outflanking efforts failed, Britain and France soon found themselves facing an uninterrupted line of entrenched German forces from Lorraine to Belgium's coast.[10] Britain and France sought to take the offensive, while Germany defended the occupied territories. Consequently, German trenches were much better constructed than those of their enemy; Anglo-French trenches were only intended to be "temporary" before their forces broke through the German defences.[56]

Both sides tried to break the stalemate using scientific and technological advances. On 22 April 1915, at the Second Battle of Ypres, the Germans (violating the Hague Convention) used chlorine gas for the first time on the Western Front. Several types of gas soon became widely used by both sides, and though it never proved a decisive, battle-winning weapon, poison gas became one of the most-feared and best-remembered horrors of the war.[57][58] Tanks were first used in combat by the British during the Battle of Flers–Courcelette (part of the Battle of the Somme) on 15 September 1916, with only partial success. However, their effectiveness would grow as the war progressed; the Germans employed only small numbers of their own design, supplemented by captured Allied tanks.

French 87th regiment near Verdun, 1916.

Continuation of trench warfare

Canadian troops advancing with a British Mark II tank at the Battle of Vimy Ridge, 1917.

Neither side proved able to deliver a decisive blow for the next two years. Throughout 1915–17, the British Empire and France suffered more casualties than Germany, because of both the strategic and tactical stances chosen by the sides. Strategically, while the Germans only mounted one major offensive, the Allies made several attempts to break through the German lines.

In February 1916 the Germans attacked the French defensive positions at Verdun. Lasting until December 1916, the battle saw initial German gains, before French counter-attacks returned matters to near their starting point. Casualties were greater for the French, but the Germans bled heavily as well, with anywhere from 700,000[59] to 975,000[60] casualties suffered between the two combatants. Verdun became a symbol of French determination and self-sacrifice.[61]

The Battle of the Somme was an Anglo-French offensive that ran from July to November 1916. The opening of this offensive (1 July 1916) saw the British Army endure the bloodiest day in its history, suffering 57,470 casualties, including 19,240 dead, on the first day alone. The entire Somme offensive cost the British Army some 420,000 casualties. The French suffered another estimated 200,000 casualties and the Germans an estimated 500,000.[62]

Protracted action at Verdun throughout 1916,[63] combined with the bloodletting at the Somme, brought the exhausted French army to the brink of collapse. Futile attempts at frontal assault came at a high price for both the British and the French and led to the widespread French Army Mutinies, after the failure of the costly Nivelle Offensive of April–May 1917.[64] The concurrent British Battle of Arras was more limited in scope, and more successful, although ultimately of little strategic value.[65][66] A smaller part of the Arras offensive, the capture of Vimy Ridge by the Canadian Corps, became highly significant to that country: the idea that Canada's national identity was born out of the battle is an opinion widely held in military and general histories of Canada.[67][68]

The last large-scale offensive of this period was a British attack (with French support) at Passchendaele (July–November 1917). This offensive opened with great promise for the Allies, before bogging down in the October mud. Casualties, though disputed, were roughly equal, at some 200,000–400,000 per side.

These years of trench warfare in the West saw no major exchanges of territory and, as a result, are often thought of as static and unchanging. However, throughout this period, British, French, and German tactics constantly evolved to meet new battlefield challenges.

Naval war

Battleships of the Hochseeflotte, 1917.

Main article: Naval warfare of World War I

At the start of the war, the German Empire had cruisers scattered across the globe, some of which were subsequently used to attack Allied merchant shipping. The British Royal Navy systematically hunted them down, though not without some embarrassment from its inability to protect Allied shipping. For example, the German detached light cruiser SMS Emden, part of the East Asia Squadron stationed at Qingdao, seized or destroyed 15 merchantmen, as well as sinking a Russian cruiser and a French destroyer. However, most of the German East Asia Squadron—consisting of the armoured cruisers Scharnhorst and Gneisenau, light cruisers Nürnberg and Leipzig and two transport ships—did not have orders to raid shipping and was instead underway to Germany when it met British warships. The German flotilla and Dresden sank two armoured cruisers at the Battle of Coronel, but was almost destroyed at the Battle of the Falkland Islands in December 1914, with only Dresden and a few auxiliaries escaping; at the Battle of Más a Tierra these too were destroyed or interned.[69]

Soon after the outbreak of hostilities, Britain began a naval blockade of Germany. The strategy proved effective, cutting off vital military and civilian supplies, although this blockade violated accepted international law codified by several international agreements of the past two centuries.[70] Britain mined international waters to prevent any ships from entering entire sections of ocean, causing danger to even neutral ships.[71] Since there was limited response to this tactic, Germany expected a similar response to its unrestricted submarine warfare.[72]

The 1916 Battle of Jutland (German: Skagerrakschlacht, or "Battle of the Skagerrak") developed into the largest naval battle of the war, the only full-scale clash of battleships during the war, and one of the largest in history. It took place on 31 May – 1 June 1916, in the North Sea off Jutland. The Kaiserliche Marine's High Seas Fleet, commanded by Vice Admiral Reinhard Scheer, squared off against the Royal Navy's Grand Fleet, led by Admiral Sir John Jellicoe. The engagement was a stand-off: the Germans, outmanoeuvred by the larger British fleet, managed to escape and inflicted more damage on the British fleet than they received. Strategically, however, the British asserted their control of the sea, and the bulk of the German surface fleet remained confined to port for the duration of the war.[73]

U-155 exhibited near Tower Bridge in London, after the 1918 Armistice.

German U-boats attempted to cut the supply lines between North America and Britain.[74] The nature of submarine warfare meant that attacks often came without warning, giving the crews of the merchant ships little hope of survival.[74][75] The United States launched a protest, and Germany changed its rules of engagement. After the sinking of the passenger ship RMS Lusitania in 1915, Germany promised not to target passenger liners, while Britain armed its merchant ships, placing them beyond the protection of the "cruiser rules", which demanded warning and placing crews in "a place of safety" (a standard that lifeboats did not meet).[76] Finally, in early 1917, Germany adopted a policy of unrestricted submarine warfare, realising that the Americans would eventually enter the war.[74][77] Germany sought to strangle Allied sea lanes before the United States could transport a large army overseas, but could maintain only five long-range U-boats on station, to limited effect.[74]

The U-boat threat lessened in 1917, when merchant ships began travelling in convoys, escorted by destroyers. This tactic made it difficult for U-boats to find targets, which significantly lessened losses; after the hydrophone and depth charges were introduced, accompanying destroyers could attack a submerged submarine with some hope of success. Convoys slowed the flow of supplies, since ships had to wait as convoys were assembled. The solution to the delays was an extensive program of building new freighters. Troopships were too fast for the submarines and did not travel the North Atlantic in convoys.[78] The U-boats had sunk more than 5,000 Allied ships, at a cost of 199 submarines.[79] World War I also saw the first use of aircraft carriers in combat, with HMS Furious launching Sopwith Camels in a successful raid against the Zeppelin hangars at Tondern in July 1918, as well as blimps for antisubmarine patrol.[80]

Southern theatres

War in the Balkans

Main articles: Balkans Campaign (World War I), Bulgaria during World War I, Serbian Campaign (World War I) and Macedonian Front

Bulgarian soldiers in a trench, preparing to fire against an incoming airplane.

Austro-Hungarian troops executing captured Serbians, 1917. Serbia lost about 850,000 people during the war, a quarter of its pre-war population.[81]

Refugee transport from Serbia in Leibnitz, Styria, 1914.

Faced with Russia, Austria-Hungary could spare only one-third of its army to attack Serbia. After suffering heavy losses, the Austrians briefly occupied the Serbian capital, Belgrade. A Serbian counter-attack in the Battle of Kolubara succeeded in driving them from the country by the end of 1914. For the first ten months of 1915, Austria-Hungary used most of its military reserves to fight Italy. German and Austro-Hungarian diplomats, however, scored a coup by persuading Bulgaria, in a convention signed at Pless on 6 September 1915, to join the attack on Serbia.[82] The Austro-Hungarian provinces of Slovenia, Croatia and Bosnia provided troops for Austria-Hungary, invading Serbia as well as fighting Russia and Italy. Montenegro allied itself with Serbia.[83]

Serbia was conquered in a little more than a month, as the Central Powers, now including Bulgaria, sent in 600,000 troops. The Serbian army, fighting on two fronts and facing certain defeat, retreated into northern Albania. The Serbs suffered defeat in the Battle of Kosovo. Montenegro covered the Serbian retreat towards the Adriatic coast in the Battle of Mojkovac on 6–7 January 1916, but ultimately the Austrians also conquered Montenegro. The surviving Serbian soldiers were evacuated by ship to Greece.[84] After its conquest, Serbia was divided between Austria-Hungary and Bulgaria.

In late 1915, a Franco-British force landed at Salonica in Greece, to offer assistance and to pressure its government to declare war against the Central Powers. However, the pro-German King Constantine I dismissed the pro-Allied government of Eleftherios Venizelos before the Allied expeditionary force arrived.[85] The friction between the King of Greece and the Allies continued to accumulate with the National Schism, which effectively divided Greece between regions still loyal to the king and the new provisional government of Venizelos in Salonica. After intense negotiations and an armed confrontation in Athens between Allied and royalist forces (an incident known as Noemvriana), the King of Greece resigned and his second son Alexander took his place; Greece then officially joined the war on the side of the Allies.

In the beginning, the Macedonian Front was mostly static. French and Serbian forces retook limited areas of Macedonia by recapturing Bitola on 19 November 1916 following the costly Monastir Offensive, which brought stabilization of the front.[86]

Serbian and French troops finally made a breakthrough in September 1918, after most of the German and Austro-Hungarian troops had been withdrawn. The Bulgarians suffered their only defeat of the war at the Battle of Dobro Pole. Bulgaria capitulated two weeks later, on 29 September 1918.[87] The German high command responded by despatching troops to hold the line, but these forces were far too weak to reestablish a front.[88]

The disappearance of the Macedonian Front meant that the road to Budapest and Vienna was now opened to Allied forces. Hindenburg and Ludendorff concluded that the strategic and operational balance had now shifted decidedly against the Central Powers and, a day after the Bulgarian collapse, insisted on an immediate peace settlement.[89]

Ottoman Empire

Main article: Middle Eastern theatre of World War I

Mustafa Kemal Atatürk at the trenches of Gallipoli during the Gallipoli Campaign.

Ottoman 3rd Army troopers with winter gear.

British artillery battery on Mount Scopus in the Battle of Jerusalem, 1917.

Russian forest trench at the 1914–1915 Battle of Sarikamish.

The Ottoman Empire joined the Central Powers in the war with the secret Ottoman–German Alliance, signed in August 1914.[90] The Ottomans threatened Russia's Caucasian territories and Britain's communications with India via the Suez Canal.

As the conflict progressed, the Ottoman Empire took advantage of the European powers' preoccupation with the war and conducted large-scale ethnic cleansing of the indigenous Greek, Assyrian and Armenian Christian populations, known as the Greek genocide, Assyrian genocide and Armenian genocide.[91][92][93]

The British and French opened overseas fronts with the Gallipoli (1915) and Mesopotamian campaigns (1914). In Gallipoli, the Ottoman Empire successfully repelled the British, French, and Australian and New Zealand Army Corps (ANZACs). In Mesopotamia, by contrast, after the disastrous Siege of Kut (1915–16), British Imperial forces reorganised and captured Baghdad in March 1917. The British were aided in Mesopotamia by local Arab and Assyrian tribesmen, while the Ottomans employed local Kurdish and Turcoman tribes.[94]

Further to the west, the Suez Canal was defended from Ottoman attacks in 1915 and 1916; in August, a German and Ottoman force was defeated at the Battle of Romani by the ANZAC Mounted Division and the 52nd (Lowland) Infantry Division. Following this victory, the Egyptian Expeditionary Force advanced across the Sinai Peninsula, pushing Ottoman forces back in the Battle of Magdhaba in December and the Battle of Rafa, on the border between the Egyptian Sinai and Ottoman Palestine, in January 1917.[95]

Xmas card from British Mesopotamian Expeditionary Force with list of engagements, Basra, 1917

Russian armies generally saw success in the Caucasus. Enver Pasha, supreme commander of the Ottoman armed forces, was ambitious and dreamed of re-conquering central Asia and areas that had been lost to Russia previously. He was, however, a poor commander.[96] He launched an offensive against the Russians in the Caucasus in December 1914 with 100,000 troops, insisting on a frontal attack against mountainous Russian positions in winter. He lost 86% of his force at the Battle of Sarikamish.[97]

In December 1914 the Ottoman Empire, with German support, invaded Persia (modern Iran) in an effort to cut off British and Russian access to petroleum reservoirs around Baku near the Caspian Sea.[98] Persia, ostensibly neutral, had long been under the spheres of British and Russian influence. The Ottomans and Germans were aided by Kurdish and Azeri forces, together with a large number of major Iranian tribes, such as the Qashqai, Tangistanis, Luristanis, and Khamseh, while the Russians and British had the support of Assyrian and Armenian forces. The Persian Campaign was to last until 1918 and end in failure for the Ottomans and their allies. However, the Russian withdrawal from the war in 1917 left the Armenian and Assyrian forces, who had hitherto inflicted a series of defeats upon the forces of the Ottomans and their allies, cut off from supply lines, outnumbered, outgunned and isolated, forcing them to fight and flee towards British lines in northern Mesopotamia.[99]

General Yudenich, the Russian commander from 1915 to 1916, drove the Turks out of most of the southern Caucasus with a string of victories.[97] In 1917, Russian Grand Duke Nicholas assumed command of the Caucasus front. Nicholas planned a railway from Russian Georgia to the conquered territories, so that fresh supplies could be brought up for a new offensive in 1917. However, in March 1917 (February in the pre-revolutionary Russian calendar), the Tsar abdicated in the course of the February Revolution, and the Russian Caucasus Army began to fall apart.

Instigated by the Arab Bureau of the British Foreign Office, the Arab Revolt started in June 1916 at the Battle of Mecca, led by Sherif Hussein of Mecca, and ended with the Ottoman surrender of Damascus. Fakhri Pasha, the Ottoman commander of Medina, resisted for more than two and a half years during the Siege of Medina before surrendering.[100]

Along the border of Italian Libya and British Egypt, the Senussi tribe, incited and armed by the Turks, waged a small-scale guerrilla war against Allied troops. The British were forced to dispatch 12,000 troops to oppose them in the Senussi Campaign. Their rebellion was finally crushed in mid-1916.[101]

Total Allied casualties on the Ottoman fronts amounted to 650,000 men. Total Ottoman casualties were 725,000 (325,000 dead and 400,000 wounded).[102]

Italian participation

Austro-Hungarian troops, Tyrol.

Depiction of the Battle of Doberdò, fought in August 1916 between the Italian and the Austro-Hungarian armies.

Main articles: Italian Campaign (World War I) and Albania during World War I

Further information: Battles of the Isonzo

Italy had been allied with the German and Austro-Hungarian Empires since 1882 as part of the Triple Alliance. However, the nation had its own designs on Austrian territory in Trentino, the Austrian Littoral, Fiume (Rijeka) and Dalmatia. Rome had a secret 1902 pact with France, effectively nullifying its alliance.[103] At the start of hostilities, Italy refused to commit troops, arguing that the Triple Alliance was defensive and that Austria-Hungary was an aggressor. The Austro-Hungarian government began negotiations to secure Italian neutrality, offering the French colony of Tunisia in return. The Allies made a counter-offer in which Italy would receive the Southern Tyrol, Austrian Littoral and territory on the Dalmatian coast after the defeat of Austria-Hungary. This was formalised by the Treaty of London. Further encouraged by the Allied invasion of Turkey in April 1915, Italy joined the Triple Entente and declared war on Austria-Hungary on 23 May. Fifteen months later, Italy declared war on Germany.[104]

The Italians had numerical superiority but this advantage was lost, not only because of the difficult terrain in which fighting took place, but also because of the strategies and tactics employed.[105] Field Marshal Luigi Cadorna, a staunch proponent of the frontal assault, had dreams of breaking into the Slovenian plateau, taking Ljubljana and threatening Vienna.

On the Trentino front, the Austro-Hungarians took advantage of the mountainous terrain, which favoured the defender. After an initial strategic retreat, the front remained largely unchanged, while Austrian Kaiserschützen and Standschützen engaged Italian Alpini in bitter hand-to-hand combat throughout the summer. The Austro-Hungarians counterattacked in the Altopiano of Asiago, towards Verona and Padua, in the spring of 1916 (Strafexpedition), but made little progress.[106]

Beginning in 1915, the Italians under Cadorna mounted eleven offensives on the Isonzo front along the Isonzo (Soča) River, northeast of Trieste. All eleven offensives were repelled by the Austro-Hungarians, who held the higher ground. In the summer of 1916, after the Battle of Doberdò, the Italians captured the town of Gorizia. After this minor victory, the front remained static for over a year, despite several Italian offensives, centred on the Banjšice and Karst Plateau east of Gorizia.

The Central Powers launched a crushing offensive on 26 October 1917, spearheaded by the Germans. They achieved a victory at Caporetto (Kobarid). The Italian Army was routed and retreated more than 100 kilometres (62 mi) to reorganise, stabilising the front at the Piave River. Since the Italian Army had suffered heavy losses in the Battle of Caporetto, the Italian Government called to arms the so-called '99 Boys (Ragazzi del '99): that is, all males born in 1899 or earlier, who were thus 18 years old or older. In 1918, the Austro-Hungarians failed to break through in a series of battles on the Piave and were finally decisively defeated in the Battle of Vittorio Veneto in October of that year. On 1 November, the Italian Navy destroyed much of the Austro-Hungarian fleet stationed in Pula, preventing it from being handed over to the new State of Slovenes, Croats and Serbs. On 3 November, the Italians occupied Trieste from the sea. On the same day, the Armistice of Villa Giusti was signed. By mid-November 1918, the Italian military occupied the entire former Austrian Littoral and had seized control of the portion of Dalmatia that had been guaranteed to Italy by the London Pact.[107] By the end of hostilities in November 1918,[108] Admiral Enrico Millo declared himself Italy's Governor of Dalmatia.[108] Austria-Hungary surrendered in early November 1918.[109][110]

Romanian participation

Main article: Romania during World War I

Marshal Joffre inspecting Romanian troops, 1916.

Romania had been allied with the Central Powers since 1882. When the war began, however, it declared its neutrality, arguing that because Austria-Hungary had itself declared war on Serbia, Romania was under no obligation to join the war. When the Entente Powers promised Romania large territories of eastern Hungary (Transylvania and Banat), which had a large Romanian population, in exchange for Romania's declaring war on the Central Powers, the Romanian government renounced its neutrality and, on 27 August 1916, the Romanian Army launched an attack against Austria-Hungary, with limited Russian support. The Romanian offensive was initially successful, pushing back the Austro-Hungarian troops in Transylvania, but a counterattack by the forces of the Central Powers drove back the Russo-Romanian forces.[111] As a result of the Battle of Bucharest, the Central Powers occupied Bucharest on 6 December 1916. Fighting in Moldova continued in 1917, resulting in a costly stalemate for the Central Powers.[112][113] Russian withdrawal from the war in late 1917 as a result of the October Revolution meant that Romania was forced to sign an armistice with the Central Powers on 9 December 1917.

Romanian troops during the Battle of Mărăşeşti, 1917.

In January 1918, Romanian forces established control over Bessarabia as the Russian Army abandoned the province. Although a treaty was signed by the Romanian and the Bolshevik Russian governments following talks from 5–9 March 1918 on the withdrawal of Romanian forces from Bessarabia within two months, on 27 March 1918 Romania attached Bessarabia to its territory, formally based on a resolution passed by the local assembly of that territory on its unification with Romania.[114]

Romania officially made peace with the Central Powers by signing the Treaty of Bucharest on 7 May 1918. Under that treaty, Romania was obliged to end the war with the Central Powers and make small territorial concessions to Austria-Hungary, ceding control of some passes in the Carpathian Mountains, and to grant oil concessions to Germany. In exchange, the Central Powers recognised the sovereignty of Romania over Bessarabia. The treaty was renounced in October 1918 by the Alexandru Marghiloman government, and Romania nominally re-entered the war on 10 November 1918. The next day, the Treaty of Bucharest was nullified by the terms of the Armistice of Compiègne.[115][116] Total Romanian deaths from 1914 to 1918, military and civilian, within contemporary borders, were estimated at 748,000.[117]

Eastern Front

Main article: Eastern Front (World War I)

Initial actions

Russian troops in a trench, awaiting a German attack, 1917.

While the Western Front had reached stalemate, the war continued in Eastern Europe.[118] Initial Russian plans called for simultaneous invasions of Austrian Galicia and East Prussia. Although Russia's initial advance into Galicia was largely successful, it was driven back from East Prussia by Hindenburg and Ludendorff at the Battles of Tannenberg and the Masurian Lakes in August and September 1914.[119][120] Russia's less developed industrial base and ineffective military leadership were instrumental in the events that unfolded. By the spring of 1915, the Russians had retreated to Galicia, and, in May, the Central Powers achieved a remarkable breakthrough on Poland's southern frontiers.[121] On 5 August, they captured Warsaw and forced the Russians to withdraw from Poland.

Russian Revolution

Main article: Russian Revolution

Despite the success of the June 1916 Brusilov Offensive in eastern Galicia,[122] dissatisfaction with the Russian government's conduct of the war grew. The offensive's success was undermined by the reluctance of other generals to commit their forces to support the victory. Allied and Russian forces were revived only temporarily by Romania's entry into the war on 27 August. German forces came to the aid of embattled Austro-Hungarian units in Transylvania, while a German-Bulgarian force attacked from the south, and Bucharest fell to the Central Powers on 6 December. Meanwhile, unrest grew in Russia, as the Tsar remained at the front. Empress Alexandra's increasingly incompetent rule drew protests and resulted in the murder of her favourite, Rasputin, at the end of 1916.

In March 1917, demonstrations in Petrograd culminated in the abdication of Tsar Nicholas II and the appointment of a weak Provisional Government, which shared power with the Petrograd Soviet socialists. This arrangement led to confusion and chaos both at the front and at home. The army became increasingly ineffective.[121]

Treaty of Brest-Litovsk, 1918: 1. Count Ottokar von Czernin, 2. Richard von Kühlmann, 3. Vasil Radoslavov.

Following the Tsar's abdication, Vladimir Lenin was allowed passage by train from Switzerland back into Russia, a journey financed by Germany. Discontent and the weaknesses of the Provisional Government led to a rise in the popularity of the Bolshevik Party, led by Lenin, which demanded an immediate end to the war. The November Revolution was followed in December by an armistice and negotiations with Germany. At first, the Bolsheviks refused the German terms, but when German troops began marching across Ukraine unopposed, the new government acceded to the Treaty of Brest-Litovsk on 3 March 1918. The treaty ceded vast territories, including Finland, the Baltic provinces, and parts of Poland and Ukraine, to the Central Powers.[123] Despite this enormous apparent German success, the manpower required for the German occupation of former Russian territory may have contributed to the failure of the Spring Offensive, and the occupation secured relatively little food or other materiel for the Central Powers' war effort.

With the adoption of the Treaty of Brest-Litovsk, the Entente no longer existed. The Allied powers led a small-scale invasion of Russia, partly to stop Germany from exploiting Russian resources, and to a lesser extent, to support the "Whites" (as opposed to the "Reds") in the Russian Civil War.[124] Allied troops landed in Arkhangelsk and in Vladivostok as part of the North Russia Intervention.

Czechoslovak Legion

Czechoslovak Legion, Vladivostok, 1918.

Main article: Czechoslovak Legion

The Czechoslovak Legion fought with the Entente; their goal was to win support for the independence of Czechoslovakia. The Legion in Russia was established in 1917, followed by legions in France in December 1917 (including volunteers from America) and in Italy in April 1918. Czechoslovak Legion troops defeated the Austro-Hungarian army at the Ukrainian village of Zborov in July 1917. After this success, the number of Czechoslovak legionaries increased, as did Czechoslovak military power. In the Battle of Bakhmach, the Legion defeated the Germans and forced them to make a truce.

In Russia, they were heavily involved in the Russian Civil War, fighting the Bolsheviks, at times controlling most of the Trans-Siberian railway and conquering all the major cities of Siberia. The presence of the Czechoslovak Legion near Yekaterinburg appears to have been one of the motivating forces for the Bolshevik execution of the Tsar and his family in July 1918. Legionaries arrived less than a week afterwards and captured the city. Because Russia's European ports were not safe, the corps was to be evacuated by a long detour via the port of Vladivostok. The last transport was the American ship Heffron in September 1920.

Central Powers peace overtures

"They shall not pass", a phrase typically associated with the defense of Verdun.

In December 1916, after ten brutal months of the Battle of Verdun and a successful offensive against Romania, the Germans attempted to negotiate a peace with the Allies. Soon after, the US president, Woodrow Wilson, attempted to intervene as a peacemaker, asking in a note for both sides to state their demands. Lloyd George's War Cabinet considered the German offer to be a ploy to create divisions amongst the Allies. After initial outrage and much deliberation, they took Wilson's note as a separate effort, signalling that the United States was on the verge of entering the war against Germany following the "submarine outrages". While the Allies debated a response to Wilson's offer, the Germans chose to rebuff it in favour of "a direct exchange of views". Learning of the German response, the Allied governments were free to make clear demands in their response of 14 January. They sought restoration of damages, the evacuation of occupied territories, reparations for France, Russia and Romania, and a recognition of the principle of nationalities. This included the liberation of Italians, Slavs, Romanians and Czecho-Slovaks, and the creation of a "free and united Poland". On the question of security, the Allies sought guarantees that would prevent or limit future wars, complete with sanctions, as a condition of any peace settlement.[125] The negotiations failed and the Entente powers rejected the German offer, because Germany did not state any specific proposals. To Wilson, the Entente powers stated that they would not start peace negotiations until the Central Powers evacuated all occupied Allied territories and provided indemnities for all damage which had been done.[126]

1917–1918

Developments in 1917

German film crew recording the action.

Events of 1917 proved decisive in ending the war, although their effects were not fully felt until 1918.

The British naval blockade began to have a serious impact on Germany. In response, in February 1917, the German General Staff convinced Chancellor Theobald von Bethmann-Hollweg to declare unrestricted submarine warfare, with the goal of starving Britain out of the war. German planners estimated that unrestricted submarine warfare would cost Britain a monthly shipping loss of 600,000 tons. The General Staff acknowledged that the policy would almost certainly bring the United States into the conflict, but calculated that British shipping losses would be so high that they would be forced to sue for peace after 5 to 6 months, before American intervention could make an impact. In reality, tonnage sunk rose above 500,000 tons per month from February to July. It peaked at 860,000 tons in April. After July, the newly re-introduced convoy system became extremely effective in reducing the U-boat threat. Britain was safe from starvation, while German industrial output fell and United States troops joined the war in large numbers far earlier than Germany had anticipated.


Haut-Rhin, France, 1917.

On 3 May 1917, during the Nivelle Offensive, the French 2nd Colonial Division, veterans of the Battle of Verdun, refused orders, arriving drunk and without their weapons. Their officers lacked the means to punish an entire division, and harsh measures were not immediately implemented. The French Army Mutinies eventually spread to a further 54 French divisions and saw 20,000 men desert. However, appeals to patriotism and duty, as well as mass arrests and trials, encouraged the soldiers to return to defend their trenches, although the French soldiers refused to participate in further offensive action.[127] Robert Nivelle was removed from command by 15 May, replaced by General Philippe Pétain, who suspended bloody large-scale attacks.

The victory of Austria-Hungary and Germany at the Battle of Caporetto led the Allies to convene the Rapallo Conference at which they formed the Supreme War Council to coordinate planning. Previously, British and French armies had operated under separate commands.

In December, the Central Powers signed an armistice with Russia. This released large numbers of German troops for use in the west. With German reinforcements and new American troops pouring in, the outcome was to be decided on the Western Front. The Central Powers knew that they could not win a protracted war, but they held high hopes for success based on a final quick offensive. Furthermore, the leaders of the Central Powers and the Allies became increasingly fearful of social unrest and revolution in Europe. Thus, both sides urgently sought a decisive victory.[128]


In 1917, Emperor Charles I of Austria secretly attempted separate peace negotiations with Clemenceau, through his wife's brother Sixtus in Belgium as an intermediary, without the knowledge of Germany. Italy opposed the proposals. When the negotiations failed, his attempt was revealed to Germany resulting in a diplomatic catastrophe.[129][130]

Ottoman Empire conflict, 1917–1918

Main article: Sinai and Palestine Campaign

British troops on the march in Mesopotamia, 1917.

Ottoman troops in Mesopotamia.

In March and April 1917, at the First and Second Battles of Gaza, German and Ottoman forces stopped the advance of the Egyptian Expeditionary Force, which had begun in August 1916 at the Battle of Romani.[131][132] At the end of October, the Sinai and Palestine Campaign resumed, when General Edmund Allenby's XXth Corps, XXI Corps and Desert Mounted Corps won the Battle of Beersheba.[133] Two Ottoman armies were defeated a few weeks later at the Battle of Mughar Ridge and, early in December, Jerusalem was captured following another Ottoman defeat at the Battle of Jerusalem (1917).[134][135][136] About this time, Friedrich Freiherr Kress von Kressenstein was relieved of his duties as the Eighth Army's commander, replaced by Djevad Pasha, and a few months later the commander of the Ottoman Army in Palestine, Erich von Falkenhayn, was replaced by Otto Liman von Sanders.[137][138]

Early in 1918, the front line was extended and the Jordan Valley was occupied, following the First Transjordan and the Second Transjordan attack by British Empire forces in March and April 1918.[139] During March, most of the Egyptian Expeditionary Force's British infantry and Yeomanry cavalry were sent to fight on the Western Front as a consequence of the Spring Offensive. They were replaced by Indian Army units. During several months of reorganisation and training during the summer, a number of attacks were carried out on sections of the Ottoman front line. These pushed the front line north to more advantageous positions in preparation for an attack and to acclimatise the newly arrived Indian Army infantry. It was not until the middle of September that the integrated force was ready for large-scale operations.

The reorganised Egyptian Expeditionary Force, with an additional mounted division, broke Ottoman forces at the Battle of Megiddo in September 1918. In two days the British and Indian infantry, supported by a creeping barrage, broke the Ottoman front line and captured the headquarters of the Eighth Army (Ottoman Empire) at Tulkarm, the continuous trench lines at Tabsor, Arara and the Seventh Army (Ottoman Empire) headquarters at Nablus. The Desert Mounted Corps rode through the break in the front line created by the infantry and, during virtually continuous operations by Australian Light Horse, British mounted Yeomanry, Indian Lancers and New Zealand Mounted Rifle brigades in the Jezreel Valley, they captured Nazareth, Afulah and Beisan, Jenin, along with Haifa on the Mediterranean coast and Daraa east of the Jordan River on the Hejaz railway. Samakh and Tiberias on the Sea of Galilee were captured on the way northwards to Damascus. Meanwhile, Chaytor's Force of Australian light horse, New Zealand mounted rifles, Indian, British West Indies and Jewish infantry captured the crossings of the Jordan River, Es Salt, Amman and, at Ziza, most of the Fourth Army (Ottoman Empire). The Armistice of Mudros, signed at the end of October, ended hostilities with the Ottoman Empire when fighting was continuing north of Aleppo.

Entry of the United States


Main article: American entry into World War I

President Wilson before Congress, announcing the break in official relations with Germany on 3 February 1917.

At the outbreak of the war, the United States pursued a policy of non-intervention, avoiding conflict while trying to broker a peace. When the German U-boat SM U-20 sank the British liner RMS Lusitania on 7 May 1915 with 128 Americans among the dead, President Woodrow Wilson insisted that "America is too proud to fight" but demanded an end to attacks on passenger ships. Germany complied. Wilson unsuccessfully tried to mediate a settlement. However, he also repeatedly warned that the United States would not tolerate unrestricted submarine warfare, in violation of international law. The former president Theodore Roosevelt denounced German acts as "piracy".[140] Wilson was narrowly reelected in 1916 as his supporters emphasized "he kept us out of war".

In January 1917, Germany resumed unrestricted submarine warfare, realizing it would mean American entry. The German Foreign Minister, in the Zimmermann Telegram, invited Mexico to join the war as Germany's ally against the United States. In return, the Germans would finance Mexico's war and help it recover the territories of Texas, New Mexico, and Arizona.[141] The United Kingdom intercepted the message and presented it to the US embassy in the UK. From there it made its way to President Wilson, who released the Zimmermann note to the public, and Americans saw it as casus belli. Wilson called on antiwar elements to end all wars, by winning this one and eliminating militarism from the globe. He argued that the war was so important that the US had to have a voice in the peace conference.[142] After the sinking of seven US merchant ships by submarines and the publication of the Zimmermann telegram, Wilson called for war on Germany,[143] which the US Congress declared on 6 April 1917.


The United States was never formally a member of the Allies but became a self-styled "Associated Power". The United States had a small army, but, after the passage of the Selective Service Act, it drafted 2.8 million men,[144] and, by summer 1918, was sending 10,000 fresh soldiers to France every day. In 1917, the US Congress gave US citizenship to Puerto Ricans when they were drafted to participate in World War I, as part of the Jones Act. Germany had miscalculated, believing it would be many more months before American soldiers would arrive and that their arrival could be stopped by U-boats.[145]

The United States Navy sent a battleship group to Scapa Flow to join with the British Grand Fleet, destroyers to Queenstown, Ireland, and submarines to help guard convoys. Several regiments of US Marines were also dispatched to France. The British and French wanted American units used to reinforce their troops already on the battle lines and not waste scarce shipping on bringing over supplies. General John J. Pershing, American Expeditionary Forces (AEF) commander, refused to break up American units to be used as reinforcements for British Empire and French units. As an exception, he did allow African-American combat regiments to be used in French divisions. The Harlem Hellfighters fought as part of the French 16th Division, and earned a unit Croix de Guerre for their actions at Château-Thierry, Belleau Wood, and Sechault.[146] AEF doctrine called for the use of frontal assaults, which had long since been discarded by British Empire and French commanders because of the large loss of life.[147]

German Spring Offensive of 1918

Main article: Spring Offensive

British 55th Division soldiers, blinded by tear gas during the Battle of Estaires, 10 April 1918.


French soldiers under General Gouraud, with machine guns amongst the ruins of a cathedral near the Marne, 1918.

Ludendorff drew up plans (codenamed Operation Michael) for the 1918 offensive on the Western Front. The Spring Offensive sought to divide the British and French forces with a series of feints and advances. The German leadership hoped to end the war before significant US forces arrived. The operation commenced on 21 March 1918, with an attack on British forces near Amiens. German forces achieved an unprecedented advance of 60 kilometres (37 mi).[148]

British and French trenches were penetrated using novel infiltration tactics, also named Hutier tactics after General Oskar von Hutier, by specially trained units called stormtroopers. Previously, attacks had been characterised by long artillery bombardments and massed assaults. However, in the Spring Offensive of 1918, Ludendorff used artillery only briefly and infiltrated small groups of infantry at weak points. They attacked command and logistics areas and bypassed points of serious resistance. More heavily armed infantry then destroyed these isolated positions. This German success relied greatly on the element of surprise.[149]

The front moved to within 120 kilometres (75 mi) of Paris. Three heavy Krupp railway guns fired 183 shells on the capital, causing many Parisians to flee. The initial offensive was so successful that Kaiser Wilhelm II declared 24 March a national holiday. Many Germans thought victory was near. After heavy fighting, however, the offensive was halted. Lacking tanks or motorised artillery, the Germans were unable to consolidate their gains. This situation was not helped by the now stretched supply lines as a result of their rapid advance over devastated ground.[150]


General Foch pressed to use the arriving American troops as individual replacements, whereas Pershing sought to field American units as an independent force. These units were assigned to the depleted French and British Empire commands on 28 March. A supreme command of Allied forces was created at the Doullens Conference on 26 March 1918.[151] General Foch was appointed as supreme commander of the Allied forces. Haig, Pétain, and Pershing retained tactical control of their respective armies; Foch assumed a coordinating rather than a directing role, and the British, French, and US commands operated largely independently.[151]

Following Operation Michael, Germany launched Operation Georgette against the northern English Channel ports. The Allies halted the drive after limited territorial gains by Germany. The German Army to the south then conducted Operations Blücher and Yorck, pushing broadly towards Paris. Operation Marne was launched on 15 July in an attempt to encircle Reims, beginning the Second Battle of the Marne. The resulting counterattack, which started the Hundred Days Offensive, marked the first successful Allied offensive of the war.

By 20 July, the Germans had retreated across the Marne to their starting lines,[152] having achieved little, and the German Army never regained the initiative. German casualties between March and April 1918 were 270,000, including many highly trained storm troopers.

Meanwhile, Germany was falling apart at home. Anti-war marches became frequent and morale in the army fell. Industrial output was 53% of 1913 levels.

New states under war zone

In the late spring of 1918, three new states were formed in the South Caucasus: the First Republic of Armenia, the Azerbaijan Democratic Republic, and the Democratic Republic of Georgia, which declared their independence from the Russian Empire.[153] Two other minor entities were established, the Centrocaspian Dictatorship and the South West Caucasian Republic (the former was liquidated by Azerbaijan in the autumn of 1918 and the latter by a joint Armenian-British task force in early 1919). With the withdrawal of the Russian armies from the Caucasus front in the winter of 1917–18, the three major republics braced for an imminent Ottoman advance, which commenced in the early months of 1918. Solidarity was briefly maintained when the Transcaucasian Federative Republic was created in the spring of 1918, but this collapsed in May, when the Georgians asked for and received protection from Germany and the Azerbaijanis concluded a treaty with the Ottoman Empire that was more akin to a military alliance. Armenia was left to fend for itself and struggled for five months against the threat of a full-fledged occupation by the Ottoman Turks.[153]

Allied victory: summer 1918 onwards

The Allies increased their front-line rifle strength while German strength fell by half in 1918.[154]

Hundred Days Offensive

Main articles: Hundred Days Offensive and Weimar Republic

Aerial view of ruins of Vaux-devant-Damloup, France, 1918.

The Allied counteroffensive, known as the Hundred Days Offensive, began on 8 August 1918, with the Battle of Amiens. The battle involved over 400 tanks and 120,000 British, Dominion, and French troops, and by the end of its first day a gap 15 mi (24 km) long had been created in the German lines. The defenders displayed a marked collapse in morale, causing Ludendorff to refer to this day as the "Black Day of the German army".[155][156] After an advance as far as 14 miles (23 km), German resistance stiffened, and the battle was concluded on 12 August.

Rather than continuing the Amiens battle past the point of initial success, as had been done so many times in the past, the Allies shifted their attention elsewhere. Allied leaders had now realised that to continue an attack after resistance had hardened was a waste of lives, and it was better to turn a line than to try to roll over it. They began to undertake attacks in quick order to take advantage of successful advances on the flanks, then broke them off when each attack lost its initial impetus.[157]

Canadian Scottish, advancing during the Battle of the Canal du Nord, 1918.

British and Dominion forces launched the next phase of the campaign with the Battle of Albert on 21 August.[158] The assault was widened by French[159] and then further British forces in the following days. During the last week of August the pressure along a 70-mile (113 km) front against the enemy was heavy and unrelenting. From German accounts, "Each day was spent in bloody fighting against an ever and again on-storming enemy, and nights passed without sleep in retirements to new lines."[157]

Faced with these advances, on 2 September the German Oberste Heeresleitung (OHL) issued orders to withdraw to the Hindenburg Line in the south. This ceded without a fight the salient seized the previous April.[160] According to Ludendorff "We had to admit the necessity ... to withdraw the entire front from the Scarpe to the Vesle."[161]


September saw the Allied advance to the Hindenburg Line in the north and centre. The Germans continued to fight strong rear-guard actions and launched numerous counterattacks on lost positions, but only a few succeeded, and then only temporarily. Contested towns, villages, heights, and trenches in the screening positions and outposts of the Hindenburg Line continued to fall to the Allies, with the BEF alone taking 30,441 prisoners in the last week of September. On 24 September an assault by both the British and French came within 2 miles (3.2 km) of St. Quentin.[159] The Germans had now retreated to positions at or behind the Hindenburg Line.

An American major, piloting an observation balloon near the front, 1918.

In nearly four weeks of fighting beginning 8 August, over 100,000 German prisoners were taken, 75,000 by the BEF and the rest by the French. Since the "Black Day of the German Army", the German High Command realised that the war was lost and made attempts to reach a satisfactory end. The day after that battle, Ludendorff said: "We cannot win the war any more, but we must not lose it either." On 11 August he offered his resignation to the Kaiser, who refused it, replying, "I see that we must strike a balance. We have nearly reached the limit of our powers of resistance. The war must be ended." On 13 August, at Spa, Hindenburg, Ludendorff, the Chancellor, and Foreign Minister Hintze agreed that the war could not be ended militarily and, on the following day, the German Crown Council decided that victory in the field was now most improbable. Austria and Hungary warned that they could only continue the war until December, and Ludendorff recommended immediate peace negotiations. Prince Rupprecht warned Prince Max of Baden: "Our military situation has deteriorated so rapidly that I no longer believe we can hold out over the winter; it is even possible that a catastrophe will come earlier." On 10 September Hindenburg urged peace moves to Emperor Charles of Austria, and Germany appealed to the Netherlands for mediation. On 14 September Austria sent a note to all belligerents and neutrals suggesting a meeting for peace talks on neutral soil, and on 15 September Germany made a peace offer to Belgium. Both peace offers were rejected, and on 24 September OHL informed the leaders in Berlin that armistice talks were inevitable.[159]

The final assault on the Hindenburg Line began with the Meuse-Argonne Offensive, launched by French and American troops on 26 September. The following week, cooperating French and American units broke through in Champagne at the Battle of Blanc Mont Ridge, forcing the Germans off the commanding heights, and closing towards the Belgian frontier.[162] On 8 October the line was pierced again by British and Dominion troops at the Battle of Cambrai.[163] The German army had to shorten its front and use the Dutch frontier as an anchor to fight rear-guard actions as it fell back towards Germany.

When Bulgaria signed a separate armistice on 29 September, Ludendorff, having been under great stress for months, suffered something similar to a breakdown. It was evident that Germany could no longer mount a successful defence.[164][165]

Men of US 64th Regiment, 7th Infantry Division, celebrate the news of the Armistice, 11 November 1918.


News of Germany's impending military defeat spread throughout the German armed forces. The threat of mutiny was rife. Admiral Reinhard Scheer and Ludendorff decided to launch a last attempt to restore the "valour" of the German Navy. Knowing the government of Prince Maximilian of Baden would veto any such action, Ludendorff decided not to inform him. Nonetheless, word of the impending assault reached sailors at Kiel. Many, refusing to be part of a naval offensive, which they believed to be suicidal, rebelled and were arrested. Ludendorff took the blame; the Kaiser dismissed him on 26 October. The collapse of the Balkans meant that Germany was about to lose its main supplies of oil and food. Its reserves had been used up, even as US troops kept arriving at the rate of 10,000 per day.[166] The Americans supplied more than 80% of Allied oil during the war, meaning no such loss of supplies could affect the Allied effort.[167]

With the military faltering and with widespread loss of confidence in the Kaiser, Germany moved towards peace. Prince Maximilian of Baden took charge of a new government as Chancellor of Germany to negotiate with the Allies. Negotiations with President Wilson began immediately, in the hope that he would offer better terms than the British and French. Wilson demanded a constitutional monarchy and parliamentary control over the German military.[168] There was no resistance when the Social Democrat Philipp Scheidemann on 9 November declared Germany to be a republic. The Kaiser, kings and other hereditary rulers all were removed from power. Imperial Germany was dead; a new Germany had been born: the Weimar Republic.[169]

Armistices and capitulations

Main article: Armistice of 11 November 1918


Ferdinand Foch, second from right, pictured outside the carriage in Compiègne after agreeing to the armistice that ended the war there. The carriage was later chosen by Nazi Germany as the symbolic setting of Pétain's June 1940 armistice.[170]

The collapse of the Central Powers came swiftly. Bulgaria was the first to sign an armistice, on 29 September 1918 at Saloniki.[171] On 30 October, the Ottoman Empire capitulated, signing the Armistice of Mudros.[171]

On 24 October, the Italians began a push that rapidly recovered territory lost after the Battle of Caporetto. This culminated in the Battle of Vittorio Veneto, which marked the end of the Austro-Hungarian Army as an effective fighting force. The offensive also triggered the disintegration of the Austro-Hungarian Empire. During the last week of October, declarations of independence were made in Budapest, Prague, and Zagreb. On 29 October, the imperial authorities asked Italy for an armistice, but the Italians continued advancing, reaching Trento, Udine, and Trieste. On 3 November, Austria-Hungary sent a flag of truce to ask for an armistice. The terms, arranged by telegraph with the Allied authorities in Paris, were communicated to the Austrian commander and accepted. The Armistice with Austria was signed in the Villa Giusti, near Padua, on 3 November. Austria and Hungary signed separate armistices following the overthrow of the Habsburg Monarchy. Following the outbreak of the German Revolution of 1918–1919, a republic was proclaimed on 9 November. The Kaiser fled to the Netherlands.

The New York Times of 11 November 1918.

On 11 November, at 5:00 am, an armistice with Germany was signed in a railroad carriage at Compiègne. At 11 am on 11 November 1918—"the eleventh hour of the eleventh day of the eleventh month"—a ceasefire came into effect. During the six hours between the signing of the armistice and its taking effect, opposing armies on the Western Front began to withdraw from their positions, but fighting continued along many areas of the front, as commanders wanted to capture territory before the war ended.

The occupation of the Rhineland took place following the Armistice. The occupying armies consisted of American, Belgian, British and French forces.

In November 1918, the Allies had ample supplies of men and materiel to invade Germany. Yet at the time of the armistice, no Allied force had crossed the German frontier; the Western Front was still some 450 mi (720 km) from Berlin; and the Kaiser's armies had retreated from the battlefield in good order. These factors enabled Hindenburg and other senior German leaders to spread the story that their armies had not really been defeated. This resulted in the stab-in-the-back legend,[172][173] which attributed Germany's defeat not to its inability to continue fighting (even though up to a million soldiers were suffering from the 1918 flu pandemic and unfit to fight), but to the public's failure to respond to its "patriotic calling" and the supposed intentional sabotage of the war effort, particularly by Jews, Socialists, and Bolsheviks.

The Allies had much more potential wealth they could spend on the war. One estimate (using 1913 US dollars) is that the Allies spent $58 billion on the war and the Central Powers only $25 billion. Among the Allies, the UK spent $21 billion and the US $17 billion; among the Central Powers, Germany spent $20 billion.[174]

Aftermath

Main article: Aftermath of World War I

The French military cemetery at the Douaumont ossuary, which contains the remains of more than 130,000 unknown soldiers.

In the aftermath of the war, four empires disappeared: the German, Austro-Hungarian, Ottoman, and Russian. Numerous nations regained their former independence, and new ones were created. Four dynasties, together with their ancillary aristocracies, all fell after the war: the Romanovs, the Hohenzollerns, the Habsburgs, and the Ottomans. Belgium and Serbia were badly damaged, as was France, with 1.4 million soldiers dead,[175] not counting other casualties. Germany and Russia were similarly affected.[176]


Formal end of the war

A formal state of war between the two sides persisted for another seven months, until the signing of the Treaty of Versailles with Germany on 28 June 1919. The United States Senate did not ratify the treaty despite public support for it,[177][178] and did not formally end its involvement in the war until the Knox–Porter Resolution was signed on 2 July 1921 by President Warren G. Harding.[179] For the United Kingdom and the British Empire, the state of war ceased under the provisions of the Termination of the Present War (Definition) Act 1918 with respect to:

Germany on 10 January 1920.[180]

Austria on 16 July 1920.[181]

Bulgaria on 9 August 1920.[182]

Hungary on 26 July 1921.[183]

Turkey on 6 August 1924.[184]

After the Treaty of Versailles, treaties with Austria, Hungary, Bulgaria, and the Ottoman Empire were signed. However, the negotiation of the latter treaty with the Ottoman Empire was followed by strife (the Turkish War of Independence), and a final peace treaty between the Allied Powers and the country that would shortly become the Republic of Turkey was not signed until 24 July 1923, at Lausanne.

Some war memorials date the end of the war as being when the Versailles Treaty was signed in 1919, which was when many of the troops serving abroad finally returned to their home countries; by contrast, most commemorations of the war's end concentrate on the armistice of 11 November 1918. Legally, the formal peace treaties were not complete until the last, the Treaty of Lausanne, was signed. Under its terms, the Allied forces left Constantinople on 23 August 1923.

Peace treaties and national boundaries


The Signing of Peace in the Hall of Mirrors, Versailles, 28th June 1919

After the war, the Paris Peace Conference imposed a series of peace treaties on the Central Powers officially ending the war. The 1919 Treaty of Versailles dealt with Germany, and, building on Wilson's 14th point, brought into being the League of Nations on 28 June 1919.[185][186]

The Central Powers had to acknowledge responsibility for "all the loss and damage to which the Allied and Associated Governments and their nationals have been subjected as a consequence of the war imposed upon them by" their aggression. In the Treaty of Versailles, this statement was Article 231. This article became known as the War Guilt clause, as the majority of Germans felt humiliated and resentful.[187] Overall the Germans felt they had been unjustly dealt with by what they called the "diktat of Versailles". Schulze says the Treaty placed Germany "under legal sanctions, deprived of military power, economically ruined, and politically humiliated."[188] Belgian historian Laurence Van Ypersele emphasizes the central role played by memory of the war and the Versailles Treaty in German politics in the 1920s and 1930s:

Active denial of war guilt in Germany and German resentment at both reparations and continued Allied occupation of the Rhineland made widespread revision of the meaning and memory of the war problematic. The legend of the "stab in the back" and the wish to revise the "Versailles diktat", and the belief in an international threat aimed at the elimination of the German nation persisted at the heart of German politics. Even a man of peace such as Stresemann publicly rejected German guilt. As for the Nazis, they waved the banners of domestic treason and international conspiracy in an attempt to galvanize the German nation into a spirit of revenge. Like Fascist Italy, Nazi Germany sought to redirect the memory of the war to the benefit of its own policies.[189]

Meanwhile, new nations liberated from German rule viewed the treaty as recognition of wrongs committed against small nations by much larger aggressive neighbors.[190] The Peace Conference required all the defeated powers to pay reparationsfor all the damage done to civilians. However, owing to economic difficulties and Germany being the only defeated power with an intact economy, the burden fell largely on Germany.

Austria-Hungary was partitioned into several successor states, including Austria, Hungary, Czechoslovakia and Yugoslavia, largely but not entirely along ethnic lines. Transylvania was shifted from Hungary to Greater Romania. The details were contained in the Treaty of Saint-Germain and the Treaty of Trianon. As a result of the Treaty of Trianon, 3.3 million Hungarians came under foreign rule. Although the Hungarians made up 54% of the population of the pre-war Kingdom of Hungary, only 32% of its territory was left to Hungary. Between 1920 and 1924, 354,000 Hungarians fled former Hungarian territories attached to Romania, Czechoslovakia, and Yugoslavia.[191]

The Russian Empire, which had withdrawn from the war in 1917 after the October Revolution, lost much of its western frontier as the newly independent nations of Estonia, Finland, Latvia, Lithuania, and Poland were carved from it. Romania took control of Bessarabia in April 1918.[192]

The Ottoman Empire disintegrated, and much of its non-Anatolian territory was awarded to various Allied powers as protectorates. The Turkish core in Anatolia was reorganised as the Republic of Turkey. The Ottoman Empire was to be partitioned by the Treaty of Sèvres of 1920. This treaty was never ratified by the Sultan and was rejected by the Turkish National Movement, leading to the victorious Turkish War of Independence and the much less stringent 1923 Treaty of Lausanne.

National identities

Further information: Sykes–Picot Agreement

Poland reemerged as an independent country after more than a century. The Kingdom of Serbia and its dynasty, as a "minor Entente nation" and the country with the most casualties per capita,[193][194][195] became the backbone of a new multinational state, the Kingdom of Serbs, Croats and Slovenes, later renamed Yugoslavia. Czechoslovakia, combining the Kingdom of Bohemia with parts of the Kingdom of Hungary, became a new nation. Russia became the Soviet Union and lost Finland, Estonia, Lithuania, and Latvia, which became independent countries. The Ottoman Empire was soon replaced by Turkey and several other countries in the Middle East.

Map of territorial changes in Europe after World War I (as of 1923).

In the British Empire, the war unleashed new forms of nationalism. In Australia and New Zealand the Battle of Gallipoli became known as those nations' "Baptism of Fire". It was the first major war in which the newly established countries fought, and it was one of the first times that Australian troops fought as Australians, not just subjects of the British Crown. Anzac Day, commemorating the Australian and New Zealand Army Corps, celebrates this defining moment.[196][197]

After the Battle of Vimy Ridge, where the Canadian divisions fought together for the first time as a single corps, Canadians began to refer to theirs as a nation "forged from fire".[198] Having succeeded on the same battleground where the "mother countries" had previously faltered, they were for the first time respected internationally for their own accomplishments. Canada entered the war as a Dominion of the British Empire and remained so, although it emerged with a greater measure of independence.[199][200] When Britain declared war in 1914, the dominions were automatically at war; at the conclusion, Canada, Australia, New Zealand, and South Africa were individual signatories of the Treaty of Versailles.[201]

The establishment of the modern state of Israel and the roots of the continuing Israeli–Palestinian conflict are partially found in the unstable power dynamics of the Middle East that resulted from World War I.[202] Before the end of the war, the Ottoman Empire had maintained a modest level of peace and stability throughout the Middle East.[203] With the fall of the Ottoman government, power vacuums developed and conflicting claims to land and nationhood began to emerge.[204] The political boundaries drawn by the victors of World War I were quickly imposed, sometimes after only cursory consultation with the local population. These continue to be problematic in the 21st-century struggles for national identity.[205][206] While the dissolution of the Ottoman Empire at the end of World War I was pivotal in contributing to the modern political situation of the Middle East, including the Arab-Israeli conflict,[207][208][209] the end of Ottoman rule also spawned lesser-known disputes over water and other natural resources.[210]

Health effects

Transporting Ottoman wounded at Sirkeci.


Emergency military hospital during the Spanish flu pandemic, which killed about 675,000 people in the United States alone. Camp Funston, Kansas, 1918.

The war had profound consequences for the health of soldiers. Of the 60 million European military personnel who were mobilized from 1914 to 1918, 8 million were killed, 7 million were permanently disabled, and 15 million were seriously injured. Germany lost 15.1% of its active male population, Austria-Hungary lost 17.1%, and France lost 10.5%.[211] In Germany civilian deaths were 474,000 higher than in peacetime, due in large part to food shortages and malnutrition that weakened resistance to disease.[212] By the end of the war, starvation caused by famine had killed approximately 100,000 people in Lebanon.[213] Between 5 and 10 million people died in the Russian famine of 1921.[214] By 1922, there were between 4.5 million and 7 million homeless children in Russia as a result of nearly a decade of devastation from World War I, the Russian Civil War, and the subsequent famine of 1920–1922.[215] Numerous anti-Soviet Russians fled the country after the Revolution; by the 1930s, the northern Chinese city of Harbin had 100,000 Russians.[216] Thousands more emigrated to France, England, and the United States.

In Australia, the effects of the war on the economy were no less severe. The Australian prime minister, Billy Hughes, wrote to the British prime minister, Lloyd George, "You have assured us that you cannot get better terms. I much regret it, and hope even now that some way may be found of securing agreement for demanding reparation commensurate with the tremendous sacrifices made by the British Empire and her Allies."[217] Australia received £5,571,720 in war reparations, but the direct cost of the war to Australia had been £376,993,052, and, by the mid-1930s, repatriation pensions, war gratuities, interest and sinking fund charges were £831,280,947.[217] Of about 416,000 Australians who served, about 60,000 were killed and another 152,000 were wounded.[218]

Diseases flourished in the chaotic wartime conditions. In 1914 alone, louse-borne epidemic typhus killed 200,000 in Serbia.[219] From 1918 to 1922, Russia had about 25 million infections and 3 million deaths from epidemic typhus.[220] In 1923, 13 million Russians contracted malaria, a sharp increase from the pre-war years.[221] In addition, a major influenza epidemic spread around the world. Overall, the 1918 flu pandemic killed at least 50 million people.[222][223]

Lobbying by Chaim Weizmann and fear that American Jews would encourage the United States to support Germany culminated in the British government's Balfour Declaration of 1917, endorsing creation of a Jewish homeland in Palestine.[224] A total of more than 1,172,000 Jewish soldiers served in the Allied and Central Power forces in World War I, including 275,000 in Austria-Hungary and 450,000 in Czarist Russia.[225]

The social disruption and widespread violence of the Russian Revolution of 1917 and the ensuing Russian Civil War sparked more than 2,000 pogroms in the former Russian Empire, mostly in Ukraine.[226] An estimated 60,000–200,000 civilian Jews were killed in the atrocities.[227]

In the aftermath of World War I, Greece fought against Turkish nationalists led by Mustafa Kemal, a war which eventually resulted in a massive population exchange between the two countries under the Treaty of Lausanne.[228] According to various sources,[229] several hundred thousand Greeks died during this period, which was tied in with the Greek Genocide.[230]

Technology

See also: Technology during World War I and Weapons of World War I

Ground warfare


A Russian armoured car, 1919

World War I began as a clash of 20th-century technology and 19th-century tactics, with the inevitably large ensuing casualties. By the end of 1917, however, the major armies, now numbering millions of men, had modernised and were making use of telephone, wireless communication,[231] armoured cars, tanks,[232] and aircraft. Infantry formations were reorganised, so that 100-man companies were no longer the main unit of manoeuvre; instead, squads of 10 or so men, under the command of a junior NCO, were favoured.

Artillery also underwent a revolution. In 1914, cannons were positioned in the front line and fired directly at their targets. By 1917, indirect fire with guns (as well as mortars and even machine guns) was commonplace, using new techniques for spotting and ranging, notably aircraft and the often overlooked field telephone.[233] Counter-battery missions also became commonplace, and sound detection was used to locate enemy batteries.

Germany was far ahead of the Allies in utilising heavy indirect fire. The German Army employed 150 mm (6 in) and 210 mm (8 in) howitzers in 1914, when typical French and British guns were only 75 mm (3 in) and 105 mm (4 in). The British had a 6 inch (152 mm) howitzer, but it was so heavy it had to be hauled to the field in pieces and assembled. The Germans also fielded Austrian 305 mm (12 in) and 420 mm (17 in) guns and, even at the beginning of the war, had inventories of various calibers of Minenwerfer, which were ideally suited for trench warfare.[234]

Much of the combat involved trench warfare, in which hundreds often died for each yard gained. Many of the deadliest battles in history occurred during World War I. Such battles include Ypres, the Marne, Cambrai, the Somme, Verdun, and Gallipoli. The Germans employed the Haber process of nitrogen fixation to provide their forces with a constant supply of gunpowder despite the British naval blockade.[235] Artillery was responsible for the largest number of casualties[236] and consumed vast quantities of explosives. The large number of head wounds caused by exploding shells and fragmentation forced the combatant nations to develop the modern steel helmet, led by the French, who introduced the Adrian helmet in 1915. It was quickly followed by the Brodie helmet, worn by British Imperial and US troops, and in 1916 by the distinctive German Stahlhelm, a design, with improvements, still in use today.

Gas! GAS! Quick, boys! – An ecstasy of fumbling,
Fitting the clumsy helmets just in time;
But someone still was yelling out and stumbling,
And flound'ring like a man in fire or lime ...
Dim, through the misty panes and thick green light,
As under a green sea, I saw him drowning.

—Wilfred Owen, Dulce et Decorum est, 1917[237]

A Canadian soldier with mustard gas burns, ca. 1917–1918.

The widespread use of chemical warfare was a distinguishing feature of the conflict. Gases used included chlorine, mustard gas and phosgene. Few war casualties were caused by gas,[238] as effective countermeasures to gas attacks were quickly created, such as gas masks. The use of chemical warfare and small-scale strategic bombing were both outlawed by the Hague Conventions of 1899 and 1907, and both proved to be of limited effectiveness,[239] though they captured the public imagination.[240]

The most powerful land-based weapons were railway guns, manufactured by the Krupp works, weighing hundreds of tons apiece. These were nicknamed Big Berthas, even though the namesake was not a railway gun. Germany developed the Paris Gun, able to bombard Paris from over 100 kilometres (62 mi), though shells were relatively light at 94 kilograms (210 lb).

British Vickers machine gun, 1917.

Trenches, machine guns, air reconnaissance, barbed wire, and modern artillery with fragmentation shells helped bring the battle lines of World War I to a stalemate. The British and the French sought a solution with the creation of the tank and mechanised warfare. The first British tanks were used during the Battle of the Somme on 15 September 1916. Mechanical reliability was an issue, but the experiment proved its worth. Within a year, the British were fielding tanks by the hundreds, and they showed their potential during the Battle of Cambrai in November 1917, by breaking the Hindenburg Line, while combined arms teams captured 8,000 enemy soldiers and 100 guns. Meanwhile, the French introduced the first tanks with a rotating turret, the Renault FT, which became a decisive tool of victory. The conflict also saw the introduction of light automatic weapons and submachine guns, such as the Lewis Gun, the Browning automatic rifle, and the Bergmann MP18.

Another new weapon, the flamethrower, was first used by the German army and later adopted by other forces. Although not of high tactical value, the flamethrower was a powerful, demoralising weapon that caused terror on the battlefield.

Trench railways evolved to supply the enormous quantities of food, water, and ammunition required to support large numbers of soldiers in areas where conventional transportation systems had been destroyed. Internal combustion engines and improved traction systems for automobiles and trucks/lorries eventually rendered trench railways obsolete.


Naval

Germany deployed U-boats (submarines) after the war began. Alternating between restricted and unrestricted submarine warfare in the Atlantic, the Kaiserliche Marine employed them to deprive the British Isles of vital supplies. The deaths of British merchant sailors and the seeming invulnerability of U-boats led to the development of depth charges (1916), hydrophones (passive sonar, 1917), blimps, hunter-killer submarines (HMS R-1, 1917), forward-throwing anti-submarine weapons, and dipping hydrophones (the latter two both abandoned in 1918).[80] To extend their operations, the Germans proposed supply submarines (1916). Most of these would be forgotten in the interwar period until World War II revived the need.

Aviation

Main article: Aviation in World War I

RAF Sopwith Camel. In April 1917, the average life expectancy of a British pilot on the Western Front was 93 flying hours.[241]

Fixed-wing aircraft were first used militarily by the Italians in Libya on 23 October 1911 during the Italo-Turkish War for reconnaissance, soon followed by the dropping of grenades and aerial photography the next year. By 1914, their military utility was obvious. They were initially used for reconnaissance and ground attack. To shoot down enemy planes, anti-aircraft guns and fighter aircraft were developed. Strategic bombers were created, principally by the Germans and British, though the former used Zeppelins as well.[242] Towards the end of the conflict, aircraft carriers were used for the first time, with HMS Furious launching Sopwith Camels in a raid to destroy the Zeppelin hangars at Tondern in 1918.[243]


Manned observation balloons, floating high above the trenches, were used as stationary reconnaissance platforms, reporting enemy movements and directing artillery. Balloons commonly had a crew of two, equipped with parachutes,[244] so that if there was an enemy air attack the crew could parachute to safety. At the time, parachutes were too heavy to be used by pilots of aircraft (with their marginal power output), and smaller versions were not developed until the end of the war; they were also opposed by the British leadership, who feared they might promote cowardice.[245]

Recognised for their value as observation platforms, balloons were important targets for enemy aircraft. To defend them against air attack, they were heavily protected by anti-aircraft guns and patrolled by friendly aircraft; to attack them, unusual weapons such as air-to-air rockets were even tried. Thus, the reconnaissance value of blimps and balloons contributed to the development of air-to-air combat between all types of aircraft, and to the trench stalemate, because it was impossible to move large numbers of troops undetected. The Germans conducted air raids on England during 1915 and 1916 with airships, hoping to damage British morale and cause aircraft to be diverted from the front lines, and indeed the resulting panic led to the diversion of several squadrons of fighters from France.[242][245]

War crimes

Baralong incidents

Main article: Baralong incidents

On 19 August 1915, the German submarine U-27 was sunk by the British Q-ship HMS Baralong. All German survivors were summarily executed by Baralong's crew on the orders of Lieutenant Godfrey Herbert, the captain of the ship. The shooting was reported to the media by American citizens who were on board the Nicosia, a British freighter loaded with war supplies, which was stopped by U-27 just minutes before the incident.[246]


On 24 September, Baralong destroyed U-41, which was in the process of sinking the cargo ship Urbino. According to Karl Goetz, the submarine's commander, Baralong continued to fly the US flag after firing on U-41 and then rammed the lifeboat – carrying the German survivors – sinking it.[247]

HMHS Llandovery Castle

The Canadian hospital ship HMHS Llandovery Castle was torpedoed by the German submarine SM U-86 on 27 June 1918 in violation of international law. Only 24 of the 258 medical personnel, patients, and crew survived. Survivors reported that the U-boat surfaced and ran down the lifeboats, machine-gunning survivors in the water. The U-boat captain, Helmut Patzig, was charged with war crimes in Germany following the war, but escaped prosecution by going to the Free City of Danzig, beyond the jurisdiction of German courts.[248]

Chemical weapons in warfare

Main article: Chemical weapons in World War I

French soldiers making a gas and flame attack on German trenches in Flanders

The first successful use of poison gas as a weapon of warfare occurred during the Second Battle of Ypres (April 22 – May 25, 1915).[249] Gas was soon used by all major belligerents throughout the war. Chemical weapons employed by both sides throughout the war are estimated to have inflicted 1.3 million casualties. For example, the British had over 180,000 chemical weapons casualties during the war, and up to one-third of American casualties were caused by them. The Russian Army reportedly suffered roughly 500,000 chemical weapon casualties in World War I.[250] The use of chemical weapons in warfare was in direct violation of the 1899 Hague Declaration Concerning Asphyxiating Gases and the 1907 Hague Convention on Land Warfare, which prohibited their use.[251][252]

Poison gas was not limited to combatants; civilians were also at risk, as winds blew the gases through their towns. Civilians rarely had a warning system in place to alert them of the danger, and in addition to poor warning systems, they often did not have access to effective gas masks. An estimated 100,000–260,000 civilian casualties were caused by chemical weapons during the conflict, and tens of thousands more (along with military personnel) died from scarring of the lungs, skin damage, and cerebral damage in the years after the conflict ended. Many commanders on both sides knew that such weapons would cause major harm to civilians, as wind would blow poison gases into nearby civilian towns, but nonetheless continued to use them throughout the war. British Field Marshal Sir Douglas Haig wrote in his diary: "My officers and I were aware that such weapons would cause harm to women and children living in nearby towns, as strong winds were common in the battlefront. However, because the weapon was to be directed against the enemy, none of us were overly concerned at all."[253][254][255][256]

Genocide and ethnic cleansing

Austro-Hungarian soldiers executing men and women in Serbia, 1916.[257]

Armenians killed during the Armenian Genocide. Image taken from Ambassador Morgenthau's Story, written by Henry Morgenthau, Sr. and published in 1918.[258]

Main article: Ottoman casualties of World War I
See also: Armenian Genocide, Assyrian Genocide, Greek genocide and Genocide denial

The ethnic cleansing of the Ottoman Empire's Armenian population, including mass deportations and executions, during the final years of the Ottoman Empire is considered genocide.[259] The Ottomans saw the entire Armenian population as an enemy[260] that had chosen to side with Russia at the beginning of the war.[261] In early 1915, a number of Armenians joined the Russian forces, and the Ottoman government used this as a pretext to issue the Tehcir Law (Law on Deportation). This authorized the deportation of Armenians from the Empire's eastern provinces to Syria between 1915 and 1917. The exact number of deaths is unknown; however, the International Association of Genocide Scholars estimates over 1 million.[259][262]

The government of Turkey has consistently rejected charges of genocide, arguing that those who died were victims of inter-ethnic fighting, famine, or disease during World War I.[263] Other ethnic groups were similarly attacked by the Ottoman Empire during this period, including Assyrians and Greeks, and some scholars consider those events to be part of the same policy of extermination.[264][265][266]

Russian Empire

Main article: Anti-Jewish pogroms in the Russian Empire
See also: Russian occupation of Eastern Galicia, 1914–1915, Volhynia and Volga Germans

Many pogroms accompanied the Russian Revolution of 1917 and the ensuing Russian Civil War. 60,000–200,000 civilian Jews were killed in the atrocities throughout the former Russian Empire (mostly within the Pale of Settlement in present-day Ukraine).[267]

Rape of Belgium

Main article: Rape of Belgium

The German invaders treated any resistance—such as sabotaging rail lines—as illegal and immoral, and shot the offenders and burned buildings in retaliation. In addition, they tended to suspect that most civilians were potential franc-tireurs (guerrillas) and, accordingly, took and sometimes killed hostages from among the civilian population. The German army executed over 6,500 French and Belgian civilians between August and November 1914, usually in near-random large-scale shootings of civilians ordered by junior German officers. The German Army destroyed 15,000–20,000 buildings—most famously the university library at Louvain—and generated a wave of refugees of over a million people. Over half the German regiments in Belgium were involved in major incidents.[268] Thousands of workers were shipped to Germany to work in factories. British propaganda dramatizing the Rape of Belgium attracted much attention in the United States, while Berlin said it was both lawful and necessary because of the threat of franc-tireurs like those in France in 1870.[269] The British and French magnified the reports and disseminated them at home and in the United States, where they played a major role in dissolving support for Germany.[270][271]

Soldiers' experiences

Main articles: List of last surviving World War I veterans by country, World War I casualties, Commonwealth War Graves Commission and American Battle Monuments Commission

The First Contingent of the Bermuda Volunteer Rifle Corps to the 1 Lincolns, training in Bermuda for the Western Front, winter 1914–1915. The two BVRC contingents suffered 75% casualties.

The British soldiers of the war were initially volunteers but increasingly were conscripted into service. Surviving veterans, returning home, often found that they could only discuss their experiences amongst themselves. Grouping together, they formed "veterans' associations" or "Legions".

Prisoners of war

Main article: World War I prisoners of war in Germany

German prisoners in a French prison camp, during the later part of the war.

About eight million men surrendered and were held in POW camps during the war. All nations pledged to follow the Hague Conventions on fair treatment of prisoners of war, and the survival rate for POWs was generally much higher than that of their peers at the front.[272] Individual surrenders were uncommon; large units usually surrendered en masse. At the siege of Maubeuge about 40,000 French soldiers surrendered, at the battle of Galicia Russians took about 100,000 to 120,000 Austrian captives, at the Brusilov Offensive about 325,000 to 417,000 Germans and Austrians surrendered to Russians, and at the Battle of Tannenberg 92,000 Russians surrendered. When the besieged garrison of Kaunas surrendered in 1915, some 20,000 Russians became prisoners; at the battle near Przasnysz (February–March 1915) 14,000 Germans surrendered to Russians, and at the First Battle of the Marne about 12,000 Germans surrendered to the Allies. 25–31% of Russian losses (as a proportion of those captured, wounded, or killed) were to prisoner status; for Austria-Hungary 32%, for Italy 26%, for France 12%, for Germany 9%, and for Britain 7%. Prisoners from the Allied armies totalled about 1.4 million (not including Russia, which lost 2.5–3.5 million men as prisoners). From the Central Powers about 3.3 million men became prisoners; most of them surrendered to Russians.[273] Germany held 2.5 million prisoners; Russia held 2.2–2.9 million; while Britain and France held about 720,000. Most were captured just before the Armistice. The United States held 48,000. The most dangerous moment was the act of surrender, when helpless soldiers were sometimes gunned down.[274][275] Once prisoners reached a camp, conditions were, in general, satisfactory (and much better than in World War II), thanks in part to the efforts of the International Red Cross and inspections by neutral nations. However, conditions were terrible in Russia: starvation was common for prisoners and civilians alike; about 15–20% of the prisoners in Russia died, while 8% of Russians died in Central Powers imprisonment.[276] In Germany, food was scarce, but only 5% died.[277][278][279]


Emaciated Indian Army soldier who survived the Siege of Kut.

The Ottoman Empire often treated POWs poorly.[280] Some 11,800 British Empire soldiers, most of them Indians, became prisoners after the Siege of Kut in Mesopotamia in April 1916; 4,250 died in captivity.[281] Although many were in a poor condition when captured, Ottoman officers forced them to march 1,100 kilometres (684 mi) to Anatolia. A survivor said: "We were driven along like beasts; to drop out was to die."[282] The survivors were then forced to build a railway through the Taurus Mountains.

In Russia, when the prisoners from the Czech Legion of the Austro-Hungarian army were released in 1917, they re-armed themselves and briefly became a military and diplomatic force during the Russian Civil War.

While the Allied prisoners of the Central Powers were quickly sent home at the end of active hostilities, the same treatment was not granted to Central Power prisoners of the Allies and Russia, many of whom served as forced labor, e.g., in France until 1920. They were released only after many approaches by the Red Cross to the Allied Supreme Council.[283] German prisoners were still being held in Russia as late as 1924.[284]

Military attachés and war correspondents

Main article: Military attachés and war correspondents in the First World War

Military and civilian observers from every major power closely followed the course of the war. Many were able to report on events from a perspective somewhat akin to modern "embedded" positions within the opposing land and naval forces.

Support and opposition to the war

Support

Poster urging women to join the British war effort, published by the Young Women's Christian Association

In the Balkans, Yugoslav nationalists such as their leader, Ante Trumbić, strongly supported the war, desiring the freedom of Yugoslavs from Austria-Hungary and other foreign powers and the creation of an independent Yugoslavia.[285] The Yugoslav Committee was formed in Paris on 30 April 1915 but soon moved its office to London; Trumbić led the Committee.[285] In April 1918, the Rome Congress of Oppressed Nationalities met, including Czechoslovak, Italian, Polish, Transylvanian, and Yugoslav representatives who urged the Allies to support national self-determination for the peoples residing within Austria-Hungary.[286]

In the Middle East, Arab nationalism soared in Ottoman territories in response to the rise of Turkish nationalism during the war, with Arab nationalist leaders advocating the creation of a pan-Arab state.[287] In 1916, the Arab Revolt began in Ottoman-controlled territories of the Middle East in an effort to achieve independence.[287]

A number of socialist parties initially supported the war when it began in August 1914.[286] But European socialists split on national lines, with the concept of class conflict held by radical socialists such as Marxists and syndicalists being overborne by their patriotic support for war.[288] Once the war began, Austrian, British, French, German, and Russian socialists followed the rising nationalist current by supporting their countries' intervention in the war.[289]

Italian nationalism was stirred by the outbreak of the war and was initially strongly supported by a variety of political factions. One of the most prominent and popular Italian nationalist supporters of the war was Gabriele d'Annunzio, who promoted Italian irredentism and helped sway the Italian public to support intervention in the war.[290] The Italian Liberal Party, under the leadership of Paolo Boselli, promoted intervention in the war on the side of the Allies and utilised the Dante Alighieri Society to promote Italian nationalism.[291]

Italian socialists were divided on whether to support the war or oppose it; some were militant supporters of the war, including Benito Mussolini and Leonida Bissolati.[292] However, the Italian Socialist Party decided to oppose the war after anti-militarist protestors were killed, resulting in a general strike called Red Week.[293] The Italian Socialist Party purged itself of pro-war nationalist members, including Mussolini.[293]

Mussolini, a syndicalist who supported the war on grounds of irredentist claims on Italian-populated regions of Austria-Hungary, formed the pro-interventionist Il Popolo d'Italia and the Fasci Rivoluzionario d'Azione Internazionalista ("Revolutionary Fasci for International Action") in October 1914 that later developed into the Fasci di Combattimento in 1919, the origin of fascism.[294] Mussolini's nationalism enabled him to raise funds from Ansaldo (an armaments firm) and other companies to create Il Popolo d'Italia to convince socialists and revolutionaries to support the war.[295]

Opposition

Main articles: Opposition to World War I and French Army Mutinies


Sackville Street (now O'Connell Street) after the 1916 Easter Rising in Dublin.

Once war was declared, many socialists and trade unions backed their governments. Among the exceptions were the Bolsheviks, the Socialist Party of America, and the Italian Socialist Party, and individuals such as Karl Liebknecht, Rosa Luxemburg, and their followers in Germany.

Benedict XV, elected to the papacy less than three months into World War I, made the war and its consequences the main focus of his early pontificate. In stark contrast to his predecessor,[296] five days after his election he spoke of his determination to do what he could to bring peace. His first encyclical, Ad beatissimi Apostolorum, given 1 November 1914, was concerned with this subject. Benedict XV found his abilities and unique position as a religious emissary of peace ignored by the belligerent powers. The 1915 Treaty of London between Italy and the Triple Entente included secret provisions whereby the Allies agreed with Italy to ignore papal peace moves towards the Central Powers. Consequently, the publication of Benedict's proposed seven-point Peace Note of August 1917 was roundly ignored by all parties except Austria-Hungary.[297]

The Deserter, 1916. Anti-war cartoon depicting Jesus facing a firing squad with soldiers from five European countries.

In Britain, in 1914, the Public Schools Officers' Training Corps annual camp was held at Tidworth Pennings, near Salisbury Plain. Head of the British Army, Lord Kitchener, was to review the cadets, but the imminence of the war prevented him. General Horace Smith-Dorrien was sent instead. He surprised the two or three thousand cadets by declaring (in the words of Donald Christopher Smith, a Bermudian cadet who was present), "that war should be avoided at almost any cost, that war would solve nothing, that the whole of Europe and more besides would be reduced to ruin, and that the loss of life would be so large that whole populations would be decimated. In our ignorance I, and many of us, felt almost ashamed of a British General who uttered such depressing and unpatriotic sentiments, but during the next four years, those of us who survived the holocaust—probably not more than one-quarter of us—learned how right the General's prognosis was and how courageous he had been to utter it."[298] Voicing these sentiments did not hinder Smith-Dorrien's career, or prevent him from doing his duty in World War I to the best of his abilities.

Execution at Verdun at the time of the mutinies in 1917.

German Revolution, Kiel, 1918.

Many countries jailed those who spoke out against the conflict. These included Eugene Debs in the United States and Bertrand Russell in Britain. In the US, the Espionage Act of 1917 and Sedition Act of 1918 made it a federal crime to oppose military recruitment or make any statements deemed "disloyal". Publications at all critical of the government were removed from circulation by postal censors,[142] and many served long prison sentences for statements of fact deemed unpatriotic.

A number of nationalists opposed intervention, particularly within states that the nationalists were hostile to. Although the vast majority of Irish people consented to participate in the war in 1914 and 1915, a minority of advanced Irish nationalists staunchly opposed taking part.[299] The war began amid the Home Rule crisis in Ireland that had resurfaced in 1912 and, by July 1914, there was a serious possibility of an outbreak of civil war in Ireland.[300] Irish nationalists and Marxists attempted to pursue Irish independence, culminating in the Easter Rising of 1916, with Germany sending 20,000 rifles to Ireland to stir unrest in Britain.[300] The UK government placed Ireland under martial law in response to the Easter Rising; although, once the immediate threat of revolution had dissipated, the authorities did try to make concessions to nationalist feeling.[301]

Other opposition came from conscientious objectors—some socialist, some religious—who refused to fight. In Britain, 16,000 people asked for conscientious objector status.[302] Some of them, most notably prominent peace activist Stephen Henry Hobhouse, refused both military and alternative service.[303] Many suffered years of prison, including solitary confinement and bread and water diets. Even after the war, in Britain many job advertisements were marked "No conscientious objectors need apply".

The Central Asian Revolt started in the summer of 1916, when the Russian Empire government ended its exemption of Muslims from military service.[304]

In 1917, a series of French Army Mutinies led to dozens of soldiers being executed and many more imprisoned.

In Milan, in May 1917, Bolshevik revolutionaries organised and engaged in rioting calling for an end to the war, and managed to close down factories and stop public transportation.[305] The Italian army was forced to enter Milan with tanks and machine guns to face Bolsheviks and anarchists, who fought violently until 23 May when the army gained control of the city. Almost 50 people (including three Italian soldiers) were killed and over 800 people arrested.[305]

In September 1917, Russian soldiers in France began questioning why they were fighting for the French at all and mutinied.[306] In Russia, opposition to the war led to soldiers also establishing their own revolutionary committees, which helped foment the October Revolution of 1917, with the call going up for "bread, land, and peace". The Bolsheviks agreed to a peace treaty with Germany, the Treaty of Brest-Litovsk, despite its harsh conditions.

In northern Germany, the end of October 1918 saw the beginning of the German Revolution of 1918–1919. Units of the German Navy refused to set sail for a last, large-scale operation in a war which they saw as good as lost; this initiated the uprising. The sailors' revolt which then ensued in the naval ports of Wilhelmshaven and Kiel spread across the whole country within days and led to the proclamation of a republic on 9 November 1918 and shortly thereafter to the abdication of Kaiser Wilhelm II.

Conscription

Young men registering for conscription, New York City, June 5, 1917.

Conscription was common in most European countries. However, it was controversial in English-speaking countries. It was especially unpopular among minority ethnic groups—especially the Irish Catholics in Ireland[307] and Australia, and the French Catholics in Canada. In Canada the issue produced a major political crisis that permanently alienated the Francophones. It opened a political gap between French Canadians, who believed their true loyalty was to Canada and not to the British Empire, and members of the Anglophone majority, who saw the war as a duty to their British heritage.[308] In Australia, a sustained pro-conscription campaign by Billy Hughes, the Prime Minister, caused a split in the Australian Labor Party, so Hughes formed the Nationalist Party of Australia in 1917 to pursue the matter. Farmers, the labour movement, the Catholic Church, and the Irish Catholics successfully opposed Hughes' push, which was rejected in two plebiscites.[309]

In Britain, conscription resulted in the calling up of nearly every physically fit man in Britain—six of ten million eligible. Of these, about 750,000 lost their lives; most deaths were of young unmarried men; however, 160,000 wives lost husbands and 300,000 children lost fathers.[310] In the United States, conscription began in 1917 and was generally well received, with a few pockets of opposition in isolated rural areas.[311]

Legacy and memory

Main article: World War I in popular culture

... "Strange, friend," I said, "Here is no cause to mourn." "None," said the other, "Save the undone years" ...

—Wilfred Owen, Strange Meeting, 1918[237]

The first tentative efforts to comprehend the meaning and consequences of modern warfare began during the initial phases of the war, and this process continued throughout and after the end of hostilities, and is still underway, more than a century later.

Historiography

Historian Heather Jones argues that the historiography has been reinvigorated by the cultural turn in recent years. Scholars have raised entirely new questions regarding military occupation, radicalization of politics, race, and the male body. Furthermore, new research has revised our understanding of five major topics that historians have long debated. These are: Why did the war begin? Why did the Allies win? Were the generals to blame for the high casualty rates? How did the soldiers endure the horrors of trench warfare? To what extent did the civilian homefront accept and endorse the war effort?[312]

Memorials

A typical village war memorial to soldiers killed in World War I

Main article: World War I memorials

Memorials were erected in thousands of villages and towns. Close to battlefields, those buried in improvised burial grounds were gradually moved to formal graveyards under the care of organisations such as the Commonwealth War Graves Commission, the American Battle Monuments Commission, the German War Graves Commission, and Le Souvenir français. Many of these graveyards also have central monuments to the missing or unidentified dead, such as the Menin Gate memorial and the Thiepval Memorial to the Missing of the Somme.

On 3 May 1915, during the Second Battle of Ypres, Lieutenant Alexis Helmer was killed. At his graveside, his friend John McCrae, M.D., of Guelph, Ontario, Canada, wrote the memorable poem In Flanders Fields as a salute to those who perished in the Great War. Published in Punch on 8 December 1915, it is still recited today, especially on Remembrance Day and Memorial Day.[313][314]

National WWI Museum and Memorial in Kansas City, Missouri, is a United States memorial dedicated to all Americans who served in World War I. The Liberty Memorial was dedicated on 1 November 1921, when the supreme Allied commanders spoke to a crowd of more than 100,000 people.[315] It was the only time in history these leaders were together in one place—Lieutenant General Baron Jacques of Belgium; General Armando Diaz of Italy; Marshal Ferdinand Foch of France; General Pershing of the United States; and Admiral D. R. Beatty of Britain.[316] After three years of construction, the Liberty Memorial was completed and President Calvin Coolidge delivered the dedication speech to a crowd of 150,000 people in 1926. Liberty Memorial is also home to the National World War I Museum, the only museum in the United States dedicated solely to World War I.[315]

The UK Government has budgeted substantial resources to the commemoration of the war during the period 2014 to 2018. The lead body is the Imperial War Museum.[317] On 3 August 2014, French President Francois Hollande and German President Joachim Gauck together marked the centenary of Germany's declaration of war on France by laying the first stone of a memorial in Vieil Armand, known in German as Hartmannswillerkopf, for French and German soldiers killed in the war.[318]

Cultural memory

Left: John McCrae, author of In Flanders Fields. Right: Siegfried Sassoon.

World War I had a lasting impact on social memory. It was seen by many in Britain as signalling the end of an era of stability stretching back to the Victorian period, and across Europe many regarded it as a watershed.[319] Historian Samuel Hynes explained:

A generation of innocent young men, their heads full of high abstractions like Honour, Glory and England, went off to war to make the world safe for democracy. They were slaughtered in stupid battles planned by stupid generals. Those who survived were shocked, disillusioned and embittered by their war experiences, and saw that their real enemies were not the Germans, but the old men at home who had lied to them. They rejected the values of the society that had sent them to war, and in doing so separated their own generation from the past and from their cultural inheritance.[320]

This has become the most common perception of World War I, perpetuated by the art, cinema, poems, and stories published subsequently. Films such as All Quiet on the Western Front, Paths of Glory and King & Country have perpetuated the idea, while war-time films including Camrades, Poppies of Flanders, and Shoulder Arms indicate that contemporary views of the war were overall far more positive.[321] Likewise, the art of Paul Nash, John Nash, Christopher Nevinson, and Henry Tonks in Britain painted a negative view of the conflict in keeping with the growing perception, while popular war-time artists such as Muirhead Bone painted more serene and pleasant interpretations, subsequently rejected as inaccurate.[320] Several historians like John Terraine, Niall Ferguson and Gary Sheffield have challenged these interpretations as partial and polemical views:

These beliefs did not become widely shared because they offered the only accurate interpretation of wartime events. In every respect, the war was much more complicated than they suggest. In recent years, historians have argued persuasively against almost every popular cliché of World War I. It has been pointed out that, although the losses were devastating, their greatest impact was socially and geographically limited. The many emotions other than horror experienced by soldiers in and out of the front line, including comradeship, boredom, and even enjoyment, have been recognised. The war is not now seen as a 'fight about nothing', but as a war of ideals, a struggle between aggressive militarism and more or less liberal democracy. It has been acknowledged that British generals were often capable men facing difficult challenges, and that it was under their command that the British army played a major part in the defeat of the Germans in 1918: a great forgotten victory.[321]

Though these views have been discounted as "myths",[320][322] they are common.[321] They have dynamically changed according to contemporary influences, reflecting in the 1950s perceptions of the war as "aimless" following the contrasting Second World War and emphasising conflict within the ranks during times of class conflict in the 1960s.[321] The majority of additions to the contrary are often rejected.[321]

Social trauma

A 1919 book for veterans, from the US War Department.

The social trauma caused by unprecedented rates of casualties manifested itself in different ways, which have been the subject of subsequent historical debate.[323]

The optimism of la belle époque was destroyed, and those who had fought in the war were referred to as the Lost Generation.[324] For years afterwards, people mourned the dead, the missing, and the many disabled.[325] Many soldiers returned with severe trauma, suffering from shell shock (also called neurasthenia, a condition related to posttraumatic stress disorder).[326] Many more returned home with few after-effects; however, their silence about the war contributed to the conflict's growing mythological status.[323] Though many participants did not share in the experiences of combat or spend any significant time at the front, or had positive memories of their service, the images of suffering and trauma became the widely shared perception.[323] Such historians as Dan Todman, Paul Fussell, and Samuel Hynes have all published works since the 1990s arguing that these common perceptions of the war are factually incorrect.[323]

Discontent in Germany

The rise of Nazism and Fascism included a revival of the nationalist spirit and a rejection of many post-war changes. Similarly, the popularity of the stab-in-the-back legend (German: Dolchstoßlegende) was a testament to the psychological state of defeated Germany and was a rejection of responsibility for the conflict. This conspiracy theory of betrayal became common, and the German populace came to see themselves as victims. The widespread acceptance of the "stab-in-the-back" theory delegitimized the Weimar government and destabilized the system, opening it to extremes of right and left.

Communist and fascist movements around Europe drew strength from this theory and enjoyed a new level of popularity. These feelings were most pronounced in areas directly or harshly affected by the war. Adolf Hitler was able to gain popularity by utilising German discontent with the still controversial Treaty of Versailles.[327] World War II was in part a continuation of the power struggle never fully resolved by World War I. Furthermore, it was common for Germans in the 1930s to justify acts of aggression due to perceived injustices imposed by the victors of World War I.[328][329][330] American historian William Rubinstein wrote that:

The 'Age of Totalitarianism' included nearly all of the infamous examples of genocide in modern history, headed by the Jewish Holocaust, but also comprising the mass murders and purges of the Communist world, other mass killings carried out by Nazi Germany and its allies, and also the Armenian genocide of 1915. All these slaughters, it is argued here, had a common origin, the collapse of the elite structure and normal modes of government of much of central, eastern and southern Europe as a result of World War I, without which surely neither Communism nor Fascism would have existed except in the minds of unknown agitators and crackpots.[331]

Economic effects

See also: Economic history of World War I

Poster showing women workers, 1915.

One of the most dramatic effects of the war was the expansion of governmental powers and responsibilities in Britain, France, the United States, and the Dominions of the British Empire. To harness all the power of their societies, governments created new ministries and powers. New taxes were levied and laws enacted, all designed to bolster the war effort; many have lasted to this day. Similarly, the war strained the abilities of some formerly large and bureaucratised governments, such as in Austria-Hungary and Germany.

Gross domestic product (GDP) increased for three Allies (Britain, Italy, and US), but decreased in France and Russia, in neutral Netherlands, and in the three main Central Powers.

The shrinkage in GDP in Austria, Russia, France, and the Ottoman Empire ranged between 30% and 40%. In Austria, for example, most pigs were slaughtered, so at war's end there was no meat.

In all nations, the government's share of GDP increased, surpassing 50% in both Germany and France and nearly reaching that level in Britain. To pay for purchases in the United States, Britain cashed in its extensive investments in American railroads and then began borrowing heavily on Wall Street. President Wilson was on the verge of cutting off the loans in late 1916, but allowed a great increase in US government lending to the Allies. After 1919, the US demanded repayment of these loans. The repayments were, in part, funded by German reparations which, in turn, were supported by American loans to Germany. This circular system collapsed in 1931 and the loans were never repaid. Britain still owed the United States $4.4 billion[332] of World War I debt in 1934, and this money was never repaid.[333]

Macro- and micro-economic consequences devolved from the war. Families were altered by the departure of many men. With the death or absence of the primary wage earner, women were forced into the workforce in unprecedented numbers. At the same time, industry needed to replace the lost labourers sent to war. This aided the struggle for voting rights for women.[334]

Hyperinflation reduced German banknotes' value so much that they could be used as wallpaper. Many savers were ruined.[335]

World War I further compounded the gender imbalance, adding to the phenomenon of surplus women. The deaths of nearly one million men during the war in Britain increased the gender gap by almost a million: from 670,000 to 1,700,000. The number of unmarried women seeking economic means grew dramatically. In addition, demobilisation and economic decline following the war caused high unemployment. The war increased female employment; however, the return of demobilised men displaced many from the workforce, as did the closure of many of the wartime factories.

In Britain, rationing was finally imposed in early 1918, limited to meat, sugar, and fats (butter and margarine), but not bread. The new system worked smoothly. From 1914 to 1918, trade union membership doubled, from a little over four million to a little over eight million.

Britain turned to her colonies for help in obtaining essential war materials whose supply had become difficult from traditional sources. Geologists such as Albert Ernest Kitson were called on to find new resources of precious minerals in the African colonies. Kitson discovered important new deposits of manganese, used in munitions production, in the Gold Coast.[336]

Article 231 of the Treaty of Versailles (the so-called "war guilt" clause) stated Germany accepted responsibility for "all the loss and damage to which the Allied and Associated Governments and their nationals have been subjected as a consequence of the war imposed upon them by the aggression of Germany and her allies."[337] It was worded as such to lay a legal basis for reparations, and the same clause was inserted, mutatis mutandis, "in the treaties with Austria and Hungary, neither of whom interpreted it as a declaration of war guilt."[338] In 1921, the total reparation sum was placed at 132 billion gold marks. However, "Allied experts knew that Germany could not pay" this sum. The total sum was divided into three categories, with the third being "deliberately designed to be chimerical" and its "primary function was to mislead public opinion ... into believing the 'total sum was being maintained'."[339] Thus, 50 billion gold marks (12.5 billion dollars) "represented the actual Allied assessment of German capacity to pay" and "therefore ... represented the total German reparations" figure that had to be paid.[339]

This figure could be paid in cash or in kind (coal, timber, chemical dyes, etc.). In addition, some of the territory lost—via the Treaty of Versailles—was credited towards the reparation figure, as were other acts such as helping to restore the Library of Louvain.[340] By 1929, the Great Depression arrived, causing political chaos throughout the world.[341] In 1932 the payment of reparations was suspended by the international community, by which point Germany had only paid the equivalent of 20.598 billion gold marks in reparations.[342] With the rise of Adolf Hitler, all bonds and loans that had been issued and taken out during the 1920s and early 1930s were cancelled. David Andelman notes "refusing to pay doesn't make an agreement null and void. The bonds, the agreement, still exist." Thus, following the Second World War, at the London Conference in 1953, Germany agreed to resume payment on the money borrowed. On 3 October 2010, Germany made the final payment on these bonds.[343]

Computational hardness assumption

In cryptography, a major goal is to create cryptographic primitives with provable security. In some cases, cryptographic protocols are found to have information-theoretic security; the one-time pad is a common example. However, information-theoretic security cannot always be achieved; in such cases, cryptographers fall back to computational security. Roughly speaking, this means that these systems are secure assuming that any adversaries are computationally limited, as all adversaries are in practice. Because hardness of a problem is difficult to prove, in practice certain problems are "assumed" to be difficult.

Common cryptographic hardness assumptions

There are many common cryptographic hardness assumptions. While the difficulty of solving any of the underlying problems is unproven, some assumptions on the computational hardness are stronger than others. Note that if assumption A is stronger than assumption B, that means solving the problem underlying assumption A is polytime reducible to solving the problem underlying assumption B, which means that if B's problem is solvable in polynomial time, A's certainly is too, but the converse does not follow. When devising cryptographic protocols, one hopes to be able to prove security using the weakest possible assumptions.

This is a list of some of the most common cryptographic hardness assumptions, and some cryptographic protocols that use them.

Integer factorization
 o Rabin cryptosystem
 o Blum Blum Shub generator
 o Okamoto–Uchiyama cryptosystem
 o Hofheinz–Kiltz–Shoup cryptosystem

RSA problem (weaker than factorization)
 o RSA cryptosystem

Quadratic residuosity problem (stronger than factorization)
 o Goldwasser–Micali cryptosystem

Decisional composite residuosity assumption (stronger than factorization)
 o Paillier cryptosystem

Higher residuosity problem (stronger than factorization)
 o Benaloh cryptosystem
 o Naccache–Stern cryptosystem

Phi-hiding assumption (stronger than factorization)
 o Cachin–Micali–Stadler PIR

Discrete log problem (DLP)

Computational Diffie–Hellman assumption (CDH; stronger than DLP)
 o Diffie–Hellman key exchange

Decisional Diffie–Hellman assumption (DDH; stronger than CDH)
 o ElGamal encryption

Shortest Vector Problem
 o NTRUEncrypt
 o NTRUSign

Non-cryptographic hardness assumptions

As well as their cryptographic applications, hardness assumptions are used in computational complexity theory to provide evidence for mathematical statements that are difficult to prove unconditionally. In these applications, one proves that the hardness assumption implies some desired complexity-theoretic statement, instead of proving that the statement is itself true. The best-known assumption of this type is the assumption that P ≠ NP,[1] but others include the exponential time hypothesis[2] and the unique games conjecture.[3]

Integer factorization

"Prime decomposition" redirects here. For the prime decomposition theorem for 3-manifolds, see Prime decomposition (3-manifold).

List of unsolved problems in computer science

Can integer factorization be done in polynomial time?

In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers. If these integers are further restricted to prime numbers, the process is called prime factorization.

When the numbers are very large, no efficient, non-quantum integer factorization algorithm is known; an effort by several researchers, concluded in 2009, factored a 232-digit number (RSA-768) utilizing hundreds of machines over a span of two years.[1] However, it has not been proven that no efficient algorithm exists. The presumed difficulty of this problem is at the heart of widely used algorithms in cryptography such as RSA. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing.

Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size (but not too close, e.g., to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the primes being factored increases, the number of operations required to perform the factorization on any computer increases drastically.

Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem—for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-keycryptography insecure.

Prime decomposition

This image demonstrates the prime decomposition of 864. A shorthand way of writing the resulting prime factors is 2^5 × 3^3.

By the fundamental theorem of arithmetic, every positive integer greater than one has a unique prime factorization. A special case for one can be avoided using an appropriate notion of the empty product. However, the fundamental theorem of arithmetic gives no insight into how to obtain an integer's prime factorization; it only guarantees its existence.
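The decomposition of 864 shown above can be reproduced with a few lines of trial division. This is only an illustrative sketch (the helper name `prime_factors` is ours, not from any library), far too slow for numbers of cryptographic size:

```python
def prime_factors(n):
    """Return the prime factorization of n as an ordered list of primes.

    Trial division: repeatedly divide out the smallest remaining
    divisor, which is necessarily prime. Illustrative only.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors
```

Here `prime_factors(864)` yields `[2, 2, 2, 2, 2, 3, 3, 3]`, i.e. 2^5 × 3^3, and the fundamental theorem guarantees this is the only possible answer.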

Given a general algorithm for integer factorization, one can factor any integer down to its constituent prime factors by repeated application of this algorithm. However, this is not the case with a special-purpose factorization algorithm, since it may not apply to the smaller factors that occur during decomposition, or may execute very slowly on these values. For example, if N is the number (2^521 − 1) × (2^607 − 1), then trial division will quickly factor 10 × N as 2 × 5 × N, but will not quickly factor N into its factors.

Current state of the art

See also: integer factorization records

Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those that are products of two primes of similar size. For this reason, these are the integers used in cryptographic applications. The largest such semiprime yet factored was RSA-768, a 768-bit number with 232 decimal digits, on December 12, 2009.[1] This factorization was a collaboration of several research institutions, spanning two years and taking the equivalent of almost 2000 years of computing on a single-core 2.2 GHz AMD Opteron. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines.

Difficulty and complexity

No algorithm has been published that can factor all integers in polynomial time, i.e., that can factor b-bit numbers in time O(b^k) for some constant k. There are published algorithms that are faster than O((1+ε)^b) for all positive ε, i.e., sub-exponential.

The best published asymptotic running time is for the general number field sieve (GNFS) algorithm, which, for a b-bit number n, is:

O(exp(((64/9) b)^(1/3) (log b)^(2/3))).

For current computers, GNFS is the best published algorithm for large n (more than about 100 digits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves it in polynomial time. This will have significant implications for cryptography if quantum computation is possible. Shor's algorithm takes only O(b^3) time and O(b) space on b-bit number inputs. In 2001, the first seven-qubit quantum computer became the first to run Shor's algorithm. It factored the number 15.[2]

When discussing what complexity classes the integer factorization problem falls into, it's necessary to distinguish two slightly different versions of the problem:

The function problem version: given an integer N, find an integer d with 1 < d < N that divides N (or conclude that N is prime). This problem is trivially in FNP and it's not known whether it lies in FP or not. This is the version solved by practical implementations.

The decision problem version: given an integer N and an integer M with 1 < M < N, does N have a factor d with 1 < d ≤ M? This version is useful because most well-studied complexity classes are defined as classes of decision problems, not function problems.

For M = √N, the decision problem is equivalent to asking if N is prime.

An algorithm for either version provides one for the other. Repeated application of the function problem (applied to d and N/d, and their factors, if needed) will eventually provide either a factor of N no larger than M or a factorization into primes all greater than M. All known algorithms for the decision problem work in this way. Hence it is only of theoretical interest that, with at most log N queries using an algorithm for the decision problem, one would isolate a factor of N (or prove it prime) by binary search.
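The equivalence described above can be made concrete in a small sketch. The helper names (`smallest_factor`, `has_factor_at_most`, `factor_via_decision`) are ours, chosen for illustration: the decision version is built on top of the function version, and a factor is then recovered from the decision oracle alone by binary search, using about log N yes/no queries:

```python
def smallest_factor(n):
    """Function version: smallest nontrivial divisor of n,
    or n itself if n is prime (trial-division sketch)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def has_factor_at_most(n, m):
    """Decision version: does n have a factor d with 1 < d <= m?"""
    f = smallest_factor(n)
    return f < n and f <= m   # f == n means n is prime

def factor_via_decision(n):
    """Recover the smallest factor of n using only the decision
    oracle, via binary search over the threshold m."""
    if not has_factor_at_most(n, n - 1):
        return None            # n is prime
    lo, hi = 2, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if has_factor_at_most(n, mid):
            hi = mid           # some factor is <= mid
        else:
            lo = mid + 1       # every factor is > mid
    return lo
```

For example, `factor_via_decision(15)` returns 3 after a handful of oracle queries, illustrating why the two versions are polynomially equivalent.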

It is not known exactly which complexity classes contain the decision version of the integer factorization problem. It is known to be in both NP and co-NP. This is because both YES and NO answers can be verified in polynomial time. An answer of YES can be certified by exhibiting a factorization N = d(N/d) with d ≤ M. An answer of NO can be certified by exhibiting the factorization of N into distinct primes, all larger than M. We can verify their primality using the AKS primality test, and that their product is N by multiplication. The fundamental theorem of arithmetic guarantees that there is only one possible string that will be accepted (providing the factors are required to be listed in order), which shows that the problem is in both UP and co-UP.[3] It is known to be in BQP because of Shor's algorithm. It is suspected to be outside of all three of the complexity classes P, NP-complete, and co-NP-complete. It is therefore a candidate for the NP-intermediate complexity class. If it could be proved that it is in either NP-complete or co-NP-complete, that would imply NP = co-NP. That would be a very surprising result, and therefore integer factorization is widely suspected to be outside both of those classes. Many people have tried to find classical polynomial-time algorithms for it and failed, and therefore it is widely suspected to be outside P.

In contrast, the decision problem "is N a composite number?" (or equivalently: "is N a prime number?") appears to be much easier than the problem of actually finding the factors of N.

Specifically, the former can be solved in polynomial time (in the number n of digits of N) with the AKS primality test. In addition, there are a number of probabilistic algorithms that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease of primality testing is a crucial part of the RSA algorithm, as it is necessary to find large prime numbers to start with.
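
As a sketch of such a probabilistic test, here is a standard Miller-Rabin implementation (not the AKS test; for a composite input, the probability of a wrong answer is at most 4^(-rounds)):

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Probabilistic primality test; False means definitely composite."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True  # probably prime

print(miller_rabin(2**89 - 1))  # True: a Mersenne prime
print(miller_rabin(561))        # False: a Carmichael number
```

This is the kind of fast test used in practice when generating candidate primes for RSA keys.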

Factoring algorithms

Special-purpose

A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. Exactly what the running time depends on varies between algorithms.

An important subclass of special-purpose factoring algorithms is the Category 1 or First Category algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors.[4] For example, trial division is a Category 1 algorithm.
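
A sketch of this preprocessing step in Python (the helper name is illustrative):

```python
# Trial division as a Category 1 method: strip out all prime factors
# up to a bound before handing the cofactor to a general-purpose
# algorithm.

def strip_small_factors(n: int, bound: int):
    """Return (small_factors, cofactor) with all factors <= bound removed."""
    factors = []
    d = 2
    while d <= bound and d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2  # after 2, test odd candidates only
    return factors, n

print(strip_small_factors(2**4 * 3 * 101 * 103, 100))
# ([2, 2, 2, 2, 3], 10403) -- the cofactor 101 * 103 survives
```

The running time here is governed by the bound and by the size of the smallest factor, not by the size of n itself, which is the defining feature of Category 1 methods.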

Trial division

Wheel factorization

Pollard's rho algorithm

Algebraic-group factorisation algorithms, among which are Pollard's p − 1 algorithm, Williams' p + 1 algorithm, and Lenstra elliptic curve factorization

Fermat's factorization method

Euler's factorization method

Special number field sieve

General-purpose

A general-purpose factoring algorithm, also known as a Category 2, Second Category, or Kraitchik family algorithm (after Maurice Kraitchik),[4] has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factor RSA numbers. Most general-purpose factoring algorithms are based on the congruence of squares method.
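
The congruence-of-squares principle fits in a few lines: given x and y with x² ≡ y² (mod n) but x ≢ ±y (mod n), gcd(x − y, n) is a nontrivial factor. The brute-force search below is purely illustrative; real Kraitchik-family algorithms such as Dixon's method or the quadratic sieve assemble the congruence from many smooth relations instead:

```python
from math import gcd

def factor_from_congruence(n: int, x: int, y: int) -> int:
    """Extract a factor from a known congruence of squares."""
    assert (x * x - y * y) % n == 0
    return gcd(x - y, n)

def toy_congruence_factor(n: int) -> int:
    """Illustrative brute-force search for a usable congruence."""
    for x in range(2, n):
        r = (x * x) % n
        y = int(r ** 0.5)
        if y * y == r and x % n not in (y % n, (-y) % n):
            g = gcd(x - y, n)
            if 1 < g < n:
                return g
    return n

print(toy_congruence_factor(91))  # 7: from 10^2 = 3^2 (mod 91)
```

For n = 91 the search finds 10² ≡ 3² (mod 91), and gcd(10 − 3, 91) = 7 splits n.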

Dixon's algorithm

Continued fraction factorization (CFRAC)

Quadratic sieve

Rational sieve

General number field sieve

Shanks' square forms factorization (SQUFOF)

Other notable algorithms

Shor's algorithm, for quantum computers

Heuristic running time

In number theory, there are many integer factoring algorithms that heuristically have expected running time

L_n\left[1/2,1+o(1)\right]=e^{(1+o(1))(\log n)^{\frac{1}{2}}(\log \log n)^{\frac{1}{2}}}

in little-o and L-notation. Some examples of those algorithms are the elliptic curve method and the quadratic sieve. Another such algorithm is the class group relations method proposed by Schnorr,[5] Seysen,[6] and Lenstra,[7] whose running time is proved under the assumption of the Generalized Riemann Hypothesis (GRH).
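
To get a feel for this bound, the following snippet evaluates L_n[1/2, 1] (dropping the (1+o(1)) factor, so the numbers are only indicative) for a few input sizes:

```python
from math import exp, log

# Numeric illustration of the subexponential bound
# L_n[1/2, 1] = exp( (log n)^(1/2) * (log log n)^(1/2) ),
# ignoring the (1 + o(1)) factor.

def L_half(n: float) -> float:
    return exp((log(n) ** 0.5) * (log(log(n)) ** 0.5))

for bits in (128, 256, 512):
    n = 2.0 ** bits
    print(f"{bits:4d}-bit n:  L_n[1/2,1] ~ 2^{log(L_half(n), 2):.1f}")
```

The growth is far slower than exponential in the bit length but far faster than any polynomial, which is why these methods are called subexponential.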

Rigorous running time

The Schnorr-Seysen-Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance[8] to have expected running time L_n\left[1/2,1+o(1)\right], by replacing the GRH assumption with the use of multipliers. The algorithm uses the class group GΔ of positive binary quadratic forms of discriminant Δ. GΔ is the set of triples of integers (a, b, c) in which those integers are relatively prime.

Schnorr-Seysen-Lenstra Algorithm

Given is an integer n that will be factored, where n is an odd positive integer greater than a certain constant. In this factoring algorithm the discriminant Δ is chosen as a multiple of n, Δ = -dn, where d is some positive multiplier. The algorithm expects that for one d there exist enough smooth forms in GΔ. Lenstra and Pomerance show that the choice of d can be restricted to a small set to guarantee the smoothness result.

Denote by PΔ the set of all primes q with Kronecker symbol \left(\tfrac{\Delta}{q}\right)=1. By constructing a set of generators of GΔ and prime forms fq of GΔ with q in PΔ, a sequence of relations between the set of generators and the fq is produced. The size of q can be bounded by c_0(\log|\Delta|)^2 for some constant c_0.

The relations that will be used express a product of powers equal to the neutral element of GΔ. These relations will be used to construct a so-called ambiguous form of GΔ, which is an element of GΔ of order dividing 2. By calculating the corresponding factorization of Δ and by taking a gcd, this ambiguous form provides the complete prime factorization of n. This algorithm has these main steps:

Let n be the number to be factored.

Let Δ be a negative integer with Δ = -dn, where d is a multiplier and Δ is the negative discriminant of some quadratic form.

Take the first t primes p_1=2, p_2=3, p_3=5, \dots, p_t, for some t\in{\mathbb N}.

Let f_q be a random prime form of GΔ with \left(\tfrac{\Delta}{q}\right)=1.

Find a generating set X of GΔ.

Collect a sequence of relations between the set X and {fq : q ∈ PΔ} satisfying: \left(\prod_{x \in X} x^{r(x)}\right)\cdot\left(\prod_{q \in P_\Delta} f^{t(q)}_{q}\right) = 1

Construct an ambiguous form (a, b, c) that is an element f ∈ GΔ of order dividing 2, to obtain a coprime factorization of the largest odd divisor of Δ, in which Δ = -4ac or Δ = a(a - 4c) or Δ = (b - 2a)(b + 2a)

If the ambiguous form provides a factorization of n then stop; otherwise find another ambiguous form until the factorization of n is found. In order to prevent useless ambiguous forms from being generated, build up the 2-Sylow group S2(Δ) of G(Δ).
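
The gcd extraction in this final step can be sketched as follows; the helper name is hypothetical, and the toy example uses the ambiguous form (3, 0, 7) of discriminant -84 = -4·21 to split n = 21:

```python
from math import gcd

# Sketch: an ambiguous form (a, b, c) of discriminant D = b^2 - 4ac
# satisfies D = -4ac, D = a(a - 4c), or D = (b - 2a)(b + 2a); taking
# gcds of these pieces with n can split n when D is a multiple of n.

def split_from_ambiguous_form(n: int, a: int, b: int, c: int):
    for piece in (a, c, a - 4 * c, b - 2 * a, b + 2 * a):
        g = gcd(abs(piece), n)
        if 1 < g < n:
            return g, n // g
    return None  # this ambiguous form was useless

print(split_from_ambiguous_form(21, 3, 0, 7))  # (3, 7)
```

Forms with b = 0 square to the identity, so (3, 0, 7) is genuinely ambiguous, and gcd(3, 21) = 3 already splits n.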

To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm, such as trial division and the Jacobi sum test.

Expected running time

The algorithm as stated is a probabilistic algorithm as it makes random choices. Its expected running time is at most L_n\left[1/2,1+o(1)\right].[8]

Information-theoretic security

From Wikipedia, the free encyclopedia

A cryptosystem is information-theoretically secure if its security derives purely from information theory. That is, it cannot be broken even when the adversary has unlimited computing power. The adversary simply does not have enough information to break the encryption, so these cryptosystems are considered cryptanalytically unbreakable.

An encryption protocol that has information-theoretic security does not depend for its effectiveness on unproven assumptions about computational hardness, and such an algorithm is not vulnerable to future developments in computing power such as quantum computing. An example of an information-theoretically secure cryptosystem is the one-time pad. The concept of information-theoretically secure communication was introduced in 1949 by American mathematician Claude Shannon, the inventor of information theory, who used it to prove that the one-time pad system was secure.[1] Information-theoretically secure cryptosystems have been used for the most sensitive governmental communications, such as diplomatic cables and high-level military communications, because of the great efforts enemy governments expend toward breaking them.

An interesting special case is perfect security: an encryption algorithm is perfectly secure if a ciphertext produced using it provides no information about the plaintext without knowledge of the key. If E is a perfectly secure encryption function, for any fixed message m there must exist, for each ciphertext c, at least one key k such that c = E_k(m). It has been proved that any cipher with the perfect secrecy property must use keys with effectively the same requirements as one-time pad keys.[1]
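
For the XOR (one-time-pad) case this condition is easy to verify exhaustively on a single byte:

```python
# Perfect-security condition, checked exhaustively: for a fixed
# ciphertext c, every message m of the same length has some key k with
# E_k(m) = c. With XOR encryption, k = m XOR c always works.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

c = bytes([0x5A])  # an arbitrary fixed ciphertext byte
for m in range(256):
    k = m ^ c[0]   # the key that "explains" message m
    assert xor_bytes(bytes([m]), bytes([k])) == c
print("every 1-byte message is consistent with ciphertext 0x5a")
```

Since every plaintext is consistent with the observed ciphertext under some key, the ciphertext alone carries no information about which plaintext was sent.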

It is common for a cryptosystem to leak some information but nevertheless maintain its security properties even against an adversary that has unlimited computational resources. Such a cryptosystem would have information-theoretic but not perfect security. The exact definition of security would depend on the cryptosystem in question.

There are a variety of cryptographic tasks for which information-theoretic security is a meaningful and useful requirement. A few of these are:

Secret sharing schemes such as Shamir's are information-theoretically secure (and also perfectly secure) in that less than the requisite number of shares of the secret provide no information about the secret.

More generally, secure multiparty computation protocols often, but not always, have information-theoretic security.

Private information retrieval with multiple databases can be achieved with information-theoretic privacy for the user's query.

Reductions between cryptographic primitives or tasks can often be achieved information-theoretically. Such reductions are important from a theoretical perspective, because they establish that primitive \Pi can be realized if primitive \Pi' can be realized.

Symmetric encryption can be constructed under an information-theoretic notion of security called entropic security, which assumes that the adversary knows almost nothing about the message being sent. The goal here is to hide all functions of the plaintext rather than all information about it.

Quantum cryptography is largely part of information-theoretic cryptography.

Besides conventional methods of secrecy, which only hide the content of a message, there are scenarios requiring a stronger type of secrecy that also hides the very existence of the communication; this is called covert communication.[2]

Physical layer encryption

A weaker notion of security defined by Aaron D. Wyner established a now flourishing area of research known as physical layer encryption.[3] This exploits the physical wireless channel for its security by communications, signal processing, and coding techniques. The security is provable, unbreakable, and quantifiable (in bits/second/hertz).

Wyner's initial physical layer encryption work in the 1970s posed the Alice-Bob-Eve problem, in which Alice wants to send a message to Bob without Eve decoding it. It was shown that if the channel from Alice to Bob is statistically better than the channel from Alice to Eve, secure communication is possible.[4] This is intuitive, but Wyner measured the secrecy in information-theoretic terms, defining the secrecy capacity, which essentially is the rate at which Alice can transmit secret information to Bob. Shortly after, Imre Csiszár and János Körner showed that secret communication was possible even when Eve had a statistically better channel to Alice than Bob did.[5] More recent theoretical results are concerned with determining the secrecy capacity and optimal power allocation in broadcast fading channels.[6][7] There are caveats, as many capacities are not computable unless the assumption is made that Alice knows the channel to Eve. If this were known, Alice could simply place a null in Eve's direction. Secrecy capacity for MIMO and multiple colluding eavesdroppers is more recent and ongoing work,[8][9] and these results still make the unhelpful assumption that the eavesdropper's channel state information is known.

Still other work is less theoretical and attempts to compare implementable schemes. One physical layer encryption scheme is to broadcast artificial noise in all directions except that of Bob's channel, basically jamming Eve. One paper by Negi and Goel details the implementation, and Khisti and Wornell computed the secrecy capacity when only statistics about Eve's channel are known.[10][11]

Parallel to this work in the information theory community is work in the antenna community that has been termed near-field direct antenna modulation, or directional modulation.[12] It was shown that by using a parasitic array, the transmitted modulation in different directions could be controlled independently.[13] Secrecy could be realized by making the modulations in undesired directions difficult to decode. Directional modulation data transmission was experimentally demonstrated using a phased array.[14] Others have demonstrated directional modulation with switched arrays and phase-conjugating lenses.[15][16][17]

This type of directional modulation is really a subset of Negi and Goel's additive artificial noise encryption scheme. Another scheme using pattern-reconfigurable transmit antennas for Alice, called reconfigurable multiplicative noise (RMN), complements additive artificial noise.[18] The two work well together in channel simulations in which nothing is assumed known to Alice or Bob about the eavesdroppers.

Unconditional security

Information-theoretic security is often used interchangeably with unconditional security. However, the latter term can also refer to systems that don't rely on unproven computational hardness assumptions. Today these systems are essentially the same as those that are information-theoretically secure. Nevertheless, it does not always have to be that way. One day RSA might be proved secure (it relies on the assertion that factoring large numbers is hard), thus becoming unconditionally secure, but it will never be information-theoretically secure (because even if no efficient algorithms for factoring large numbers exist, in principle factoring can still be done given unlimited computational power).

One-time pad

[Figure: excerpt from a one-time pad]

In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked if used correctly. In this technique, a plaintext is paired with a random secret key (also referred to as a one-time pad). Then, each bit or character of the plaintext is encrypted by combining it with the corresponding bit or character from the pad using modular addition. If the key is truly random, is at least as long as the plaintext, is never reused in whole or in part, and is kept completely secret, then the resulting ciphertext will be impossible to decrypt or break.[1][2][3] It has also been proven that any cipher with the perfect secrecy property must use keys with effectively the same requirements as OTP keys.[4] However, practical problems have prevented one-time pads from being widely used.

First described by Frank Miller in 1882,[5][6] the one-time pad was re-invented in 1917. On July 22, 1919, U.S. Patent 1,310,719 was issued to Gilbert S. Vernam for the XOR operation used for the encryption of a one-time pad.[7] It is derived from the Vernam cipher, named after Gilbert Vernam, one of its inventors. Vernam's system was a cipher that combined a message with a key read from a punched tape. In its original form, Vernam's system was vulnerable because the key tape was a loop, which was reused whenever the loop made a full cycle. One-time use came later, when Joseph Mauborgne recognized that if the key tape were totally random, then cryptanalysis would be impossible.[8]

The "pad" part of the name comes from early implementations where the key material was distributed as a pad of paper, so that the top sheet could be easily torn off and destroyed after use. For ease of concealment, the pad was sometimes reduced to such a small size that a powerful magnifying glass was required to use it. The KGB used pads of such size that they could fit in the palm of one's hand,[9] or in a walnut shell.[10] To increase security, one-time pads were sometimes printed onto sheets of highly flammable nitrocellulose, so that they could be quickly burned after use.

There is some ambiguity to the term because some authors use the terms "Vernam cipher" and "one-time pad" synonymously, while others refer to any additive stream cipher as a "Vernam cipher", including those based on a cryptographically secure pseudorandom number generator (CSPRNG).[11]

History of invention

Frank Miller in 1882 was the first to describe the one-time pad system for securing telegraphy.[6][12]

The next one-time pad system was electrical. In 1917, Gilbert Vernam (of AT&T Corporation) invented and later patented in 1919 (U.S. Patent 1,310,719) a cipher based on teleprinter technology. Each character in a message was electrically combined with a character on a paper tape key. Joseph Mauborgne (then a captain in the U.S. Army and later chief of the Signal Corps) recognized that the character sequence on the key tape could be completely random and that, if so, cryptanalysis would be more difficult. Together they invented the first one-time tape system.[11]

The next development was the paper pad system. Diplomats had long used codes and ciphers for confidentiality and to minimize telegraph costs. For the codes, words and phrases were converted to groups of numbers (typically 4 or 5 digits) using a dictionary-like codebook. For added security, secret numbers could be combined (usually by modular addition) with each code group before transmission, with the secret numbers being changed periodically (this was called superencryption). In the early 1920s, three German cryptographers (Werner Kunze, Rudolf Schauffler and Erich Langlotz), who were involved in breaking such systems, realized that they could never be broken if a separate randomly chosen additive number was used for every code group. They had duplicate paper pads printed with lines of random number groups. Each page had a serial number and eight lines. Each line had six 5-digit numbers. A page would be used as a work sheet to encode a message and then destroyed. The serial number of the page would be sent with the encoded message. The recipient would reverse the procedure and then destroy his copy of the page. The German foreign office put this system into operation by 1923.[11]

A separate notion was the use of a one-time pad of letters to encode plaintext directly, as in the example below. Leo Marks describes inventing such a system for the British Special Operations Executive during World War II, though he suspected at the time that it was already known in the highly compartmentalized world of cryptography, as for instance at Bletchley Park.[13]

The final discovery was by Claude Shannon in the 1940s, who recognized and proved the theoretical significance of the one-time pad system. Shannon delivered his results in a classified report in 1945 and published them openly in 1949.[4] At the same time, Vladimir Kotelnikov had independently proven the absolute security of the one-time pad; his results were delivered in 1941 in a report that apparently remains classified.[14]

Example

Suppose Alice wishes to send the message "HELLO" to Bob. Assume two pads of paper containing identical random sequences of letters were somehow previously produced and securely issued to both. Alice chooses the appropriate unused page from the pad. The way to do this is normally arranged for in advance, as for instance 'use the 12th sheet on 1 May', or 'use the next available sheet for the next message'.

The material on the selected sheet is the key for this message. Each letter from the pad will be combined in a predetermined way with one letter of the message. (It is common, but not required, to assign each letter a numerical value, e.g., "A" is 0, "B" is 1, and so on.)

In this example, the technique is to combine the key and the message using modular addition. The numerical values of corresponding message and key letters are added together, modulo 26. So, if the key material begins with "XMCKL" and the message is "HELLO", then the coding would be done as follows:

H E L L O message

7 (H) 4 (E) 11 (L) 11 (L) 14 (O) message

+ 23 (X) 12 (M) 2 (C) 10 (K) 11 (L) key

= 30 16 13 21 25 message + key

= 4 (E) 16 (Q) 13 (N) 21 (V) 25 (Z) message + key (mod 26)

E Q N V Z → ciphertext

If a number is larger than 25, then the remainder after subtraction of 26 is taken in modular arithmetic fashion. This simply means that if the computations "go past" Z, the sequence starts again at A.

The ciphertext to be sent to Bob is thus "EQNVZ". Bob uses the matching key page and the same process, but in reverse, to obtain the plaintext. Here the key is subtracted from the ciphertext, again using modular arithmetic:

E Q N V Z ciphertext

4 (E) 16 (Q) 13 (N) 21 (V) 25 (Z) ciphertext

- 23 (X) 12 (M) 2 (C) 10 (K) 11 (L) key

= -19 4 11 11 14 ciphertext – key

= 7 (H) 4 (E) 11 (L) 11 (L) 14 (O) ciphertext – key (mod 26)

H E L L O → message

Similar to the above, if a number is negative, then 26 is added to make the number zero or higher.
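
The worked example above can be reproduced in a few lines of Python (A = 0, ..., Z = 25):

```python
# Mod-26 one-time-pad encryption and decryption, matching the
# HELLO + XMCKL = EQNVZ example worked by hand above.

def otp(text: str, key: str, decrypt: bool = False) -> str:
    sign = -1 if decrypt else 1
    return "".join(
        chr((ord(t) - 65 + sign * (ord(k) - 65)) % 26 + 65)
        for t, k in zip(text, key)
    )

ct = otp("HELLO", "XMCKL")
print(ct)                              # EQNVZ
print(otp(ct, "XMCKL", decrypt=True))  # HELLO
```

Decrypting the same ciphertext with the different key "TQURI" yields "LATER" instead, which is exactly the ambiguity exploited in the cryptanalysis discussion.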

Thus Bob recovers Alice's plaintext, the message "HELLO". Both Alice and Bob destroy the key sheet immediately after use, thus preventing reuse and an attack against the cipher. The KGB often issued its agents one-time pads printed on tiny sheets of "flash paper"—paper chemically converted to nitrocellulose, which burns almost instantly and leaves no ash.[15]

The classical one-time pad of espionage used actual pads of minuscule, easily concealed paper, a sharp pencil, and some mental arithmetic. The method can be implemented now as a software program, using data files as input (plaintext), output (ciphertext) and key material (the required random sequence). The XOR operation is often used to combine the plaintext and the key elements, and is especially attractive on computers since it is usually a native machine instruction and is therefore very fast. However, it is difficult to ensure that the key material is actually random, is used only once, never becomes known to the opposition, and is completely destroyed after use. The auxiliary parts of a software one-time pad implementation present real challenges: secure handling/transmission of plaintext, truly random keys, and one-time-only use of the key.

Attempt at cryptanalysis

To continue the example from above, suppose Eve intercepts Alice's ciphertext: "EQNVZ". If Eve had infinite time, she would find that the key "XMCKL" would produce the plaintext "HELLO", but she would also find that the key "TQURI" would produce the plaintext "LATER", an equally plausible message:

4 (E) 16 (Q) 13 (N) 21 (V) 25 (Z) ciphertext

− 19 (T) 16 (Q) 20 (U) 17 (R) 8 (I) possible key

= −15 0 −7 4 17 ciphertext-key

= 11 (L) 0 (A) 19 (T) 4 (E) 17 (R) ciphertext-key (mod 26)

In fact, it is possible to "decrypt" the ciphertext into any message whatsoever with the same number of characters, simply by using a different key, and there is no information in the ciphertext which will allow Eve to choose among the various possible readings of the ciphertext.

Perfect secrecy

One-time pads are "information-theoretically secure" in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length[16] of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon at about the same time. His result was published in the Bell Labs Technical Journal in 1949.[17] Properly used one-time pads are secure in this sense even against adversaries with infinite computational power.

Claude Shannon proved, using information theory considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext. This is because, given a truly random key which is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely. Thus, the a priori probability of a plaintext message M is the same as the a posteriori probability of a plaintext message M given the corresponding ciphertext. Mathematically, this is expressed as H(M) = H(M|C), where H(M) is the entropy of the plaintext and H(M|C) is the conditional entropy of the plaintext given the ciphertext C. Perfect secrecy is a strong notion of cryptanalytic difficulty.[4]
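
The identity H(M) = H(M|C) can be checked exhaustively for a toy one-time pad over a two-letter alphabet with a deliberately biased message prior:

```python
from collections import Counter
from math import log2

# Exhaustive check of H(M) = H(M | C) for a one-time pad over {A, B},
# with a non-uniform message prior and a uniform, independent key.

prior = {"A": 0.9, "B": 0.1}   # P(M)
keys = ["A", "B"]              # uniform one-character keys

def enc(m, k):                 # mod-2 addition on the alphabet {A, B}
    return "AB"[("AB".index(m) + "AB".index(k)) % 2]

# joint distribution P(M, C)
joint = Counter()
for m, pm in prior.items():
    for k in keys:
        joint[(m, enc(m, k))] += pm / len(keys)

def H(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

pc = Counter()                 # marginal P(C)
for (m, c), p in joint.items():
    pc[c] += p

H_M = H(prior)
H_M_given_C = H(joint) - H(pc)  # chain rule: H(M|C) = H(M,C) - H(C)
print(round(H_M, 6), round(H_M_given_C, 6))  # equal values
```

The two entropies coincide (about 0.469 bits here), so observing the ciphertext leaves the attacker's uncertainty about the message exactly where it started.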

Conventional symmetric encryption algorithms use complex patterns of substitution and transposition. For the best of these currently in use, it is not known whether there can be a cryptanalytic procedure which can reverse (or, usefully, partially reverse) these transformations without knowing the key used during encryption. Asymmetric encryption algorithms depend on mathematical problems that are thought to be difficult to solve, such as integer factorization and discrete logarithms. However, there is no proof that these problems are hard, and a mathematical breakthrough could make existing systems vulnerable to attack.

Given perfect secrecy, in contrast to conventional symmetric encryption, OTP is immune even to brute-force attacks. Trying all keys simply yields all plaintexts, all equally likely to be the actual plaintext. Even with known plaintext, like part of the message being known, brute-force attacks cannot be used, since an attacker is unable to gain any information about the parts of the key needed to decrypt the rest of the message.

Problems

Despite Shannon's proof of its security, the one-time pad has serious drawbacks in practice because it requires:

Truly random (as opposed to pseudorandom) one-time pad values, which is a non-trivial requirement. See Pseudorandom number generator.

Secure generation and exchange of the one-time pad values, which must be at least as long as the message. (The security of the one-time pad is only as strong as the security of the pad exchange.)

Careful treatment to make sure that it continues to remain secret, and is disposed of correctly, preventing any reuse in whole or part—hence "one time". See data remanence for a discussion of difficulties in completely erasing computer media.

The theoretical perfect security of the one-time-pad applies only in a theoretically perfect setting; no real-world implementation of any cryptosystem can provide perfect security because practical considerations introduce potential vulnerabilities.

One-time pads solve few current practical problems in cryptography. High-quality ciphers are widely available and their security is not considered a major worry at present. Such ciphers are almost always easier to employ than one-time pads; the amount of key material which must be properly generated and securely distributed is far smaller, and public-key cryptography overcomes this problem.[18]

Key distribution

Further information: Key distribution

Because the pad, like all shared secrets, must be passed and kept secure, and the pad has to be at least as long as the message, there is often no point in using one-time padding, as one can simply send the plain text instead of the pad (as both can be the same size and have to be sent securely). However, once a very long pad has been securely sent (e.g., a computer disk full of random data), it can be used for numerous future messages, until the sum of their sizes equals the size of the pad. Quantum key distribution also proposes a solution to this problem.

Distributing very long one-time pad keys is inconvenient and usually poses a significant security risk.[1] The pad is essentially the encryption key, but unlike keys for modern ciphers, it must be extremely long and is much too difficult for humans to remember. Storage media such as thumb drives, DVD-Rs or personal digital audio players can be used to carry a very large one-time-pad from place to place in a non-suspicious way, but even so the need to transport the pad physically is a burden compared to the key negotiation protocols of a modern public-key cryptosystem, and such media cannot reliably be erased securely by any means short of physical destruction (e.g., incineration). A 4.7 GB DVD-R full of one-time-pad data, if shredded into particles 1 mm² in size, leaves over 4 megabits of (admittedly hard to recover, but not impossibly so) data on each particle. In addition, the risk of compromise during transit (for example, a pickpocket swiping, copying and replacing the pad) is likely much greater in practice than the likelihood of compromise for a cipher such as AES. Finally, the effort needed to manage one-time pad key material scales very badly for large networks of communicants—the number of pads required goes up as the square of the number of users freely exchanging messages. For communication between only two persons, or a star network topology, this is less of a problem.

The key material must be securely disposed of after use, to ensure the key material is never reused and to protect the messages sent.[1] Because the key material must be transported from one endpoint to another, and persist until the message is sent or received, it can be more vulnerable to forensic recovery than the transient plaintext it protects (see data remanence).

Authentication

As traditionally used, one-time pads provide no message authentication, the lack of which can pose a security threat in real-world systems. The straightforward XORing with the keystream, or the use of any invertible function known to the attacker, such as mod-26 addition, creates a potential vulnerability in message integrity. For example, an attacker who knows that the message contains "meet jane and me tomorrow at three thirty pm" at a particular point can replace that content with any other content of exactly the same length, such as "three thirty meeting is cancelled, stay home", without having access to the one-time pad, a property of all stream ciphers known as malleability.[19] See also stream cipher attack.

Standard techniques to prevent this, such as the use of a message authentication code, can be used along with a one-time pad system to prevent such attacks, as can classical methods such as variable-length padding and Russian copulation, but they all lack the perfect security the OTP itself has. Universal hashing provides a way to authenticate messages up to an arbitrary security bound (i.e., for any p > 0, a large enough hash ensures that even a computationally unbounded attacker's likelihood of successful forgery is less than p), but this uses additional random data from the pad, and removes the possibility of implementing the system without a computer.

True randomness

High-quality random numbers are difficult to generate. The random number generation functions in most programming language libraries are not suitable for cryptographic use. Even those generators that are suitable for normal cryptographic use, including /dev/random and many hardware random number generators, make some use of cryptographic functions whose security is unproven.

In particular, one-time use is absolutely necessary. If a one-time pad is used just twice, simple mathematical operations can reduce it to a running key cipher. If both plaintexts are in a natural language (e.g., English, Russian or Irish) then, even though both are secret, each stands a very high chance of being recovered by heuristic cryptanalysis, with possibly a few ambiguities. Of course the longer message can only be broken for the portion that overlaps the shorter message, plus perhaps a little more by completing a word or phrase. The most famous exploit of this vulnerability occurred with the Venona project.[20]
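
A quick sketch of the key-reuse leak: encrypting two plaintexts with the same pad hands the attacker their XOR, from which the key has vanished entirely:

```python
import os

# The "two-time pad" leak: c1 XOR c2 = p1 XOR p2, a running-key-style
# combination of the two plaintexts that heuristic cryptanalysis can
# often unravel when both are natural language.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn"
p2 = b"retreat at ten"            # same length, key reused
key = os.urandom(len(p1))
c1, c2 = xor(p1, key), xor(p2, key)

leak = xor(c1, c2)
print(leak == xor(p1, p2))        # True: the key has dropped out
```

This is exactly the structure that the Venona cryptanalysts exploited against reused Soviet pads.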

Uses

Applicability

Any digital data storage device can be used to transport one-time pad data.

Despite its problems, the one-time pad retains some practical interest. In some hypothetical espionage situations, the one-time pad might be useful because it can be computed by hand with only pencil and paper. Indeed, nearly all other high-quality ciphers are entirely impractical without computers. Spies can receive their pads in person from their "handlers." In the modern world, however, computers (such as those embedded in personal electronic devices such as mobile phones) are so ubiquitous that possessing a computer suitable for performing conventional encryption (for example, a phone that can run concealed cryptographic software) will usually not attract suspicion.

The one-time pad is the optimal cryptosystem, offering theoretically perfect secrecy.

The one-time pad is one of the most practical methods of encryption where one or both parties must do all work by hand, without the aid of a computer. This made it important in the pre-computer era, and it could conceivably still be useful in situations where possession of a computer is illegal or incriminating or where trustworthy computers are not available.

One-time pads are practical in situations where two parties in a secure environment must be able to depart from one another and communicate from two separate secure environments with perfect secrecy.

The one-time pad can be used in superencryption.[21]

The algorithm most commonly associated with quantum key distribution is the one-time pad.

The one-time pad is mimicked by stream ciphers.
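The sense in which a stream cipher mimics the one-time pad can be sketched as follows; SHAKE-256 stands in as the keystream generator purely for illustration (not a vetted cipher construction), and the key and nonce sizes are arbitrary:

```python
import hashlib
import os

def stream_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # A stream cipher imitates the one-time pad by XORing the message with
    # a *pseudorandom* keystream expanded from a short key, trading the
    # OTP's information-theoretic security for a manageable key size.
    ks = hashlib.shake_256(key + nonce).digest(len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

key, nonce = os.urandom(32), os.urandom(16)
msg = b"one-time pad, but from a short key"
ct = stream_encrypt(key, nonce, msg)

# Encryption and decryption are the same XOR operation, exactly as with the OTP.
assert stream_encrypt(key, nonce, ct) == msg
```

The structural resemblance is exact; the difference is that the keystream is now only computationally, not perfectly, indistinguishable from random.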

The one-time pad can be a part of an introduction to cryptography.[22]

Historical uses

One-time pads have been used in special circumstances since the early 1900s. In 1923, the method was employed for diplomatic communications by the German diplomatic establishment.[23] The Weimar Republic Diplomatic Service began using the method in about 1920. The breaking of poor Soviet cryptography by the British, with messages made public for political reasons in two instances in the 1920s, appears to have induced the U.S.S.R. to adopt one-time pads for some purposes by around 1930. KGB spies are also known to have used pencil-and-paper one-time pads more recently. Examples include Colonel Rudolf Abel, who was arrested and convicted in New York City in the 1950s, and the 'Krogers' (i.e., Morris and Lona Cohen), who were arrested and convicted of espionage in the United Kingdom in the early 1960s. Both were found with physical one-time pads in their possession.

A number of nations have used one-time pad systems for their sensitive traffic. Leo Marks reports that the British Special Operations Executive used one-time pads in World War II to encode traffic between its offices. One-time pads for use with its overseas agents were introduced late in the war.[13] A few British one-time tape cipher machines include the Rockex and Noreen. The German Stasi Sprach Machine was also capable of using one-time tape, which East Germany, Russia, and even Cuba used to send encrypted messages to their agents.[24]

The World War II voice scrambler SIGSALY was also a form of one-time system. It added noise to the signal at one end and removedit at the other end. The noise was distributed to the channel ends in the form of large shellac records which were manufactured in unique pairs. There were both starting synchronization and longer-term phase drift problems which arose and were solved before the system could be used.

The NSA describes one-time tape systems like SIGTOT and 5-UCO as being used for intelligence traffic until the introduction of the electronic cipher-based KW-26 in 1957.[25]

The hotline between Moscow and Washington, D.C., established in 1963 after the Cuban missile crisis, used teleprinters protected by a commercial one-time tape system. Each country prepared the keying tapes used to encode its messages and delivered them via its embassy in the other country. A unique advantage of the OTP in this case was that neither country had to reveal more sensitive encryption methods to the other.[26]

During the 1983 Invasion of Grenada, U.S. forces found a supply of pairs of one-time pad books in a Cuban warehouse.[27]

Starting in 1988, the African National Congress (ANC) used disk-based one-time pads as part of a secure communication system between ANC leaders outside South Africa and in-country operatives, as part of Operation Vula, a successful effort to build a resistance network inside South Africa. Random numbers on the disk were erased after use. A Belgian airline stewardess acted as courier to bring in the pad disks. A regular resupply of new disks was needed, as they were used up fairly quickly. One problem with the system was that it could not be used for secure data storage. Later, Vula added a stream cipher keyed by book codes to solve this problem.[28]

A related notion is the one-time code—a signal used only once (e.g., "Alpha" for "mission completed", "Bravo" for "mission failed", or even "Torch" for "Allied invasion of French Northern Africa"[29]) that cannot be "decrypted" in any reasonable sense of the word. Understanding the message will require additional information, often 'depth' of repetition, or some traffic analysis. However, such strategies (though often used by real operatives, and baseball coaches) are not a cryptographic one-time pad in any significant sense.

Exploits

While one-time pads provide perfect secrecy if generated and used properly, small mistakes can lead to successful cryptanalysis:

In 1944–1945, the U.S. Army's Signals Intelligence Service was able to solve a one-time pad system used by the German Foreign Office for its high-level traffic, codenamed GEE.[30] GEE was insecure because the pads were not completely random—the machine used to generate the pads produced predictable output.

In 1945, the US discovered that Canberra–Moscow messages were being encrypted first using a code-book and then using a one-time pad. However, the one-time pad used was the same one used by Moscow for Washington, DC–Moscow messages. Combined with the fact that some of the Canberra–Moscow messages included known British government documents, this allowed some of the encrypted messages to be broken.

One-time pads were employed by Soviet espionage agencies for covert communications with agents and agent controllers. Analysis has shown that these pads were generated by typists using actual typewriters. This method is of course not truly random, as it makes certain convenient key sequences more likely than others, yet it proved to be generally effective: while a person will not produce truly random sequences, they equally do not follow the same kind of structured mathematical rules that a machine would, and each person generates ciphers in a different way, making any message challenging to attack. Without copies of the key material used, only some defect in the generation method or reuse of keys offered much hope of cryptanalysis.

Espionage

Espionage or, casually, spying involves a government, company, or individual obtaining information considered secret or confidential without the permission of the holder of the information.[1] Espionage is inherently clandestine, as it is taken for granted that it is unwelcome and in many cases illegal and punishable by law. It is a subset of "intelligence gathering", which otherwise may be conducted from public sources and using perfectly legal and ethical means. It is crucial to distinguish espionage from "intelligence" gathering, as the latter does not necessarily involve espionage, but often collates open-source information.

Espionage is often part of an institutional effort by a government or commercial concern. However, the term is generally associated with state spying on potential or actual enemies, primarily for military purposes. Spying involving corporations is known as industrial espionage.

One of the most effective ways to gather data and information about the enemy (or potential enemy) is by infiltrating the enemy's ranks. This is the job of the spy (espionage agent). Spies can bring back all sorts of information concerning the size and strength of enemy forces. They can also find dissidents within the enemy's forces and influence them to defect. In times of crisis, spies can also be used to steal technology and to sabotage the enemy in various ways. Counterintelligence operatives can feed false information to enemy spies, protecting important domestic secrets and preventing attempts at subversion. Nearly every country has very strict laws concerning espionage, and the penalty for being caught is often severe. However, the benefits that can be gained through espionage are generally great enough that most governments and many large corporations make use of it to varying degrees.

Further information on clandestine HUMINT (human intelligence)information collection techniques is available, including discussions of operational techniques, asset recruiting, and the tradecraft used to collect this information.

History

Early history

A bamboo version of The Art of War, written by Sun-Tzu and containing advice on espionage tactics.

Events involving espionage are well documented throughout history. The ancient writings of Chinese and Indian military strategists such as Sun-Tzu and Chanakya contain information on deception and subversion. Chanakya's student Chandragupta Maurya, founder of the Maurya Empire in India, made use of assassinations, spies and secret agents, which are described in Chanakya's Arthasastra. The ancient Egyptians had a thoroughly developed system for the acquisition of intelligence, and the Hebrews used spies as well, as in the story of Rahab. Spies were also prevalent in the Greek and Roman empires.[2] During the 13th and 14th centuries, the Mongols relied heavily on espionage in their conquests in Asia and Europe. Feudal Japan often used ninja to gather intelligence.

The Aztecs used pochtecas, people in charge of commerce, as spies and diplomats, who had diplomatic immunity. Along with the pochteca, before a battle or war, secret agents, quimitchin, were sent to spy amongst enemies, usually wearing the local costume and speaking the local language, techniques similar to those of modern secret agents.[3]

Many modern espionage methods were established by Francis Walsingham in Elizabethan England.[4] Walsingham's staff in England included the cryptographer Thomas Phelippes, who was an expert in deciphering letters and forgery, and Arthur Gregory, who was skilled at breaking and repairing seals without detection.[5]

In 1585, Mary, Queen of Scots was placed in the custody of Sir Amias Paulet, who was instructed to open and read all of Mary's clandestine correspondence.[5] In a successful attempt to entrap her, Walsingham arranged a single exception: a covert means for Mary's letters to be smuggled in and out of Chartley in a beer keg. Mary was misled into thinking these secret letters were secure, while in reality they were deciphered and read by Walsingham's agents.[5] He succeeded in intercepting letters that indicated a conspiracy to displace Elizabeth I with Mary, Queen of Scots.

In foreign intelligence, Walsingham's extensive network of "intelligencers", who passed on general news as well as secrets, spanned Europe and the Mediterranean.[5] While foreign intelligence was a normal part of the principal secretary's activities, Walsingham brought to it flair and ambition, and large sums of his own money.[6] He cast his net more widely than others had done previously: expanding and exploiting links across the continent as well as in Constantinople and Algiers, and building and inserting contacts among Catholic exiles.[5]

Modern development

Political cartoon depicting the Afghan Emir Sher Ali with his "friends" the Russian Bear and British Lion (1878). The Great Game saw the rise of systematic espionage and surveillance throughout the region by both powers.

Modern tactics of espionage and dedicated government intelligence agencies were developed over the course of the late 19th century. A key background to this development was the Great Game, a period denoting the strategic rivalry and conflict that existed between the British Empire and the Russian Empire throughout Central Asia. To counter Russian ambitions in the region and the potential threat it posed to the British position in India, a system of surveillance, intelligence and counterintelligence was built up in the Indian Civil Service. The existence of this shadowy conflict was popularised in Rudyard Kipling's famous spy book, Kim, where he portrayed the Great Game (a phrase he popularised) as an espionage and intelligence conflict that 'never ceases, day or night'.

Although the techniques originally used were distinctly amateurish - British agents would often pose unconvincingly as botanists or archaeologists - more professional tactics and systems were slowly put in place. In many respects, it was here that a modern intelligence apparatus, with permanent bureaucracies for internal and foreign infiltration and espionage, was first developed. A pioneering cryptographic unit was established as early as 1844 in India, which achieved some important successes in decrypting Russian communications in the area.[7]

The establishment of dedicated intelligence organizations was directly linked to the colonial rivalries between the major European powers and the accelerating development of military technology.

An early source of military intelligence was the diplomatic system of military attachés (an officer attached to the diplomatic service operating through the embassy in a foreign country), that became widespread in Europe after the Crimean War. Although officially restricted to a role of transmitting openly received information, they were soon being used to clandestinely gather confidential information and in some cases even to recruit spies and to operate de facto spy rings.

Military Intelligence

Seal of the Evidenzbureau, military intelligence service of the Austrian Empire.

Shaken by the revolutionary years 1848–1849, the Austrian Empire founded the Evidenzbureau in 1850 as the first permanent military intelligence service. It was first used in the 1859 Austro-Sardinian War and the 1866 campaign against Prussia, albeit with little success. The bureau collected intelligence of military relevance from various sources into daily reports to the Chief of Staff (Generalstabschef) and weekly reports to Emperor Franz Joseph. Sections of the Evidenzbureau were assigned different regions; the most important one was aimed against Russia.

During the Crimean War, the Topographical & Statistic Department (T&SD) was established within the British War Office as an embryonic military intelligence organization. The department initially focused on the accurate mapmaking of strategically sensitive locations and the collation of militarily relevant statistics. After the deficiencies in the British army's performance during the war became known, a large-scale reform of army institutions was overseen by Edward Cardwell. As part of this, the T&SD was reorganized as the Intelligence Branch of the War Office in 1873 with the mission to "collect and classify all possible information relating to the strength, organization etc. of foreign armies... to keep themselves acquainted with the progress made by foreign countries in military art and science..."[8]

The French Ministry of War authorized the creation of the Deuxième Bureau on June 8, 1871, a service charged with performing "research on enemy plans and operations."[9] This was followed a year later by the creation of a military counter-espionage service. It was this latter service that was discredited through its actions over the notorious Dreyfus Affair, where a French Jewish officer was falsely accused of handing over military secrets to the Germans. As a result of the political division that ensued, responsibility for counter-espionage was moved to the civilian control of the Ministry of the Interior.

Field Marshal Helmuth von Moltke established a military intelligence unit, Abteilung (Section) IIIb, within the German General Staff in 1889, which steadily expanded its operations into France and Russia. The Italian Ufficio Informazioni del Comando Supremo was put on a permanent footing in 1900. After Russia's defeat in the Russo-Japanese War of 1904–05, Russian military intelligence was reorganized under the 7th Section of the 2nd Executive Board of the great imperial headquarters.[10]

Naval Intelligence

It was not just the army that felt a need for military intelligence. Soon, naval establishments were demanding similar capabilities from their national governments to allow them to keep abreast of technological and strategic developments in rival countries.

The Naval Intelligence Division was set up as the independent intelligence arm of the British Admiralty in 1882 (initially as the Foreign Intelligence Committee) and was headed by Captain William Henry Hall.[11] The division was initially responsible for fleet mobilization and war plans as well as foreign intelligence collection; in the 1900s two further responsibilities - issues of strategy and defence and the protection of merchant shipping - were added.

Naval intelligence originated in the same year in the US and was founded by the Secretary of the Navy, William H. Hunt, "...for the purpose of collecting and recording such naval information as may be useful to the Department in time of war, as well as in peace." This was followed in October 1885 by the Military Information Division, the first standing military intelligence agency of the United States, with the duty of collecting military data on foreign nations.[12]

In 1900, the Imperial German Navy established the Nachrichten-Abteilung, which was devoted to gathering intelligence on Britain. The navies of Italy, Russia and Austria-Hungary set up similar services as well.

Civil intelligence agencies

William Melville helped establish the first independent intelligence agency, the British Secret Service, and was appointed as its first chief.

Integrated intelligence agencies run directly by governments were also established. The British Secret Service Bureau was founded in 1909 as the first independent and interdepartmental agency in full control over all government espionage activities.

At a time of widespread and growing anti-German feeling and fear, plans were drawn up for an extensive offensive intelligence system to be used as an instrument in the event of a European war. Due to intense lobbying from William Melville, and after he obtained German mobilization plans and proof of German financial support to the Boers, the government authorized the creation of a new intelligence section in the War Office, MO3 (subsequently redesignated MO5), headed by Melville, in 1903. Working under cover from a flat in London, Melville ran both counterintelligence and foreign intelligence operations, capitalizing on the knowledge and foreign contacts he had accumulated during his years running Special Branch.

Due to its success, the Government Committee on Intelligence, with support from Richard Haldane and Winston Churchill, established the Secret Service Bureau in 1909. It consisted of nineteen military intelligence departments - MI1 to MI19 - but MI5 and MI6 came to be the most recognized, as they are the only ones to have remained active to this day.

The Bureau was a joint initiative of the Admiralty, the War Office and the Foreign Office to control secret intelligence operations in the UK and overseas, particularly concentrating on the activities of the Imperial German Government. Its first director was Captain Sir George Mansfield Smith-Cumming, alias "C".[13] In 1910, the bureau was split into naval and army sections which, over time, specialised in foreign espionage and internal counter-espionage activities respectively. The Secret Service initially focused its resources on gathering intelligence on German shipbuilding plans and operations. Espionage activity in France was consciously refrained from, so as not to jeopardize the burgeoning alliance between the two nations.

For the first time, the government had access to a peace-time, centralized independent intelligence bureaucracy with indexed registries and defined procedures, as opposed to the more ad hoc methods used previously. Instead of a system whereby rival departments and military services would work on their own priorities with little to no consultation or cooperation with each other, the newly established Secret Intelligence Service was interdepartmental, and submitted its intelligence reports to all relevant government departments.[14]

Counter-intelligence

The Okhrana was founded in 1880 and was tasked with countering enemy espionage. St. Petersburg Okhrana group photo, 1905.

As espionage became more widely used, it became imperative to expand the role of existing police and internal security forces into a role of detecting and countering foreign spies. The Austro-Hungarian Evidenzbureau was entrusted with the role from the late 19th century to counter the actions of the Pan-Slavist movement operating out of Serbia.

As mentioned above, after the fallout from the Dreyfus Affair in France, responsibility for military counter-espionage was passed in 1899 to the Sûreté générale - an agency originally responsible for order enforcement and public safety - and overseen by the Ministry of the Interior.[9]

The Okhrana[15] was initially formed in 1880 to combat political terrorism and left-wing revolutionary activity throughout the Russian Empire, but was also tasked with countering enemy espionage.[16] Its main concern was the activities of revolutionaries, who often worked and plotted subversive actions from abroad. It created an antenna in Paris, run by Pyotr Rachkovsky, to monitor their activities. The agency used many methods to achieve its goals, including covert operations, undercover agents, and "perlustration" — the interception and reading of private correspondence. The Okhrana became notorious for its use of agents provocateurs, who often succeeded in penetrating the activities of revolutionary groups, including the Bolsheviks.[17]

In Britain, the Secret Service Bureau was split into a foreign and a domestic counter-intelligence service in 1910. The latter was headed by Sir Vernon Kell and was originally aimed at calming public fears of large-scale German espionage.[18] As the Service was not authorized with police powers, Kell liaised extensively with the Special Branch of Scotland Yard (headed by Basil Thomson), and succeeded in disrupting the work of Indian revolutionaries collaborating with the Germans during the war.

First World War

Cover of the Petit Journal of 20 January 1895, covering the arrest of Captain Alfred Dreyfus for espionage and treason. The case convulsed France and raised public awareness of the rapidly developing world of espionage.

By the outbreak of the First World War all the major powers had highly sophisticated structures in place for the training and handling of spies and for the processing of the intelligence information obtained through espionage. The figure and mystique of the spy had also developed considerably in the public eye. The Dreyfus Affair, which involved international espionage and treason, contributed much to public interest in espionage.[19][20]

The spy novel emerged as a distinct genre in the late 19th century, dealing with themes such as colonial rivalry, the growing threat of conflict in Europe, and the revolutionary and anarchist domestic threat. The "spy novel" was defined by The Riddle of the Sands (1903) by British author Robert Erskine Childers, which played on public fears of a German plan to invade Britain (the nefarious plot is uncovered by an amateur spy). Its success was followed by a flood of imitators, including William Le Queux and E. Phillips Oppenheim.

It was during the war that modern espionage techniques were honed and refined, as all belligerent powers utilized their intelligence services to obtain military intelligence, commit acts of sabotage, and carry out propaganda. As the progress of the war became static and armies dug down in trenches, the utility of cavalry reconnaissance became very limited.[21]

Information gathered at the battlefront from the interrogation of prisoners-of-war was only capable of giving insight into local enemy actions of limited duration. Obtaining high-level information on the enemy's strategic intentions, its military capabilities, and deployment required undercover spy rings operating deep in enemy territory. On the Western Front the advantage lay with the Western Allies, as throughout most of the war German armies occupied Belgium and parts of northern France, thereby providing a large and disaffected population that could be organized into collecting and transmitting vital intelligence.[21]

British and French intelligence services recruited Belgian or French refugees and infiltrated these agents behind enemy lines via the Netherlands - a neutral country. Many collaborators were then recruited from the local population, who were mainly driven by patriotism and hatred of the harsh German occupation. By the end of the war, over 250 networks had been created, comprising more than 6,400 Belgian and French citizens. These rings concentrated on infiltrating the German railway network so that the allies could receive advance warning of strategic troop and ammunition movements.[21]

Mata Hari was a famous Dutch dancer who was executed on charges of espionage for Germany. Pictured at her arrest.

The most effective such ring in German-occupied Belgium was the Dame Blanche ("White Lady") network, founded in 1916 by Walthère Dewé as an underground intelligence network. It supplied as much as 75% of the intelligence collected from occupied Belgium and northern France to the Allies. By the end of the war, its 1,300 agents covered all of occupied Belgium, northern France and, through a collaboration with Louise de Bettignies' network, occupied Luxembourg. The network was able to provide a crucial few days' warning before the launch of the German 1918 Spring Offensive.[22]

German intelligence was only ever able to recruit a very small number of spies. These were trained at an academy run by the Kriegsnachrichtenstelle in Antwerp and headed by Elsbeth Schragmüller, known as "Fräulein Doktor". These agents were generally isolated and unable to rely on a large support network for the relaying of information. The most famous German spy was Margaretha Geertruida Zelle, an exotic Dutch dancer with the stage name Mata Hari. As a Dutch subject, she was able to cross national borders freely. In 1916, she was arrested and brought to London, where she was interrogated at length by Sir Basil Thomson, Assistant Commissioner at New Scotland Yard. She eventually claimed to be working for French intelligence. In fact, she had entered German service in 1915, and sent her reports to the mission in the German embassy in Madrid.[23] In January 1917, the German military attaché in Madrid transmitted radio messages to Berlin describing the helpful activities of a German spy code-named H-21. French intelligence agents intercepted the messages and, from the information they contained, identified H-21 as Mata Hari. She was executed by firing squad on 15 October 1917.

German spies in Britain did not meet with much success - the German spy ring operating in Britain was successfully disrupted by MI5 under Vernon Kell on the day after the declaration of the war. Home Secretary Reginald McKenna announced that "within the last twenty-four hours no fewer than twenty-one spies, or suspected spies, have been arrested in various places all over the country, chiefly in important military or naval centres, some of them long known to the authorities to be spies".[24][25]

One exception was Jules C. Silber, who evaded MI5 investigations and obtained a position at the censor's office in 1914. Using mailed window envelopes that had already been stamped and cleared he was able to forward microfilm to Germany that contained increasingly important information. Silber was regularly promoted and ended up in the position of chief censor, which enabled him to analyze all suspect documents.[26]

The British economic blockade of Germany was made effective through the support of spy networks operating out of neutral Netherlands. Points of weakness in the naval blockade were determined by agents on the ground and relayed back to the Royal Navy. The blockade led to severe food deprivation in Germany and was a major cause in the collapse of the Central Powers war effort in 1918.[27]

Codebreaking

Page 445: Tugas tik di kelas xi ips 3

The interception and decryption of the Zimmermann telegram by Room 40 at the Admiralty was of pivotal importance for the outcome of the war.

Two new methods for intelligence collection were developed over the course of the war - aerial reconnaissance and photography, and the interception and decryption of radio signals.[27] The British rapidly built up great expertise in the newly emerging field of signals intelligence and codebreaking.

In 1911, a committee of the Committee of Imperial Defence on cable communications concluded that in the event of war with Germany, German-owned submarine cables should be destroyed. On the night of 3 August 1914, the cable ship Alert located and cut Germany's five trans-Atlantic cables, which ran down the English Channel. Soon after, the six cables running between Britain and Germany were cut.[28] As an immediate consequence, there was a significant increase in messages sent via cables belonging to other countries, and via wireless. These could now be intercepted, but codes and ciphers were naturally used to hide the meaning of the messages, and neither Britain nor Germany had any established organisations to decode and interpret them. At the start of the war, the navy had only one wireless station for intercepting messages, at Stockton. However, installations belonging to the Post Office and the Marconi Company, as well as private individuals who had access to radio equipment, began recording messages from Germany.[29]

Room 40, under Director of Naval Education Alfred Ewing and formed in October 1914, was the section in the British Admiralty most identified with the British cryptanalysis effort during the First World War. The basis of Room 40 operations revolved around a German naval codebook, the Signalbuch der Kaiserlichen Marine (SKM), and around maps (containing coded squares), which were obtained from three different sources in the early months of the war. Alfred Ewing directed Room 40 until May 1917, when direct control passed to Captain (later Admiral) Reginald 'Blinker' Hall, assisted by William Milbourne James.[30]

A similar organisation began in the Military Intelligence department of the War Office, which became known as MI1b, and Colonel Macdonagh proposed that the two organisations should work together, decoding messages concerning the Western Front. A sophisticated interception system (known as the 'Y' service), together with the Post Office and Marconi stations, grew rapidly to the point where it could intercept almost all official German messages.[29]

As the number of intercepted messages increased, it became necessary to decide which were unimportant and should just be logged, and which should be passed on outside Room 40. The German fleet was in the habit each day of wirelessing the exact position of each ship and giving regular position reports when at sea. It was possible to build up a precise picture of the normal operation of the High Seas Fleet, and indeed to infer from the routes they chose where defensive minefields had been placed and where it was safe for ships to operate. Whenever a change to the normal pattern was seen, it immediately signalled that some operation was about to take place, and a warning could be given. Detailed information about submarine movements was available.[31]

Page 447: Tugas tik di kelas xi ips 3

Both the British and German interception services began to experiment with direction-finding radio equipment at the start of 1915. Captain H. J. Round, working for Marconi, had been carrying out experiments for the army in France, and Hall instructed him to build a direction-finding system for the navy. Stations were built along the coast, and by May 1915 the Admiralty was able to track German submarines crossing the North Sea. Some of these stations also acted as 'Y' stations to collect German messages, but a new section was created within Room 40 to plot the positions of ships from the directional reports. The German fleet made no attempt to restrict its use of wireless until 1917, and then only in response to perceived British use of direction finding, not because it believed messages were being decoded.[32]

Room 40 played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea that led to the battles of Dogger Bank and Jutland, as the British fleet was sent out to intercept them. However, its most important contribution was probably in decrypting the Zimmermann Telegram, a telegram from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico.

In the telegram's plaintext, Nigel de Grey and William Montgomery learned of German Foreign Minister Arthur Zimmermann's offer to Mexico of the United States' territories of Arizona, New Mexico, and Texas as an enticement to join the war as a German ally. The telegram was passed to the U.S. by Captain Hall, and a scheme was devised (involving a still unknown agent in Mexico and a burglary) to conceal both how its plaintext had become available and how the U.S. had gained possession of a copy. The telegram was made public by the United States, which declared war on Germany on 6 April 1917, entering the war on the Allied side.[33] This effectively demonstrated how the course of a war could be changed by effective intelligence operations.

Russian Revolution


From the London Evening Standard's Master Spy serial: Reilly, disguised as a member of the Cheka, bluffs his way through a Red Army checkpoint.

The outbreak of revolution in Russia and the subsequent seizure of power by the Bolsheviks, a party deeply hostile towards the capitalist powers, was an important catalyst for the development of modern international espionage techniques. A key figure was Sidney Reilly, a Russian-born adventurer and secret agent employed by Scotland Yard and the Secret Intelligence Service. He set the standard for modern espionage, turning it from a gentleman's amateurish game to a ruthless and professional methodology for the achievement of military and political ends.

Reilly's remarkable and varied career culminated in an audacious attempt to depose the Bolshevik Government and assassinate Vladimir Ilyich Lenin.[34]

In May 1918, Robert Bruce Lockhart,[35] an agent of the British Secret Intelligence Service, and Reilly repeatedly met Boris Savinkov, head of the counter-revolutionary Union for the Defence of the Motherland and Freedom (UDMF). Lockhart and Reilly then contacted anti-Bolshevik groups linked to Savinkov and supported these factions with SIS funds.[36] In June, disillusioned members of the Latvian Riflemen began appearing in anti-Bolshevik circles in Petrograd and were eventually directed to Captain Cromie, a British naval attaché, and Mr. Constantine, a Turkish merchant who was actually Reilly. Reilly believed their participation in the pending coup to be vital and arranged their meeting with Lockhart at the British mission in Moscow. At this stage, Reilly planned a coup against the Bolshevik government and drew up a list of Soviet military leaders ready to assume responsibilities on the fall of the Bolshevik government.[36]

Paul Dukes was knighted for his achievements in the Secret Intelligence Service.

On 17 August, Reilly conducted meetings between Latvian regimental leaders and liaised with Captain George Hill, another British agent operating in Russia. Hill had managed to establish a network of 10 secure houses around Moscow and a professional courier network that reached across northern Russia, allowing him to smuggle top-secret documents from Moscow to Stockholm to London in days. They agreed the coup would occur in the first week of September, during a meeting of the Council of People's Commissars and the Moscow Soviet at the Bolshoi Theatre. However, on the eve of the coup, unexpected events thwarted the operation. Fanya Kaplan shot and wounded Lenin, triggering the "Red Terror": the Cheka implicated all malcontents in a grand conspiracy that warranted a full-scale campaign. Using lists supplied by undercover agents, the Cheka arrested those involved in Reilly's pending coup, raided the British Embassy in Petrograd, killed Francis Cromie, and arrested Lockhart.[36]

Another pivotal figure was Sir Paul Dukes, arguably the first professional spy of the modern age.[37] Recruited personally by Mansfield Smith-Cumming to act as a secret agent in Imperial Russia, he set up elaborate plans to help prominent White Russians escape from Soviet prisons after the Revolution and smuggled hundreds of them into Finland. Known as the "Man of a Hundred Faces," Dukes continued his use of disguises, which aided him in assuming a number of identities and gained him access to numerous Bolshevik organizations. He successfully infiltrated the Communist Party of the Soviet Union, the Comintern, and the political police, or CHEKA. Dukes also learned of the inner workings of the Politburo, and passed the information to British intelligence.

In the course of a few months, Dukes, Hill and Reilly succeeded in infiltrating Lenin's inner circle and gaining access to the activities of the Cheka and the Communist International at the highest level. This helped to convince the government of the importance of a well-funded secret intelligence service in peacetime as a key component in formulating foreign policy.[37] Winston Churchill argued that intercepted communications were more useful "as a means of forming a true judgement of public policy than any other source of knowledge at the disposal of the State."[38]

Today

Today, espionage agencies target the illegal drug trade and terrorists as well as state actors. Since 2008, the United States has charged at least 57 defendants with attempting to spy for China.[39]

Different intelligence services value certain intelligence collection techniques over others. The former Soviet Union, for example, preferred human sources over research in open sources, while the United States has tended to emphasize technological methods such as SIGINT and IMINT. Both Soviet political (KGB) and military intelligence (GRU)[40] officers were judged by the number of agents they recruited.

Targets of espionage

Espionage agents are usually trained experts in a specific targeted field, so they can differentiate mundane information from targets of intrinsic value to their own organisational development. Correct identification of the target at its execution is the sole purpose of the espionage operation.[41]

Broad areas of espionage targeting expertise include:[42]

Natural resources: strategic production identification and assessment (food, energy, materials). Agents are usually found among bureaucrats who administer these resources in their own countries.

Popular sentiment towards domestic and foreign policies (popular, middle class, elites). Agents are often recruited from field journalistic crews, exchange postgraduate students and sociology researchers.

Strategic economic strengths (production, research, manufacture, infrastructure). Agents are recruited from science and technology academia, commercial enterprises, and, more rarely, from among military technologists.

Military capability intelligence (offensive, defensive, maneuver, naval, air, space). Agents are trained by special military espionage education facilities and posted to an area of operation with covert identities to minimize prosecution.

Counterintelligence operations specifically targeting opponents' intelligence services themselves, such as breaching the confidentiality of communications and recruiting defectors or moles.

Methods and terminology

Although the news media may speak of "spy satellites" and the like, espionage is not a synonym for all intelligence-gathering disciplines. It is a specific form of human source intelligence (HUMINT). Codebreaking (cryptanalysis or COMINT), aircraft or satellite photography (IMINT) and research in open publications (OSINT) are all intelligence-gathering disciplines, but none of them is considered espionage. Many HUMINT activities, such as prisoner interrogation, reports from military reconnaissance patrols and from diplomats, etc., are not considered espionage. Espionage is the disclosure of sensitive (classified) information to people who are not cleared for that information or for access to it.

Unlike other intelligence collection disciplines, espionage usually involves accessing the place where the desired information is stored, or accessing the people who know the information and will divulge it through some kind of subterfuge. There are exceptions to physical meetings, such as the Oslo Report, or the insistence of Robert Hanssen on never meeting the people who bought his information.

The US defines espionage towards itself as "the act of obtaining, delivering, transmitting, communicating, or receiving information about the national defense with an intent, or reason to believe, that the information may be used to the injury of the United States or to the advantage of any foreign nation". Black's Law Dictionary (1990) defines espionage as "... gathering, transmitting, or losing ... information related to the national defense". Espionage is a violation of United States law (18 U.S.C. §§ 792–798 and Article 106a of the Uniform Code of Military Justice).[43] The United States, like most nations, conducts espionage against other nations, under the control of the National Clandestine Service. Britain's espionage activities are controlled by the Secret Intelligence Service.

Technology and techniques

See also: Tradecraft and List of intelligence gathering disciplines

Agent handling
Concealment device
Covert agent
Covert listening device
Cut-out
Cyber spying
Dead drop
False flag operations
Honeypot
Interrogation
Non-official cover
Numbers messaging
Official cover
One-way voice link
Safe house
Side channel attack
Steganography
Surveillance
Surveillance aircraft

[44]

Organization

An intelligence officer's clothing, accessories, and behavior must be as unremarkable as possible — their lives (and others') may depend on it.

A spy is a person employed to seek out top secret information from a source. Within the United States Intelligence Community, "asset" is a more common usage. A case officer, who may have diplomatic status (i.e., official cover or non-official cover), supports and directs the human collector. Cut-outs are couriers who do not know the agent or case officer but transfer messages. A safe house is a refuge for spies. Spies often seek to obtain secret information from another source.

In larger networks the organization can be complex, with many methods to avoid detection, including clandestine cell systems. Often the players have never met. Case officers are stationed in foreign countries to recruit and to supervise intelligence agents, who in turn spy on targets in the countries where they are assigned. A spy need not be a citizen of the target country, and hence does not automatically commit treason when operating within it. While the more common practice is to recruit a person already trusted with access to sensitive information, sometimes a person with a well-prepared synthetic identity (cover background), called a legend in tradecraft,[citation needed] may attempt to infiltrate a target organization.

These agents can be moles (who are recruited before they get access to secrets), defectors (who are recruited after they get access to secrets and leave their country) or defectors in place (who get access but do not leave).

A legend is also employed for an individual who is not an illegal agent, but is an ordinary citizen who is "relocated", for example, a "protected witness". Nevertheless, such a non-agent very likely will also have a case officer who will act as a controller. As in most, if not all, synthetic identity schemes, for whatever purpose (illegal or legal), the assistance of a controller is required.

Spies may also be used to spread disinformation in the organization in which they are planted, such as giving false reports about their country's military movements, or about a competing company's ability to bring a product to market. Spies may be given other roles that also require infiltration, such as sabotage.

Many governments routinely spy on their allies as well as their enemies, although they typically maintain a policy of not commenting on this. Governments also employ private companies to collect information on their behalf, such as SCG International Risk, International Intelligence Limited and others.

Many organizations, both national and non-national, conduct espionage operations. It should not be assumed that espionage is always directed at the most secret operations of a target country. National and terrorist organizations and other groups are also targets.[45] This is because governments want to retrieve information that they can use to be proactive in protecting their nation from potential terrorist attacks.


Communications are both necessary to espionage and clandestine operations and a great vulnerability when the adversary has sophisticated SIGINT detection and interception capability. Agents must also transfer money securely.[45]

Industrial espionage

Main article: Industrial espionage

Reportedly, Canada is losing $12 billion[46] and German companies are estimated to be losing about €50 billion ($87 billion) and 30,000 jobs[47] to industrial espionage every year.

Agents in espionage

In espionage jargon, an "agent" is the person who does the spying: a citizen of one country who is recruited by a second country to spy on or work against his own country or a third country. In popular usage, this term is often erroneously applied to a member of an intelligence service who recruits and handles agents; in espionage such a person is referred to as an intelligence officer, intelligence operative or case officer. There are several types of agent in use today.

Double agent: "a person who engages in clandestine activity for two intelligence or security services (or more in joint operations), who provides information about one or about each to the other, and who wittingly withholds significant information from one on the instructions of the other or is unwittingly manipulated by one so that significant facts are withheld from the adversary. Peddlers, fabricators, and others who work for themselves rather than a service are not double agents because they are not agents. The fact that doubles have an agent relationship with both sides distinguishes them from penetrations, who normally are placed with the target service in a staff or officer capacity."[48]

Re-doubled agent: an agent who gets caught as a double agent and is forced to mislead the foreign intelligence service.

Unwitting double agent: an agent who offers or is forced to recruit as a double or re-doubled agent and in the process is recruited by either a third-party intelligence service or his own government without the knowledge of the intended target intelligence service or the agent. This can be useful in capturing important information from an agent attempting to seek allegiance with another country. The double agent usually has knowledge of both intelligence services and can identify the operational techniques of both, thus making third-party recruitment difficult or impossible. The knowledge of operational techniques can also affect the relationship between the Operations Officer (or case officer) and the agent if the case is transferred by an Operational Targeting Officer to a new Operations Officer, leaving the new officer vulnerable to attack. This type of transfer may occur when an officer has completed his term of service or when his cover is blown.

Triple agent: an agent who works for three intelligence services.

Intelligence agent: Provides access to sensitive information through the use of special privileges. If used in corporate intelligence gathering, this may include gathering information on a corporate business venture or stock portfolio. In economic intelligence, "Economic Analysts may use their specialized skills to analyze and interpret economic trends and developments, assess and track foreign financial activities, and develop new econometric and modeling methodologies."[49] This may also include trade or tariff information.

Access agent: Provides access to other potential agents by providing profiling information that can help lead to recruitment into an intelligence service.

Agent of influence: Someone who may provide political influence in an area of interest or may even provide publications needed to further an intelligence service's agenda, for instance by using the media to print a story that misleads a foreign service into action, exposing its operations while under surveillance.


Agent provocateur: This type of agent instigates trouble, or may provide information to gather as many people as possible into one location for an arrest.

Facilities agent: A facilities agent may provide access to buildings, such as garages or offices used for staging operations, resupply, etc.

Principal agent: This agent functions as a handler for an established network of agents, usually "Blue Chip".

Confusion agent: May provide misleading information to an enemy intelligence service or attempt to discredit the operations of the target in an operation.

Sleeper agent: A sleeper agent is a person who is recruited by an intelligence service to wake up and perform a specific set of tasks or functions while living under cover in an area of interest. This type of agent is not the same as a deep-cover operative, who continually contacts a case officer to file intelligence reports. A sleeper agent is not in contact with anyone until activated.

Illegal agent: This is a person who lives in another country under false credentials and does not report to a local station. A non-official cover operative is a type of cover used by an intelligence operative and can be dubbed an "illegal"[50] when working in another country without diplomatic protection.

Law

Espionage is a crime under the legal code of many nations. In the United States it is covered by the Espionage Act of 1917. The risks of espionage vary. A spy breaking the host country's laws may be deported, imprisoned, or even executed. A spy breaking his or her own country's laws can be imprisoned for espionage and/or treason (which in the USA and some other jurisdictions can only occur if he or she takes up arms or aids the enemy against his or her own country during wartime), or even executed, as the Rosenbergs were. For example, when Aldrich Ames handed a stack of dossiers on U.S. Central Intelligence Agency (CIA) agents in the Eastern Bloc to his KGB-officer "handler", the KGB "rolled up" several networks, and at least ten people were secretly shot. When Ames was arrested by the U.S. Federal Bureau of Investigation (FBI), he faced life in prison; his contact, who had diplomatic immunity, was declared persona non grata and taken to the airport. Ames's wife was threatened with life imprisonment if her husband did not cooperate; he did, and she was given a five-year sentence. Hugh Francis Redmond, a CIA officer in China, spent nineteen years in a Chinese prison for espionage—and died there—as he was operating without diplomatic cover and immunity.[51]

In United States law, treason,[52] espionage,[53] and spying[54] are separate crimes. Treason and espionage have graduated punishment levels.

During World War I, the United States passed the Espionage Act of 1917. Over the years, many spies, such as the Soble spy ring, Robert Lee Johnson, the Rosenberg ring, Aldrich Hazen Ames,[55] Robert Philip Hanssen,[56] Jonathan Pollard, John Anthony Walker, James Hall III, and others have been prosecuted under this law.

Use against non-spies

However, espionage laws are also used to prosecute non-spies. In the United States, the Espionage Act of 1917 was used against socialist politician Eugene V. Debs (at that time the act had much stricter guidelines and, amongst other things, banned speech against military recruiting). The law was later used to suppress publication of periodicals, for example those of Father Coughlin in World War II. In the early 21st century, the act was used to prosecute whistleblowers such as Thomas Andrews Drake, John Kiriakou, and Edward Snowden, as well as officials who communicated with journalists for innocuous reasons, such as Stephen Jin-Woo Kim.[57][58]

As of 2012, India and Pakistan were holding several hundred prisoners of each other's country for minor violations like trespass or visa overstay, often with accusations of espionage attached. Some of these include cases where Pakistan and India both deny citizenship to these people, leaving them stateless. The BBC reported in 2012 on one such case, that of Mohammed Idrees, who was held under Indian police control for approximately 13 years for overstaying his 15-day visa by 2–3 days after seeing his ill parents in 1999. Much of the 13 years was spent in prison waiting for a hearing, and more time was spent homeless or living with generous families. The Indian People's Union for Civil Liberties and Human Rights Law Network both decried his treatment. The BBC attributed some of the problems to tensions caused by the Kashmir conflict.[59]

Espionage laws in the UK

Espionage is illegal in the UK under the Official Secrets Acts of 1911 and 1920. UK law under this legislation treats espionage as actions that "intend to help an enemy and deliberately harm the security of the nation". According to MI5, a person commits the crime of espionage if they, "for any purpose prejudicial to the safety or interests of the State": approach, enter or inspect a prohibited area; make documents such as plans that are intended, calculated, or could directly or indirectly be of use to an enemy; or "obtains, collects, records, or publishes, or communicates to any other person any secret official code word, or pass word, or any sketch, plan, model, article, or note, or other document which is calculated to be or might be or is intended to be directly or indirectly useful to an enemy". The illegality of espionage also covers any action which may be considered 'preparatory to' spying, or encouraging or aiding another to spy.[60]

An individual convicted of espionage can be imprisoned for up to 14 years in the UK, although multiple sentences can be issued.

Government intelligence laws and their distinction from espionage

Government intelligence is very much distinct from espionage, and is not illegal in the UK, provided that the organisations or individuals are registered, often with the ICO, and are acting within the restrictions of the Regulation of Investigatory Powers Act (RIPA). 'Intelligence' is considered legally as "information of all sorts gathered by a government or organisation to guide its decisions. It includes information that may be both public and private, obtained from many different public or secret sources. It could consist entirely of information from either publicly available or secret sources, or be a combination of the two."[61]

However, espionage and intelligence can be linked. According to the MI5 website, "foreign intelligence officers acting in the UK under diplomatic cover may enjoy immunity from prosecution. Such persons can only be tried for spying (or, indeed, any criminal offence) if diplomatic immunity is waivedbeforehand. Those officers operating without diplomatic cover have no such immunity from prosecution".

There are also laws surrounding government and organisational intelligence and surveillance. Generally, the body involved should be issued with some form of warrant or permission from the government, and should be enacting its procedures in the interest of protecting national security or the safety of public citizens. Those carrying out intelligence missions should act within not only RIPA, but also the Data Protection Act and Human Rights Act. However, there are specific spy equipment laws and legal requirements around intelligence methods that vary for each form of intelligence enacted.

Military conflicts

French spy captured during the Franco-Prussian War.

In military conflicts, espionage is considered permissible, as many nations recognize the inevitability of opposing sides seeking intelligence about each other's dispositions. To make missions easier and more successful, soldiers or agents wear disguises to conceal their true identity from the enemy while penetrating enemy lines for intelligence gathering. However, if they are caught behind enemy lines in disguise, they are not entitled to prisoner-of-war status and are subject to prosecution and punishment—including execution.


The Hague Convention of 1907 addresses the status of wartime spies, specifically within "Laws and Customs of War on Land" (Hague IV) of October 18, 1907, Chapter II, "Spies".[62] Article 29 states that a person is considered a spy who, acting clandestinely or on false pretenses, infiltrates enemy lines with the intention of acquiring intelligence about the enemy and communicating it to the belligerent during times of war. Soldiers who penetrate enemy lines in proper uniforms for the purpose of acquiring intelligence are not considered spies but are lawful combatants entitled to be treated as prisoners of war upon capture by the enemy. Article 30 states that a spy captured behind enemy lines may only be punished following a trial. However, Article 31 provides that if a spy successfully rejoins his own military and is then captured by the enemy as a lawful combatant, he cannot be punished for his previous acts of espionage and must be treated as a prisoner of war. Note that this provision does not apply to citizens who committed treason against their own country or co-belligerents of that country; they may be captured and prosecuted at any place or any time, regardless of whether they rejoined the military to which they belong, and during or after the war.[63][64]

Escaping prisoners of war and downed airmen are excluded from being treated as spies while behind enemy lines, as international law distinguishes between a disguised spy and a disguised escaper.[44] It is permissible for these groups to wear enemy uniforms or civilian clothes in order to facilitate their escape back to friendly lines, so long as they do not attack enemy forces, collect military intelligence, or engage in similar military operations while so disguised.[65][66] Soldiers wearing enemy uniforms or civilian clothes simply for the sake of warmth or other purposes, rather than to engage in espionage or similar military operations while so attired, are also excluded from being treated as unlawful combatants.[44]

Saboteurs are treated as spies, as they too wear disguises behind enemy lines for the purpose of waging destruction on the enemy's vital targets, in addition to intelligence gathering.[67]

[68] For example, during World War II, eight German agents entered the U.S. in June 1942 as part of Operation Pastorius, a sabotage mission against U.S. economic targets. Two weeks later, all were arrested in civilian clothes by the FBI thanks to two German agents betraying the mission to the U.S. Under the Hague Convention of 1907, these Germans were classified as spies and tried by a military tribunal in Washington, D.C.[69] On August 3, 1942, all eight were found guilty and sentenced to death. Five days later, six were executed by electric chair at the District of Columbia jail. Two who had given evidence against the others had their sentences reduced by President Franklin D. Roosevelt to prison terms. In 1948, they were released by President Harry S. Truman and deported to the American Zone of occupied Germany.

The U.S. codification of enemy spies is Article 106 of the Uniform Code of Military Justice. This provides a mandatory death sentence if a person captured in the act is proven to be"lurking as a spy or acting as a spy in or about any place, vessel, or aircraft, within the control or jurisdiction of anyof the armed forces, or in or about any shipyard, any manufacturing or industrial plant, or any other place or institution engaged in work in aid of the prosecution of the war by the United States, or elsewhere".[70]

List of famous spies

See also: Intelligence agency, Special Operations Executive and United States government security breaches

Howard Burnham (1915)


FBI file photo of the leader of the Duquesne Spy Ring (1941)

Reign of Elizabeth I of England

Sir Francis Walsingham
Christopher Marlowe

American Revolution

Thomas Knowlton, the first American spy
Nathan Hale
John André
James Armistead
Benjamin Tallmadge, case agent who organized the Culper Spy Ring in New York City

Napoleonic Wars

Charles-Louis Schulmeister
William Wickham

American Civil War

One of the innovations in the American Civil War was the Union's use of proprietary companies for intelligence collection; see Allan Pinkerton.

Confederate Secret Service
Belle Boyd[71]

Aceh War

The Dutch professor Snouck Hurgronje, a world-leading authority on Islam, was a proponent of espionage to quell Muslim resistance in Aceh in the Dutch East Indies. In his role as Colonial Advisor on Oriental Affairs he gathered intelligence under the name "Haji Abdul Ghaffar". He used his knowledge of Islamic and Aceh culture to devise strategies that significantly helped crush the resistance of the Aceh inhabitants and impose Dutch colonial rule, ending the 40-year Aceh War. Casualty estimates ranged between 50,000 and 100,000 inhabitants dead and about a million wounded.

Christiaan Snouck Hurgronje

Second Boer War

Fritz Joubert Duquesne
Sidney Reilly

Russo-Japanese War

Sidney Reilly
Ho Liang-Shung
Akashi Motojiro

World War I

See also: Espionage in Norway during World War I

Fritz Joubert Duquesne
Jules C. Silber
Mata Hari
Howard Burnham
T. E. Lawrence
Sidney Reilly
Maria de Victorica

Eleven German spies were executed in the Tower of London during World War I.[72]

Carl Hans Lody, executed on 6 November 1914 in the Miniature Rifle Range.

Carl Frederick Muller, executed on 23 June 1915 in the Miniature Rifle Range. Prepared bullets were used by the execution party.

Haicke Marinus Janssen and Willem Johannes Roos, both executed on 30 July 1915 in the Tower ditch.

Ernst Waldemar Melin, executed on 10 September 1915 in the Miniature Rifle Range.

Augusto Alfredo Roggen, executed on 17 September 1915 in the Miniature Rifle Range.

Fernando Buschman, executed on 19 October 1915 in the Miniature Rifle Range.

George Traugott Breeckow, otherwise known as Reginald Rowland or George T. Parker, executed on 26 October 1915 in the Miniature Rifle Range. He worked with Lizzie Louise Wertheim, who was sentenced to ten years' penal servitude; she was certified insane on 17 January 1918 and died in the Broadmoor criminal lunatic asylum on 29 July 1920.

Irving Guy Ries, executed on 27 October 1915 in the Miniature Rifle Range.

Albert Mayer, executed on 2 December 1915 in the Miniature Rifle Range.

Ludovico Hurwitz-y-Zender, executed on 11 April 1916 in the Miniature Rifle Range.

Carl Hans Lody has his own grave and black headstone in the East London Cemetery, Plaistow. The others are buried about 150 yards away under a small memorial stone alongside a pathway.

World War II

Imagined German intelligence officer thanks British forces for giving away details of operations (Graham & Gillies Advertising)


Informants were common in World War II. In November 1939, the German Hans Ferdinand Mayer sent what is called the Oslo Report to inform the British of German technology and projects in an effort to undermine the Nazi regime. The Réseau AGIR was a French network developed after the fall of France that reported the start of construction of V-weapon installations in Occupied France to the British.

Counterespionage included the use of turned Double Cross agents to misinform Nazi Germany of impact points during the Blitz and internment of Japanese in the US against "Japan's wartime spy program". Additional WWII espionage examples include Soviet spying on the US Manhattan project, the German Duquesne Spy Ring convicted in the US, and the Soviet Red Orchestra spying on Nazi Germany. The US lacked a specific agency at the start of the war, but quickly formed the Office of Strategic Services (OSS).

Spying has sometimes been considered a gentlemanly pursuit, with recruiting focused on military officers, or at least on persons of the class from whom officers are recruited. However, the demand for male soldiers, an increase in women's rights, and the tactical advantages of female spies led the British Special Operations Executive (SOE) to set aside any lingering Victorian-era prejudices and begin employing women in April 1942.[73] Their task was to transmit information from Nazi-occupied France back to Allied forces. The main strategic reason was that men in France faced a high risk of being interrogated by Nazi troops, whereas women were less likely to arouse suspicion. They therefore made good couriers and proved equal to, if not more effective than, their male counterparts. Their participation in organization and radio operation was also vital to the success of many operations, including the main network between Paris and London.

See also: Clandestine HUMINT asset recruiting § Love, honeypots and recruitment

Post World War II

Further information: Cold War espionage


In the United States, there are seventeen[74] federal agencies that form the United States Intelligence Community. The Central Intelligence Agency operates the National Clandestine Service (NCS)[75] to collect human intelligence and perform covert operations.[76] The National Security Agency collects signals intelligence. Originally the CIA spearheaded the Intelligence Community. Following the September 11 attacks, the Office of the Director of National Intelligence (ODNI) was created to promote information sharing.

Kim Philby Ray Mawby

Spy fiction

Main article: Spy fiction


An early example of espionage literature is Kim by the English novelist Rudyard Kipling, with a description of the training of an intelligence agent in the Great Game between the UK and Russia in 19th-century Central Asia. An even earlier work was James Fenimore Cooper's classic novel The Spy, written in 1821, about an American spy in New York during the Revolutionary War.

During the many 20th-century spy scandals, much information became publicly known about national spy agencies and dozens of real-life secret agents. These sensational stories piqued public interest in a profession largely off-limits to human-interest news reporting, a natural consequence of the secrecy inherent in their work. To fill in the blanks, the popular conception of the secret agent has been formed largely by 20th- and 21st-century literature and cinema. Attractive and sociable real-life agents such as Valerie Plame find little employment in serious fiction, however. The fictional secret agent is more often a loner, sometimes amoral, an existential hero operating outside the everyday constraints of society. Loner spy personalities may have been a stereotype of convenience for authors who already knew how to write loner private-investigator characters that sold well from the 1920s to the present.

Johnny Fedora achieved popularity as a fictional agent of early Cold War espionage, but James Bond is the most commercially successful of the many spy characters created by intelligence insiders during that struggle. His less fantastic rivals include le Carré's George Smiley and Harry Palmer as played by Michael Caine. Most post-Vietnam-era characters were reportedly modeled after the American C. C. Taylor, said to be the last sanctioned "asset" of the U.S. government. Taylor, a true "Double 0 agent", worked alone and would travel as an American or Canadian tourist or businessman throughout Europe and Asia; he was used extensively in the Middle East toward the end of his career. Taylor received his weapons training from Carlos Hathcock, holder of a record 93 confirmed kills from WWII through the Vietnam conflict. According to documents made available through the Freedom of Information Act, his operations were classified as "NOC", or Non-Official Cover.

Jumping on the spy bandwagon, other writers also began producing spy fiction featuring female spies as protagonists, such as The Baroness, which has more graphic action and sex than novels featuring male protagonists.

Spy fiction has also made its way into the video-game world, notably in Hideo Kojima's Metal Gear Solid series.

Espionage has also made its way into comedy depictions. The 1960s TV series Get Smart portrays an inept spy, while the 1985 movie Spies Like Us depicts a pair of none-too-bright men sent to the Soviet Union to investigate a missile.

World War II: 1939–1945

Babington-Smith, Constance. Air Spy: The Story of Photo Intelligence in World War II. 1957.

Bryden, John. Best-Kept Secret: Canadian Secret Intelligence in the Second World War. Lester, 1993.

Hinsley, F. H., and Alan Stripp. Codebreakers: The Inside Story of Bletchley Park. 2001.

Hinsley, F. H. British Intelligence in the Second World War. 1996. Abridged version of the multivolume official history.

Höhne, Heinz. Canaris: Hitler's Master Spy. 1979.

Jones, R. V. The Wizard War: British Scientific Intelligence 1939–1945. 1978.

Kahn, David. Hitler's Spies: German Military Intelligence in World War II. 1978.

Kahn, David. Seizing the Enigma: The Race to Break the German U-Boat Codes, 1939–1943. 1991.

Kitson, Simon. The Hunt for Nazi Spies: Fighting Espionage in Vichy France. 2008.

Lewin, Ronald. The American Magic: Codes, Ciphers and the Defeat of Japan. 1982.

Masterman, J. C. The Double Cross System in the War of 1939 to 1945. Yale, 1972.

Persico, Joseph. Roosevelt's Secret War: FDR and World War II Espionage. 2001.

Persico, Joseph. Casey: The Lives and Secrets of William J. Casey, From the OSS to the CIA. 1991.

Ronnie, Art. Counterfeit Hero: Fritz Duquesne, Adventurer and Spy. 1995. ISBN 1-55750-733-3.

Sayers, Michael, and Albert E. Kahn. Sabotage! The Secret War Against America. 1942.

Smith, Richard Harris. OSS: The Secret History of America's First Central Intelligence Agency. 2005.

Stanley, Roy M. World War II Photo Intelligence. 1981.

Wark, Wesley. The Ultimate Enemy: British Intelligence and Nazi Germany, 1933–1939. 1985.

Wark, Wesley. "Cryptographic Innocence: The Origins of Signals Intelligence in Canada in the Second World War". Journal of Contemporary History 22. 1987.

West, Nigel. Secret War: The Story of SOE, Britain's Wartime Sabotage Organization. 1992.

Winterbotham, F. W. The Ultra Secret. Harper & Row, 1974.

Winterbotham, F. W. The Nazi Connection. Harper & Row, 1978.

Cowburn, B. No Cloak No Dagger. Brown, Watson, Ltd., 1960.

Wohlstetter, Roberta. Pearl Harbor: Warning and Decision. 1962.

Cold War era: 1945–1991

Ambrose, Stephen E. Ike's Spies: Eisenhower and the Intelligence Establishment. 1981.

Andrew, Christopher, and Vasili Mitrokhin. The Sword and the Shield: The Mitrokhin Archive and the Secret History of the KGB. Basic Books, 1999. ISBN 0-465-00311-7.

Andrew, Christopher, and Oleg Gordievsky. KGB: The Inside Story of Its Foreign Operations from Lenin to Gorbachev. 1990.

Aronoff, Myron J. The Spy Novels of John le Carré: Balancing Ethics and Politics. 1999.

Bissell, Richard. Reflections of a Cold Warrior: From Yalta to the Bay of Pigs. 1996.

Bogle, Lori, ed. Cold War Espionage and Spying. 2001. Essays.

Andrew, Christopher, and Vasili Mitrokhin. The World Was Going Our Way: The KGB and the Battle for the Third World.

Andrew, Christopher, and Vasili Mitrokhin. The Mitrokhin Archive: The KGB in Europe and the West. Gardners Books, 2000. ISBN 978-0-14-028487-4.

Colella, Jim. My Life as an Italian Mafioso Spy. 2000.

Craig, R. Bruce. Treasonable Doubt: The Harry Dexter White Spy Case. University Press of Kansas, 2004. ISBN 978-0-7006-1311-3.

Dorril, Stephen. MI6: Inside the Covert World of Her Majesty's Secret Intelligence Service. 2000.

Dziak, John J. Chekisty: A History of the KGB. 1988.

Gates, Robert M. From the Shadows: The Ultimate Insider's Story of Five Presidents and How They Won the Cold War. 1997.

Frost, Mike, and Michel Gratton. Spyworld: Inside the Canadian and American Intelligence Establishments. Doubleday Canada, 1994.

Haynes, John Earl, and Harvey Klehr. Venona: Decoding Soviet Espionage in America. 1999.

Helms, Richard. A Look over My Shoulder: A Life in the Central Intelligence Agency. 2003.

Koehler, John O. Stasi: The Untold Story of the East German Secret Police. 1999.

Persico, Joseph. Casey: The Lives and Secrets of William J. Casey, From the OSS to the CIA. 1991.

Murphy, David E., Sergei A. Kondrashev, and George Bailey. Battleground Berlin: CIA vs. KGB in the Cold War. 1997.

Prados, John. Presidents' Secret Wars: CIA and Pentagon Covert Operations Since World War II. 1996.

Rositzke, Harry. The CIA's Secret Operations: Espionage, Counterespionage, and Covert Action. 1988.

Srodes, James. Allen Dulles: Master of Spies. Regnery, 2000. CIA head to 1961.

Sontag, Sherry, and Christopher Drew. Blind Man's Bluff: The Untold Story of American Submarine Espionage. Harper, 1998.

Encyclopedia of Cold War Espionage, Spies and Secret Operations. Greenwood Press/Questia,[77] 2004.

Only the reuse of keys offered much hope of cryptanalysis. Beginning in the late 1940s, US and UK intelligence agencies were able to break some of the Soviet one-time pad traffic to Moscow during WWII as a result of errors made in generating and distributing the key material. One suggestion is that Moscow Centre personnel were somewhat rushed by the presence of German troops just outside Moscow in late 1941 and early 1942, and that they produced more than one copy of the same key material during that period. This decades-long effort was finally codenamed VENONA (BRIDE had been an earlier name); it produced a considerable amount of information, including more than a little about some of the Soviet atom spies. Even so, only a small percentage of the intercepted messages were either fully or partially decrypted (a few thousand out of several hundred thousand).[31]
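Why duplicated key material is fatal to a one-time pad can be shown in a short sketch. This is a minimal illustration with hypothetical messages, not the actual VENONA method (which also had to contend with codebooks and Russian-language traffic): XORing two ciphertexts produced under the same key cancels the key entirely, leaving the XOR of the two plaintexts, which can then be attacked by guessing probable words ("crib-dragging").

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two hypothetical plaintexts of equal length.
p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT TONIGH"

# A proper one-time pad key: truly random and used exactly once.
key = os.urandom(len(p1))

c1 = xor_bytes(p1, key)
c2 = xor_bytes(p2, key)   # error: the same key material is used twice

# The key cancels out: c1 XOR c2 equals p1 XOR p2, regardless of the key.
combined = xor_bytes(c1, c2)
assert combined == xor_bytes(p1, p2)

# Crib-dragging: correctly guessing a likely word in one message
# immediately reveals the corresponding part of the other.
crib = b"ATTACK"
print(xor_bytes(combined[:len(crib)], crib))  # b'RETREA'
```

With used-once keys, each ciphertext is information-theoretically secure; the attack above exists only because the key appears in both messages, which is exactly the duplication error described for the Soviet traffic.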

Sedition



In law, sedition is overt conduct, such as speech and organization, that tends toward insurrection against the established order. Sedition often includes subversion of a constitution and incitement of discontent (or resistance) to lawful authority. Sedition may include any commotion, though not aimed at direct and open violence against the laws. Seditious words in writing are seditious libel. A seditionist is one who engages in or promotes the interests of sedition.


Typically, sedition is considered a subversive act, and the overt acts that may be prosecutable under sedition laws vary from one legal code to another. Where the history of these legal codes has been traced, there is also a record of the change in the definition of the elements constituting sedition at certain points in history. This overview has served to develop a sociological definition of sedition as well, within the study of state persecution.

Contents

1 History in common law jurisdictions

1.1 Australia

1.2 Canada

1.3 Hong Kong

1.4 India

1.5 Malaysia

1.6 New Zealand

1.7 Singapore

1.8 United Kingdom

1.9 United States

1.9.1 Civilian

1.9.2 Military

2 Civil law jurisdictions

2.1 Germany

3 See also

4 Notes

5 References


6 External links

7 Further reading

History in common law jurisdictions

The term sedition in its modern meaning first appeared in the Elizabethan Era (c. 1590) as the "notion of inciting by words or writings disaffection towards the state or constituted authority".[citation needed] "Sedition complements treason and martial law: while treason controls primarily the privileged, ecclesiastical opponents, priests, and Jesuits, as well as certain commoners; and martial law frightens commoners, sedition frightens intellectuals."[citation needed]

Australia

Main article: Australian sedition law

Australia's sedition laws were amended in anti-terrorism legislation passed on 6 December 2005, updating definitions and increasing penalties.

In late 2006, the Commonwealth Government, under the prime ministership of John Howard, proposed plans to amend Australia's Crimes Act 1914, introducing laws under which artists and writers could be jailed for up to seven years if their work was considered seditious or inspired sedition, whether deliberately or accidentally.[1] Opponents of these laws have suggested that they could be used against legitimate dissent.

In 2006, the then Australian attorney-general Philip Ruddock rejected calls by two reports, from a Senate committee and the Australian Law Reform Commission, to limit the sedition provisions in the Anti-Terrorism Act 2005 by requiring proof of intention to cause disaffection or violence. He also brushed aside recommendations to curtail new clauses outlawing "urging conduct" that "assists" an "organisation or country engaged in armed hostilities" against the Australian military.

The new laws, inserted into the legislation in December 2005, allow for the criminalization of basic expressions of political opposition, including supporting resistance to Australian military interventions, such as those in Afghanistan, Iraq and the Asia-Pacific region.[2]

These laws were amended in Australia on September 19, 2011: the 'sedition' clauses were repealed and replaced with 'urging violence'.[1]

Canada

During World War II, the former Mayor of Montreal Camillien Houde campaigned against conscription in Canada. On August 2, 1940, Houde publicly urged the men of Quebec to ignore the National Registration Act. Three days later, he was placed under arrest by the Royal Canadian Mounted Police on charges of sedition. After being found guilty, he was confined in internment camps in Petawawa, Ontario, and Gagetown, New Brunswick, until 1944. Upon his release on August 18, 1944, he was greeted by a cheering crowd of 50,000 Montrealers and won back his position as Mayor of Montreal in the 1944 election.[citation needed]

Hong Kong

A Sedition Ordinance had existed in the territory since 1970 and was subsequently consolidated into the Crimes Ordinance in 1972.[3] According to the Crimes Ordinance, a seditious intention is an intention to bring into hatred or contempt or to excite disaffection against the person of the government; to excite inhabitants of Hong Kong to attempt to procure the alteration, otherwise than by lawful means, of any other matter in Hong Kong as by law established; to bring into hatred or contempt or to excite disaffection against the administration of justice in Hong Kong; to raise discontent or disaffection amongst inhabitants of Hong Kong; to promote feelings of ill-will and enmity between different classes of the population of Hong Kong; to incite persons to violence; or to counsel disobedience to law or to any lawful order.[4][5]

Article 23 of the Basic Law requires the special administrative region to enact laws prohibiting any act of treason, secession, sedition or subversion against the Central People's Government of the People's Republic of China.[6] The National Security (Legislative Provisions) Bill was tabled in early 2003 to replace the existing laws regarding treason and sedition, to introduce new laws to prohibit secessionist and subversive acts and theft of state secrets, and to prohibit political organisations from establishing overseas ties. The bill was shelved following massive opposition from the public.

India

Sedition is defined by Section 124A of the Indian Penal Code. The section was inserted into the IPC by Imperial Legislative Council Act No. 27 of 1870. The original section was substituted with a new one by Act 4 of 1898. The section currently reads:

124A. Sedition Whoever, by words, either spoken or written, or by signs, or by visible representation, or otherwise, brings or attempts to bring into hatred or contempt, or excites or attempts to excite disaffection towards,[a] the Government established by law in[b] India,[c] shall be punished with imprisonment for life,[d] to which fine may be added, or with imprisonment which may extend to three years, to which fine may be added, or with fine.


Explanation 1. — The expression “disaffection” includes disloyalty and all feelings of enmity.

Explanation 2. — Comments expressing disapprobation of the measures of the Government with a view to obtain their alteration by lawful means, without exciting or attempting to excite hatred, contempt or disaffection, do not constitute an offence under this section.

Explanation 3. — Comments expressing disapprobation of the administrative or other action of the Government without exciting or attempting to excite hatred, contempt or disaffection, do not constitute an offence under this section.[8]

Mahatma Gandhi, then serving as editor of Young India, and printer and publisher Shankarlal Ghelabhai Banker were arrested and tried under charges of sedition on 18 March 1922. During his trial Gandhi stated, "Section 124 A, under which I am happily charged, is perhaps the prince among the political sections of the Indian Penal Code designed to suppress the liberty of the citizen. Affection cannot be manufactured or regulated by law. If one has no affection for a person or system, one should be free to give the fullest expression to his disaffection, so long as he does not contemplate, promote, or incite to violence. But the section in question is one under which mere promotion of disaffection is a crime. I have studied some of the cases tried under it; I know that some of the most loved of India's patriots have been convicted under it. I consider it a privilege, therefore, to be charged under that section. I have endeavored to give in their briefest outline the reasons for my disaffection. I have no personal ill-will against any single administrator, much less can I have any disaffection towards the King's person. But I hold it to be a virtue to be disaffected towards a Government which in its totality has done more harm to India than any previous system. India is less manly under British rule than she ever was before. Holding such a belief, I consider it to be a sin to have affection for the system."[9][10]



In 2010, writer Arundhati Roy was sought to be charged with sedition for her comments on Kashmir and the Maoists.[11] Two individuals have been charged with sedition since 2007.[12] Binayak Sen, an Indian pediatrician, public-health specialist and activist, was found guilty of sedition.[13] He is national Vice-President of the People's Union for Civil Liberties (PUCL). On 24 December 2010, Additional Sessions and District Court Judge B. P. Varma of Raipur found Binayak Sen, Naxal ideologue Narayan Sanyal and Kolkata businessman Piyush Guha guilty of sedition for helping the Maoists in their fight against the state. They were sentenced to life imprisonment, but Sen was granted bail by the Supreme Court on 16 April 2011.[14]

On 10 September 2012, Aseem Trivedi, a political cartoonist, was sent to judicial custody until 24 September 2012 on charges of sedition over a series of cartoons against corruption. Trivedi was accused of uploading "ugly and obscene" content to his website and of insulting the Constitution during an anti-corruption protest in Mumbai in 2011. Trivedi's arrest for sedition was heavily criticized in India; the Press Council of India (PCI) termed it a "stupid" move.[15]

Malaysia

See also: Sedition Act (Malaysia)

New Zealand

Sedition charges were not uncommon in New Zealand early in the 20th century. For instance, the future Prime Minister Peter Fraser had been convicted of sedition in his youth for arguing against conscription during World War I, and was imprisoned for a year. Perhaps ironically, Fraser re-introduced conscription as Prime Minister during World War II.[16]


In New Zealand's first sedition trial in decades, Tim Selwyn was convicted of sedition (section 83 of the Crimes Act 1961) on 8 June 2006. Shortly after, in September 2006, the New Zealand Police laid a sedition charge against a Rotorua youth, Christopher Russell, 17, who was also charged with threatening to kill.[17] The Police withdrew the sedition charge when Russell agreed to plead guilty on the other charge.[18]

In March 2007, Mark Paul Deason, the manager of a tavern near the University of Otago, was charged with seditious intent,[19] although he was later granted diversion when he pleaded guilty to publishing a document which encourages public disorder.[20] Deason ran a promotion for his tavern that offered one litre of beer for one litre of petrol; at the end of the promotion, the prize would have been a couch soaked in the petrol. It is presumed the intent was for the couch to be burned, a popular university-student prank. Police also applied for Deason's liquor license to be revoked.

Following a recommendation from the New Zealand Law Commission,[21] the New Zealand government announced on 7 May 2007 that the sedition law would be repealed.[22] The Crimes (Repeal of Seditious Offences) Amendment Act 2007 was passed on 24 October 2007, and entered into force on 1 January 2008.[23]

Russell Campbell made a documentary regarding conscientious objectors in New Zealand called Sedition.

Singapore

See also: Sedition Act (Singapore)

United Kingdom

See also: Sedition Act 1661


Sedition was a common law offence in the UK. James Fitzjames Stephen's Digest of the Criminal Law stated that "a seditious intention is an intention to bring into hatred or contempt, or to excite disaffection against the person of His Majesty, his heirs or successors, or the government and constitution of the United Kingdom, as by law established, or either House of Parliament, or the administration of justice, or to excite His Majesty's subjects to attempt otherwise than by lawful means, the alteration of any matter in Church or State by law established, or to incite any person to commit any crime in disturbance of the peace, or to raise discontent or disaffection amongst His Majesty's subjects, or to promote feelings of ill-will and hostility between different classes of such subjects.

An intention to show that His Majesty has been misled or mistaken in his measures, or to point out errors or defects in the government or constitution as by law established, with a view to their reformation, or to excite His Majesty's subjects to attempt by lawful means the alteration of any matter in Church or State by law established, or to point out, in order to secure their removal, matters which are producing, or have a tendency to produce, feelings of hatred and ill-will between classes of His Majesty's subjects, is not a seditious intention."

Stephen, in his History of the Criminal Law of England, accepted the view that a seditious libel was nothing short of a direct incitement to disorder and violence. He stated that the modern view of the law was plainly and fully set out by Littledale J. in Collins. In that case the jury were instructed that they could convict of seditious libel only if they were satisfied that the defendant "meant that the people should make use of physical force as their own resource to obtain justice, and meant to excite the people to take the power into their own hands, and meant to excite them to tumult and disorder."

The last prosecution for sedition in the United Kingdom was in 1972, when three people were charged with seditious conspiracy and uttering seditious words for attempting to recruit people to travel to Northern Ireland to fight in support of Republicans. The seditious conspiracy charge was dropped, but the men received suspended sentences for uttering seditious words and for offences against the Public Order Act.[24]

In 1977, a Law Commission working paper recommended that the common law offence of sedition in England and Wales be abolished, on the ground that the offence was redundant and that it was not necessary to have any offence of sedition.[24] However, this proposal was not implemented until 2009, when sedition and seditious libel (as common law offences) were abolished by section 73 of the Coroners and Justice Act 2009 (with effect from 12 January 2010).[25] Sedition by an alien is still an offence under section 3 of the Aliens Restriction (Amendment) Act 1919.[26]

In Scotland, section 51 of the Criminal Justice and Licensing (Scotland) Act 2010 abolished the common law offences of sedition and leasing-making[27] with effect from 28 March 2011.[28]

United States

See also: Seditious conspiracy

Civilian

In 1798, President John Adams signed into law the Alien and Sedition Acts, the fourth of which, the Sedition Act or "An Act for the Punishment of Certain Crimes against the United States", set out punishments of up to two years of imprisonment for "opposing or resisting any law of the United States" or writing or publishing "false, scandalous, and malicious writing" about the President or the U.S. Congress (though not the office of the Vice-President, then occupied by Adams' political opponent Thomas Jefferson). This Act of Congress was allowed to expire in 1801 after Jefferson's election to the Presidency.[citation needed]

Political cartoon by Art Young, The Masses, 1917.


In the Espionage Act of 1917, Section 3 made it a federal crime, punishable by up to 20 years of imprisonment and a fine of up to $10,000, to willfully spread false news of the American army and navy with an intent to disrupt their operations, to foment mutiny in their ranks, or to obstruct recruiting. This Act was amended by the Sedition Act of 1918, which expanded the scope of the Espionage Act to any statement criticizing the Government of the United States. These Acts were upheld in 1919 in the case of Schenck v. United States, but they were largely repealed in 1921, leaving laws forbidding foreign espionage in the United States and allowing military censorship of sensitive material.

In 1940, the Alien Registration Act, or "Smith Act", was passed, making it a federal crime to advocate or to teach the desirability of overthrowing the United States Government, or to be a member of any organization which does the same. It was often used against Communist Party organizations. The Act was invoked in three major cases: one against the Socialist Workers Party in Minneapolis in 1941, resulting in 23 convictions; again in what became known as the Great Sedition Trial of 1944, in which a number of pro-Nazi figures were indicted but released when the prosecution ended in a mistrial; and in a series of trials of 140 leaders of the Communist Party USA beginning in 1949 and lasting until 1957. Although the U.S. Supreme Court upheld the convictions of 11 CPUSA leaders in 1951 in Dennis v. United States, that same Court reversed itself in 1957 in Yates v. United States, ruling that teaching an ideal, no matter how harmful it may seem, does not equal advocating or planning its implementation. Although unused since at least 1961, the Smith Act remains a federal law.

There was, however, a brief attempt to use the sedition laws against protesters of the Vietnam War. On October 17, 1967, two demonstrators, including then Marin County resident Al Wasserman, while engaged in a 'sit-in' at the Army Induction Center in Oakland, CA, were arrested and charged with sedition by deputy U.S. Marshal


Richard St. Germain. U.S. Attorney Cecil Poole changed the charge to trespassing. Poole said, "Three guys (according to Mr. Wasserman there were only two) reaching up and touching the leg of an inductee, and that's conspiracy to commit sedition? That's ridiculous!" The inductees were in the process of physically stepping on the demonstrators as they attempted to enter the building, and the demonstrators were trying to protect themselves from the inductees' feet. Attorney Poole later added, "We'll decide what to prosecute, not marshals."[29]

In 1981, Oscar López Rivera, a Puerto Rican Nationalist and Vietnam War veteran, was convicted and sentenced to 70 years in prison for seditious conspiracy and various other offenses. He was among the 16 Puerto Rican nationalists offered conditional clemency by U.S. President Bill Clinton in 1999, but he rejected the offer. His sister, Zenaida López, said he refused the offer because, on parole, he would be in "prison outside prison." He has been jailed for 34 years, 3 months and 2 days.[30] The clemency agreement required him to renounce the use of terrorism, including the use or advocacy of the use of violence to achieve the aim of independence for Puerto Rico, refraining from it for any purpose.[31] Congressman Pedro Pierluisi has stated that "the primary reason that López Rivera did not accept the clemency offer extended to him in 1999 was because it had not also been extended to certain fellow [independence] prisoners, including Mr. Torres", who was subsequently released from prison in July 2010.[32]

In 1987, fourteen white supremacists were indicted by a federal grand jury on charges filed by the U.S. Department of Justice alleging a seditious conspiracy between July 1983 and March 1985. Some alleged conspirators were serving time for overt acts, such as the crimes committed by The Order. Others, such as Louis Beam and Richard Butler, were charged for speech seen as spurring on the overt acts of the others. In April 1988, a federal jury in Arkansas acquitted all the accused of charges of seditious conspiracy.[33]


On October 1, 1995, Omar Abdel-Rahman and nine others were convicted of seditious conspiracy.[34]

Laura Berg, a nurse at a U.S. Department of Veterans Affairs hospital in New Mexico, was investigated for sedition in September 2005[35] after writing a letter[36][37] to the editor of a local newspaper, accusing several national leaders of criminal negligence. Though the action was later deemed unwarranted by the director of Veterans Affairs, local human resources personnel took it upon themselves to request an FBI investigation. Ms. Berg was represented by the ACLU.[38] Charges were dropped in 2006.[39]

On March 28, 2010, nine members of the Hutaree militia were arrested and charged with crimes including seditious conspiracy.[40]

Military

Sedition is a punishable offense under Article 94 of the Uniform Code of Military Justice.[41]

Civil law jurisdictions

Germany

Volksverhetzung ("incitement of the people") is a legal concept in Germany and some Nordic countries. It is sometimes loosely translated as sedition,[42] although the law bans the incitement of hatred against a segment of the population, such as a particular race or religion.

Key disclosure law

From Wikipedia, the free encyclopedia



Key disclosure laws, also known as mandatory key disclosure laws, are legislation that requires individuals to surrender cryptographic keys to law enforcement. The purpose is to allow access to material for confiscation or digital forensics purposes and to use it either as evidence in a court of law or to enforce national security interests. Similarly, mandatory decryption laws force owners of encrypted data to supply decrypted data to law enforcement.

Nations vary widely in the specifics of how they implement keydisclosure laws. Some, such as Australia, give law enforcement wide-ranging power to compel assistance in decrypting data from any party. Some, such as Belgium, concerned with self-incrimination, only allow law enforcement to compel assistance from non-suspects. Some require only specific third parties such as telecommunications carriers, certification providers, or maintainers of encryption services to provide assistance with decryption. In all cases, a warrant is generally required.

Contents

1 Theory and countermeasures

2 Criticism and alternatives

3 Legislation by nation

3.1 Antigua and Barbuda

3.2 Australia

3.3 Belgium

3.4 Canada

3.5 Finland

3.6 France

3.7 India


3.8 New Zealand

3.9 Poland

3.10 South Africa

3.11 Sweden

3.12 The Netherlands

3.13 United Kingdom

3.14 United States

4 See also

5 References

6 Further reading

Theory and countermeasures

Mandatory decryption is technically a weaker requirement than key disclosure, since it is possible in some cryptosystems to prove that a message has been decrypted correctly without revealing the key. For example, using RSA public-key encryption, one can verify, given the message (plaintext), the encrypted message (ciphertext), and the public key of the recipient, that the message is correct by merely re-encrypting it and comparing the result to the encrypted message. Such a scheme is called undeniable, since once the government has validated the message it cannot deny that it is the correct decrypted message.[1]
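The re-encryption check described above can be sketched in a few lines. This is a toy illustration using unpadded "textbook" RSA with tiny made-up numbers; real RSA deployments use randomized padding (e.g. OAEP), in which case simple re-encryption does not reproduce the ciphertext, so the check only applies to deterministic schemes.

```python
# Toy sketch: verifying a claimed decryption using only the PUBLIC key.
# Textbook (unpadded) RSA with tiny illustrative numbers, not real key sizes.

def rsa_encrypt(m: int, e: int, n: int) -> int:
    """Textbook RSA encryption: c = m^e mod n."""
    return pow(m, e, n)

# Toy public key: n = 61 * 53 = 3233, e = 17 (the private exponent stays secret).
n, e = 3233, 17

ciphertext = rsa_encrypt(123, e, n)  # what investigators already hold
claimed_plaintext = 123              # what the key owner hands over

# Anyone can check the claim by re-encrypting and comparing:
assert rsa_encrypt(claimed_plaintext, e, n) == ciphertext
# A wrong plaintext fails the same check, without the private key ever appearing:
assert rsa_encrypt(124, e, n) != ciphertext
```

Because the verifier never touches the private key, the owner has surrendered only the one decrypted message, not the ability to decrypt everything else, which is why mandatory decryption is the weaker demand.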

As a countermeasure to key disclosure laws, some personal privacy products such as BestCrypt, FreeOTFE, and TrueCrypt have begun incorporating deniable encryption technology, which enables a single piece of encrypted data to be decrypted in two or more different ways, creating plausible deniability.[2][3] Another alternative is steganography, which hides encrypted data inside benign data so that it is more difficult to identify in the first place.
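The idea behind steganography can be illustrated with a minimal least-significant-bit (LSB) scheme: each bit of the secret is stored in the lowest bit of one cover byte (for example a pixel value), changing the carrier by at most one unit per byte. This is a bare sketch, not a robust tool, and the cover values below are made up.

```python
# Minimal LSB steganography sketch: hide one secret byte in eight cover bytes.

def hide_byte(cover: bytes, secret: int) -> bytes:
    """Store each bit of `secret` in the least-significant bit of one cover byte."""
    assert len(cover) >= 8 and 0 <= secret <= 0xFF
    out = bytearray(cover)
    for i in range(8):
        bit = (secret >> i) & 1
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the secret bit
    return bytes(out)

def extract_byte(stego: bytes) -> int:
    """Recover the hidden byte by collecting the LSBs."""
    return sum((stego[i] & 1) << i for i in range(8))

cover = bytes([200, 13, 77, 42, 99, 180, 7, 64])  # made-up "pixel" values
stego = hide_byte(cover, 0x5A)
assert extract_byte(stego) == 0x5A
# Each carrier byte changes by at most 1, so the cover data looks unchanged.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

In practice the hidden payload would itself be encrypted first, so that even if the LSB pattern is noticed, the extracted bits are indistinguishable from noise.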


A problematic aspect of key disclosure is that it leads to a total compromise of all data encrypted using that key in the past or future; time-limited encryption schemes such as those of Desmedt et al.[1] allow decryption only for a limited time period.

Criticism and alternatives

Critics of key disclosure laws view them as compromising information privacy by revealing personal information that may not be pertinent to the crime under investigation, as well as violating the right against self-incrimination and, more generally, the right to silence, in nations which respect these rights. In some cases, it may be impossible to decrypt the data because the key has been lost, forgotten or revoked, or because the data is actually random data which cannot be effectively distinguished from encrypted data.

A proactive alternative to key disclosure law is key escrow law, where the government holds in escrow a copy of all cryptographic keys in use, but is only permitted to use them if an appropriate warrant is issued. Key escrow systems face difficult technical issues and are subject to many of the same criticisms as key disclosure law; they avoid some issues like lost keys, while introducing new issues such as the risk of accidental disclosure of large numbers of keys, theft of the keys by hackers, or abuse of power by government employees with access to the keys. It would also be nearly impossible to prevent the government from secretly using the key database to aid mass surveillance efforts such as those exposed by Edward Snowden. The ambiguous term key recovery is applied to both types of systems.

Legislation by nation

Antigua and Barbuda

The Computer Misuse Bill, 2006, Article 21(5)(c), if enacted, would allow police with a warrant to demand and use decryption keys.


Failure to comply may incur "a fine of fifteen thousand [East Caribbean] dollars" and/or "imprisonment for two years."[4]

Australia

The Cybercrime Act 2001 No. 161, Items 12 and 28, grants police with a magistrate's order the wide-ranging power to require "a specified person to provide any information or assistance that is reasonable and necessary to allow the officer to" access computer data that is "evidential material"; this is understood to include mandatory decryption. Failing to comply carries a penalty of six months' imprisonment. Electronic Frontiers Australia calls the provision "alarming" and "contrary to the common law privilege against self-incrimination."[5]

The Crimes Act 1914, section 3LA(5): "A person commits an offence if the person fails to comply with the order. Penalty for contravention of this subsection: Imprisonment for 2 years."[6]

Belgium

The Loi du 28 novembre 2000 relative à la criminalité informatique (Law on computer crime of 28 November 2000), Article 9, allows a judge to order both operators of computer systems and telecommunications providers to provide assistance to law enforcement, including mandatory decryption, and to keep their assistance secret; but this action cannot be taken against suspects or their families.[7][8] Failure to comply is punishable by 6 months to 1 year in jail and/or a fine of €130 to €100,000.

Canada

Canada implements key disclosure by broad interpretation of "existing interception, search and seizure and assistance procedures";[9] in a 1998 statement, Cabinet Minister John Manley explained, "warrants and assistance orders also apply to situations


where encryption is encountered — to obtain the decrypted material or decryption keys."[10]

Finland

The Coercive Measures Act (Pakkokeinolaki) 2011/806, section 8, paragraph 23,[11] requires the system owner, its administrator, or a specified person to surrender the necessary "passwords and other such information" in order to provide access to information stored on an information system. The suspect and some other persons specified in section 7, paragraph 3, who cannot otherwise be called as witnesses are exempt from this requirement.

France

Loi no 2001-1062 du 15 novembre 2001 relative à la sécurité quotidienne, article 30 (Law #2001-1062 of 15 November 2001 on Community Safety) allows a judge or prosecutor to compel any qualified person to decrypt or surrender keys to make available any information encountered in the course of an investigation. Failure to comply incurs three years of jail time and a fine of €45,000; if the compliance would have prevented or mitigated a crime, the penalty increases to five years of jail time and €75,000.[12]

India

Section 69 of the Information Technology Act, as amended by the Information Technology (Amendment) Act, 2008, empowers the central and state governments to compel assistance from any "subscriber or intermediary or any person in charge of the computer resource" in decrypting information.[13][14] Failure to comply is punishable by up to seven years imprisonment and/or a fine.

New Zealand

New Zealand Customs is seeking the power to compel key disclosure.[15]

Poland


In the relatively few known cases in which police or prosecutors requested cryptographic keys from those formally accused and the requests were not fulfilled, no further consequences were imposed on the accused. There is no specific law on this matter, as there is, for example, in the UK. It is generally assumed that the Polish Criminal Procedure Code (Kodeks Postępowania Karnego, Dz.U. 1997 nr 89 poz. 555) provides means of protection against self-incrimination, including lack of penalization for refusing to answer any question which would enable law enforcement agencies to obtain access to potential evidence that could be used against the testifying person.[16]

South Africa

Under the RICA Act of 2002, refusal to disclose a cryptographic key in one's possession could result in a fine of up to ZAR 2 million or up to 10 years' imprisonment. This requires a judge to issue a decryption direction to a person believed to hold the key.[citation needed]

Sweden

There are currently no laws that force the disclosure of cryptographic keys. However, legislation has been proposed on the basis that the Council of Europe has already adopted a convention on cybercrime related to this issue. The proposed legislation would allow police to require an individual to disclose information, such as passwords and cryptographic keys, during searches. The proposal has been introduced to make it easier for police and prosecutors. It has been criticized by the Swedish Data Inspection Board.[17][18]

The Netherlands

Article 125k of the Wetboek van Strafvordering allows investigators with a warrant to access information carriers and networked systems. The same article allows the district attorney and similar officers


of the court to order persons who know how to access those systems to share their knowledge in the investigation, including any knowledge of encryption of data on information carriers. However, such an order may not be given to the suspect under investigation.[19]

United Kingdom

The Regulation of Investigatory Powers Act 2000 (RIPA), Part III, activated by ministerial order in October 2007,[20] requires persons to supply decrypted information and/or keys to government representatives with a court order. Failure to disclose carries a maximum penalty of two years in jail. The provision was first used against animal rights activists in November 2007,[21] and at least three people have been prosecuted and convicted for refusing to surrender their encryption keys,[22] one of whom was sentenced to 13 months' imprisonment.[23]

United States

The Fifth Amendment to the United States Constitution protects witnesses from being forced to incriminate themselves, and there is currently no law regarding key disclosure in the United States.[24] However, the federal case In re Boucher may be influential as case law. In this case, a man's laptop was inspected by customs agents and child pornography was discovered. The device was seized and powered down, at which point disk encryption technology made the evidence unavailable. The judge held that since the content had already been seen by the customs agents, its existence was a foregone conclusion; Boucher's encryption password "adds little or nothing to the sum total of the Government's information about the existence and location of files that may contain incriminating information."[25][26]

In another case, a district court judge ordered a Colorado woman to decrypt her laptop so prosecutors could use the files against her in a criminal case: "I conclude that the Fifth Amendment is not


implicated by requiring production of the unencrypted contents of the Toshiba Satellite M305 laptop computer," Colorado U.S. District Judge Robert Blackburn ruled on January 23, 2012.[27] In Commonwealth v. Gelfgatt,[28] the court ordered a suspect to decrypt his computer, ruling that an exception to the Fifth Amendment could be invoked because "an act of production does not involve testimonial communication where the facts conveyed already are known to the government...".[29]

However, in United States v. Doe, the United States Court of Appeals for the Eleventh Circuit ruled on 24 February 2012 that forcing the decryption of one's laptop violates the Fifth Amendment.[30][31]

The Federal Bureau of Investigation may also issue national security letters that require the disclosure of keys for investigative purposes.[32] One company, Lavabit, chose to shut down rather than surrender its master private keys.

Digital rights management

From Wikipedia, the free encyclopedia

Digital rights management (DRM) is a term referring to various access control technologies that are used by software and hardware manufacturers, publishers, copyright holders, and individuals with the intent to control the use of digital content and devices.[1] Digital rights management includes technologies that control the use, modification, and distribution of works, as well as systems within devices that enforce these policies. DRM is also sometimes referred to as "copy protection", copy prevention, or copy control, although the correctness of these terms is disputed.[2]

The use of digital rights management is not universally accepted. Proponents of DRM argue that it is necessary to prevent intellectual property from being copied freely, just as physical locks are needed to prevent personal property from being stolen,[3] that it can help


the copyright holder maintain artistic control,[4] and that it can ensure continued revenue streams.[5] Those opposed to DRM contend there is no evidence that DRM helps prevent copyright infringement, arguing instead that it serves only to inconvenience legitimate customers, and that DRM helps big business stifle innovation and competition.[6] Furthermore, works can become permanently inaccessible if the DRM scheme changes or if the service is discontinued.[7] DRM can also restrict users from exercising their legal rights under copyright law, such as backing up copies of CDs or DVDs, lending materials out through a library, accessing works in the public domain, or using copyrighted materials for research and education under the fair use doctrine,[3] and under French law.[8] The Electronic Frontier Foundation (EFF) and the Free Software Foundation (FSF) consider the use of DRM systems to be an anti-competitive practice.[9][10]

Worldwide, many laws have been created which criminalize the circumvention of DRM, communication about such circumvention, and the creation and distribution of tools used for such circumvention. Such laws are part of the Copyright Directive,[11] the Digital Millennium Copyright Act,[12] and DADVSI.[13]

Contents

1 Introduction

1.1 Common DRM techniques

2 Technologies

2.1 DRM and computer games

2.1.1 Limited install activations

2.1.2 Persistent online authentication

2.1.3 Software tampering

2.1.4 CD Keys


2.2 DRM and documents

2.3 DRM and e-books

2.4 DRM and film

2.5 DRM and music

2.5.1 Audio CDs

2.5.2 Internet music

2.5.2.1 Mobile ring tones

2.6 DRM and television

2.7 Metadata

2.8 Watermarks

2.9 Streaming media services

3 Laws regarding DRM

3.1 Digital Millennium Copyright Act

3.2 European Union

3.3 International issues

3.4 Israel

4 Opposition to DRM

4.1 DRM-free works

5 Shortcomings

5.1 DRM server and Internet outages

5.2 Methods to bypass DRM

5.2.1 DRM bypass methods for audio and video content

5.3 Analog hole

5.4 DRM on general computing platforms

5.5 DRM on purpose-built hardware

5.6 Watermarks


5.7 Undecrypted copying failure

5.8 Obsolescence

5.9 Environmental issues

5.10 Moral and legitimacy implications

5.11 Relaxing some forms of DRM can be beneficial

5.12 Can increase piracy

6 Alternatives to DRM

6.1 "Easy and cheap"

6.2 Crowdfunding or pre-order model

6.3 Digital content as promotion for traditional products

6.4 Artistic Freedom Voucher

7 Historical note

8 See also

8.1 Related concepts

8.2 Lawsuits

8.3 Organizations

9 References

10 Further reading

11 External links

Introduction

The advent of digital media and analog-to-digital conversion technologies (especially those that are usable on mass-market general-purpose personal computers) has vastly increased the concerns of copyright-owning individuals and organizations. These concerns are particularly prevalent within the music and movie


industries, because these sectors are partly or wholly dependent on the revenue generated from such works. While analog media inevitably loses quality with each copy generation, and in some cases even during normal use, digital media files may be duplicated an unlimited number of times with no degradation in the quality of subsequent copies.

The advent of personal computers as household appliances has made it convenient for consumers to convert media (which may or may not be copyrighted) originally in a physical, analog or broadcast form into a universal, digital form (this process is called ripping) for portability or viewing later. This, combined with the Internet and popular file-sharing tools, has made unauthorized distribution of copies of copyrighted digital media (also called digital piracy) much easier.

DRM technologies enable content publishers to enforce their own access policies on content, such as restrictions on copying or viewing. In cases where copying or some other use of the content is prohibited, regardless of whether such copying or other use is legally considered a "fair use", DRM technologies have come under fire. DRM is in common use by the entertainment industry (e.g., audio and video publishers).[14] Many online music stores, such as Apple's iTunes Store, and e-book publishers also use DRM, as do cable and satellite service operators, to prevent unauthorized use of content or services. However, Apple quietly dropped DRM from all iTunes music files in about 2009.[15]

In recent years, the industry has expanded the use of DRM to traditional hardware products as well; examples are Keurig's coffeemakers[16][17] and John Deere's tractors.[18] For instance, tractor companies have tried to prevent DIY repair by the farmers who own them, invoking DRM laws such as the DMCA.[19]

Common DRM techniques


Digital rights management techniques include:

Restrictive licensing agreements: Access to digital materials, copyright, and public domain is controlled. Some restrictive licenses are imposed on consumers as a condition of entering a website or when downloading software.[20]

Encryption, scrambling of expressive material, and embedding of a tag: This technology is designed to control access to and reproduction of information, including backup copies for personal use.[21]

Technologies

DRM and computer games

Limited install activations

Computer games sometimes use DRM technologies to limit the number of systems the game can be installed on by requiring authentication with an online server. Most games with this restriction allow three or five installs, although some allow an installation to be 'recovered' when the game is uninstalled. This not only limits users who have more than three or five computers in their homes (seeing as the rights of the software developers allow them to limit the number of installations), but can also prove to be a problem if the user has to unexpectedly perform certain tasks like upgrading operating systems or reformatting the computer's hard drive. Depending on how the DRM is implemented, such tasks count a game's subsequent reinstall as a new installation, making the game potentially unusable after a certain period even if it is only used on a single computer.

In mid-2008, the publication of Mass Effect marked the start of a wave of titles primarily making use of SecuROM for DRM and requiring authentication with a server. The use of the DRM scheme in 2008's Spore backfired and there were protests, resulting in a considerable number of users seeking a pirated version instead. This backlash


against the 3× activation limit was a significant factor in Spore becoming the most pirated game in 2008, topping TorrentFreak's "top 10" list.[22][23] However, Tweakguides concluded that the presence of intrusive DRM does not appear to increase piracy of a game, noting that other games on the list, such as Call of Duty 4, Assassin's Creed and Crysis, use SafeDisc DRM, which has no install limits and no online activation. Additionally, other video games that do use intrusive DRM, such as BioShock, Crysis Warhead, and Mass Effect, do not appear on the list.[24]

Persistent online authentication

Main article: Always-on DRM

Many mainstream publishers continued to rely on online DRM throughout the latter half of 2008 and early 2009, including Electronic Arts, Ubisoft, Valve, and Atari, The Sims 3 being a notable exception in the case of Electronic Arts.[25] Ubisoft broke with the tendency to use online DRM in late 2008, with the release of Prince of Persia, as an experiment to "see how truthful people really are" regarding the claim that DRM was inciting people to use pirated copies.[26] Although Ubisoft has not commented on the results of the "experiment", Tweakguides noted that two torrents on Mininova had over 23,000 people downloading the game within 24 hours of its release.[27]

Ubisoft formally announced a return to online authentication on 9 February 2010, through its Uplay online gaming platform, starting with Silent Hunter 5, The Settlers 7, and Assassin's Creed II.[28] Silent Hunter 5 was first reported to have been compromised within 24 hours of release,[29] but users of the cracked version soon found out that only early parts of the game were playable.[30] The Uplay system works by leaving the installed game on the local PC incomplete and then continuously downloading parts of the game code from Ubisoft's servers as the game progresses.[31] It was more than a month after the PC release, in the first week of April, that software was released that could bypass Ubisoft's DRM in Assassin's Creed II. The software did this by emulating a Ubisoft server for


the game. Later that month, a real crack was released that was able to remove the connection requirement altogether.[32][33]

In early March 2010, Uplay servers suffered a period of inaccessibility due to a large-scale DDoS attack, causing around 5% of game owners to become locked out of playing their game.[34] The company later credited owners of the affected games with a free download, and there has been no further downtime.[35]

Other developers, such as Blizzard Entertainment, are also shifting to a strategy where most of the game logic is on the server side, or taken care of by the servers of the game maker. Blizzard uses this strategy for its game Diablo III, and Electronic Arts used this same strategy with their reboot of SimCity, the necessity of which has been questioned.[36]

Software tampering

Bohemia Interactive have used a form of this technology since Operation Flashpoint: Cold War Crisis: if the game is suspected of being pirated, annoyances are introduced, such as guns losing their accuracy or the player being turned into a bird.[37]

Croteam, the company that released Serious Sam 3: BFE in November 2011, implemented a different form of DRM: instead of displaying error messages that stop the pirated version of the game from running, it causes a special invincible foe to appear in the game and constantly attack the player until they are killed.[38][39]

CD Keys

One of the oldest and least complicated DRM protection methods for computer games is the CD key. CD keys are a series of numbers and letters included with copies of the game, usually printed somewhere


on the CD or the software package. During installation the program requests that the user enter the CD key to authenticate the product. Without the CD key, installation is impossible. There are sizable disadvantages to this protection method for both consumers and producers. For example, if a consumer loses his CD key, he is unable to install and legitimize his purchased product without contacting customer service. CD keys were made notable by Microsoft Windows, an operating system which, if not provided as an OEM copy, requires a 25-digit key code. For producers and developers, piracy is a significant issue: many websites offer "CD key cracks" and "CD key generators" for various games and products, which generate a series of characters that the software interprets as a valid CD key, bypassing this form of DRM protection.[40][41]
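A hypothetical sketch of such an offline check: the key's last character is a checksum over the rest, so the installer can validate a key without contacting any server. The alphabet and formula below are invented for illustration; real products use their own (secret) schemes, and reverse-engineering such a formula is precisely what a "key generator" does.

```python
# Invented toy CD-key scheme: last character is a checksum of the body.
# Purely illustrative; no real product uses this exact formula.

ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # avoids ambiguous chars like O/0, I/1

def checksum(body: str) -> str:
    """Sum the alphabet positions of the body and map the total back to a character."""
    total = sum(ALPHABET.index(ch) for ch in body)
    return ALPHABET[total % len(ALPHABET)]

def make_key(body: str) -> str:
    """What a publisher's key generator would do: append the checksum."""
    return body + checksum(body)

def is_valid(key: str) -> bool:
    """What the installer would do: recompute the checksum and compare."""
    if len(key) < 2 or any(ch not in ALPHABET for ch in key):
        return False
    return checksum(key[:-1]) == key[-1]

key = make_key("ABCD2345EFGH678")
assert is_valid(key)                    # a properly generated key passes
assert not is_valid(key[:-1] + ("B" if key[-1] != "B" else "C"))  # tampered key fails
```

Because the whole check runs locally, anyone who recovers the checksum formula can mint passing keys at will, which is why later schemes moved to online activation.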

DRM and documents

Enterprise digital rights management (E-DRM or ERM) is the application of DRM technology to the control of access to corporate documents such as Microsoft Word, PDF, and AutoCAD files, emails, and intranet web pages, rather than to the control of consumer media.[42] E-DRM, now more commonly referred to as IRM (Information Rights Management), is generally intended to prevent the unauthorized use (such as industrial or corporate espionage or inadvertent release) of proprietary documents. IRM typically integrates with content management system software, but corporations such as Samsung Electronics also develop their own custom DRM systems.[43]

DRM has been used by organizations such as the British Library in its secure electronic delivery service to permit worldwide access to substantial numbers of rare (and in many cases unique) documents which, for legal reasons, were previously only available to authorized individuals actually visiting the Library's document centre at Boston Spa in England.[citation needed]

DRM and e-books


Electronic books read on a personal computer, an e-book reader, or an e-reader app typically use DRM technology to limit copying, printing, and sharing of e-books. E-books (alternatively spelled "ebooks") are usually restricted to a limited number of reading devices, and some e-publishers prevent any copying or printing. Some commentators believe DRM makes e-book publishing complex.[44]

As of August 2012, there are five main ebook formats: EPUB, KF8, Mobipocket, PDF, and Topaz.[45] The Amazon Kindle uses KF8, Mobipocket, and Topaz; it also supports native PDF format ebooks and native PDF files. Other ebook readers mostly use the EPUB format, but with differing DRM schemes.[citation needed]

There are four main ebook DRM schemes in common use today, one each from Adobe, Amazon, Apple, and the Marlin Trust Management Organization (MTMO).

Adobe's ADEPT DRM is applied to EPUBs and PDFs, and can be read by several third-party ebook readers, as well as Adobe Digital Editions (ADE) software. Barnes & Noble uses a DRM technology provided by Adobe, applied to EPUBs and the older PDB (Palm OS) format ebooks. In 2014, Adobe announced a new DRM scheme to replace the old one, to become available as soon as March 2014.[citation needed]

Amazon's DRM is an adaptation of the original Mobipocket encryption and is applied to Amazon's .azw4, KF8, and Mobipocket format ebooks. Topaz format ebooks have their own encryption system.[citation needed]

Apple's FairPlay DRM is applied to EPUBs and can currently only be read by Apple's iBooks app on iOS devices.

The Marlin DRM was developed and is maintained in an open industry group known as the Marlin Developer Community (MDC) and is licensed by MTMO. (Marlin was founded by five companies: Intertrust, Panasonic, Philips, Samsung, and Sony.) The Kno online textbook


publisher uses Marlin to protect ebooks it sells in the EPUB format. These books can be read on the Kno App for iOS and Android.

In one instance of DRM that caused a rift with consumers, Amazon.com in July 2009 remotely deleted purchased copies of George Orwell's Animal Farm (1945) and Nineteen Eighty-Four (1949) from customers' Amazon Kindles after providing them a refund for the purchased products.[46] Commentators have widely described these actions as Orwellian, and have alluded to Big Brother from Orwell's Nineteen Eighty-Four.[47][48][49][50] After Amazon CEO Jeff Bezos issued a public apology, the Free Software Foundation wrote that this was just one more example of the excessive power Amazon has to remotely censor what people read through its software, and called upon Amazon to free its e-book reader and drop DRM.[51] Amazon then revealed the reason behind its deletion: the ebooks in question were unauthorized reproductions of Orwell's works, which were not within the public domain and to which the company that published and sold them on Amazon's service had no rights.[52]

Websites - including library.nu (shut down by court order on February 15, 2012), BookFinder, and Library Genesis - have emerged that allow downloading ebooks in violation of copyright.[53]

DRM and film

An early example of a DRM system is the Content Scrambling System (CSS) employed by the DVD Forum on film DVDs c. 1996. CSS uses an encryption algorithm to encrypt content on the DVD disc. Manufacturers of DVD players must license this technology and implement it in their devices so that they can decrypt the encrypted content to play it. The CSS license agreement includes restrictions on how the DVD content is played, including what outputs are permitted and how such permitted outputs are made available. This keeps the encryption intact as the video material is played out to a TV.
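The scrambling idea can be illustrated with a toy stream cipher. The sketch below is not the real CSS algorithm (which combines two licensed LFSRs and a 40-bit key schedule); it only demonstrates the general XOR-keystream principle with a single, hypothetical LFSR configuration.

```python
def lfsr_keystream(seed, taps, nbits, count):
    """Generate `count` keystream bytes from a simple linear-feedback
    shift register. Illustrative only -- real CSS combines two LFSRs."""
    state = seed & ((1 << nbits) - 1)
    out = []
    for _ in range(count):
        byte = 0
        for _ in range(8):
            bit = 0
            for t in taps:  # XOR the tapped state bits to get the feedback bit
                bit ^= (state >> t) & 1
            state = ((state << 1) | bit) & ((1 << nbits) - 1)
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def scramble(data, seed):
    """XOR data with the keystream; applying it twice restores the input."""
    ks = lfsr_keystream(seed, taps=(16, 14, 13, 11), nbits=17, count=len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

sector = b"VIDEO_TS sample payload"
enc = scramble(sector, seed=0x1ABCD)
assert scramble(enc, seed=0x1ABCD) == sector  # round-trip recovers the data
```

A licensed player would hold the seed (the key) and run the same keystream to descramble; without it, the sector bytes are unusable.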

In 1999, Jon Lech Johansen released an application called DeCSS, which allowed a CSS-encrypted DVD to play on a computer running the Linux operating system, at a time when no licensed DVD player application for Linux had yet been created. The legality of DeCSS is questionable: one of its authors was the subject of a lawsuit, and reproduction of the keys themselves is restricted, as they are treated as illegal numbers.[54]

Also in 1999, Microsoft released Windows Media DRM, which read instructions from media files in a rights management language stating what the user may do with the media.[55] The language can define how many times the media file can be played, and whether it can be burned to a CD, printed, forwarded, or saved to the local disk.[56] Later versions of Windows Media DRM also allow producers to declare whether the user may transfer the media file to other devices,[57] to implement music subscription services that make downloaded files unplayable after a canceled subscription, and to implement regional lockout.[58]
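The idea of a machine-readable rights language can be sketched as a small policy object that a player consults before each action. The class and field names below are hypothetical and do not reflect the actual Windows Media DRM license syntax; they only show the shape of play-count, burn, transfer, and region rules.

```python
class MediaLicense:
    """Hypothetical rights object -- fields are illustrative,
    not the actual Windows Media DRM license format."""

    def __init__(self, play_count=None, allow_burn=False,
                 allow_transfer=False, region=None):
        self.play_count = play_count          # None means unlimited plays
        self.allow_burn = allow_burn          # may the file be burned to CD?
        self.allow_transfer = allow_transfer  # may it move to other devices?
        self.region = region                  # optional regional lockout code

    def can(self, action, user_region=None):
        """Check whether `action` is permitted under this license."""
        if self.region is not None and user_region != self.region:
            return False  # regional lockout blocks everything
        if action == "play":
            return self.play_count is None or self.play_count > 0
        if action == "burn":
            return self.allow_burn
        if action == "transfer":
            return self.allow_transfer
        return False  # unknown actions are denied by default

    def play(self, user_region=None):
        """Consume one play if permitted; return True on success."""
        if not self.can("play", user_region):
            return False
        if self.play_count is not None:
            self.play_count -= 1
        return True

lic = MediaLicense(play_count=2, allow_burn=True, region="EU")
assert lic.play(user_region="EU")       # first play succeeds
assert lic.play(user_region="EU")       # second play succeeds
assert not lic.play(user_region="EU")   # play count exhausted
assert not lic.can("transfer", user_region="EU")
```

Subscription expiry, mentioned above for later versions, would amount to one more field (an expiry date checked in `can`) under this model.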

Microsoft's Windows Vista contains a DRM system called the Protected Media Path, which contains the Protected Video Path (PVP). PVP tries to stop DRM-restricted content from playing while unsigned software is running, in order to prevent the unsigned software from accessing the content. Additionally, PVP can encrypt information during transmission to the monitor or the graphics card, which makes it more difficult to make unauthorized recordings.

Advanced Access Content System (AACS) is a DRM system for HD DVD and Blu-ray Discs developed by the AACS Licensing Administrator, LLC (AACS LA), a consortium that includes Disney, Intel, Microsoft, Matsushita (Panasonic), Warner Brothers, IBM, Toshiba and Sony. In December 2006, a process key was published on the internet by hackers, enabling unrestricted access to AACS-protected HD DVD content.[59] After the cracked keys were revoked, further cracked keys were released.[60]
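The revocation mechanism mentioned above can be sketched abstractly: a disc carries the media key encrypted under many device keys, and revoking a compromised device simply means omitting its entry on future discs. The toy code below uses XOR as a stand-in cipher and invented names; it is not the actual AACS media key block format.

```python
def toy_encrypt(key, value):
    """Stand-in cipher: XOR. (Real AACS uses AES-128.)"""
    return key ^ value

def make_key_block(media_key, device_keys, revoked):
    """Encrypt the media key under every non-revoked device key."""
    return {dev: toy_encrypt(k, media_key)
            for dev, k in device_keys.items() if dev not in revoked}

def recover(key_block, dev, dev_key):
    """A player recovers the media key only if its entry is present."""
    if dev not in key_block:
        return None  # device revoked: new discs are unplayable for it
    return toy_encrypt(dev_key, key_block[dev])

device_keys = {"playerA": 0x1111, "playerB": 0x2222, "playerC": 0x3333}
block = make_key_block(0xBEEF, device_keys, revoked={"playerB"})
assert recover(block, "playerA", 0x1111) == 0xBEEF
assert recover(block, "playerB", 0x2222) is None  # compromised, cut off
```

This also shows the scheme's limit, visible in the 2006 incident: once a media or process key itself leaks, revocation of device entries on later discs cannot protect discs already pressed.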

Marlin (DRM) is a technology that is developed and maintained in an open industry group known as the Marlin Developer Community (MDC) and licensed by the Marlin Trust Management Organization (MTMO). Founded in 2005 by five companies (Intertrust, Panasonic, Philips, Samsung, and Sony), Marlin DRM has been deployed in multiple places around the world. In Japan the acTVila IPTV service uses Marlin to encrypt video streams, which are permitted to be recorded on a DVR in the home. In Europe, Philips NetTVs implement Marlin DRM. Also in Europe, Marlin DRM is required by such industry groups as the Open IPTV Forum and national initiatives such as YouView in the UK, Tivu in Italy, and HDForum in France, which are starting to see broad deployments.

OMA DRM is a system invented by the Open Mobile Alliance, whose members represent mobile phone manufacturers (e.g. LG, Motorola, Samsung, Sony), mobile system manufacturers (e.g. Ericsson, Openwave), mobile phone network operators (e.g. Vodafone, O2, Cingular, Deutsche Telekom, Orange), and information technology companies (e.g. Microsoft, IBM).

DRM and music

Audio CDs

Discs with DRM schemes are not standards-compliant Compact Discs (CDs) but are rather CD-ROM media. Therefore, they all lack the CD logotype found on discs which follow the standard (known as Red Book). These CDs cannot be played on all CD players or personal computers. Personal computers running Microsoft Windows sometimes even crash when attempting to play the CDs.[61]

In 2005, Sony BMG introduced new DRM technology which installed DRM software on users' computers without clearly notifying the user or requiring confirmation. Among other things, the installed software included a rootkit, which created a severe security vulnerability others could exploit. When the nature of the DRM involved was made public much later, Sony BMG initially minimized the significance of the vulnerabilities its software had created, but was eventually compelled to recall millions of CDs, and released several attempts to patch the surreptitiously included software to at least remove the rootkit. Several class action lawsuits were filed, which were ultimately settled by agreements to provide affected consumers with a cash payout or album downloads free of DRM.[62]

Sony BMG's DRM software actually had only a limited ability to prevent copying, as it affected only playback on Windows computers, not on other equipment. Even on the Windows platform, users regularly bypassed the restrictions. And, while the Sony BMG DRM technology created fundamental vulnerabilities in customers' computers, parts of it could be trivially bypassed by holding down the "shift" key while inserting the CD, or by disabling the autorun feature. In addition, audio tracks could simply be played and re-recorded, thus completely bypassing all the DRM (this is known as the analog hole). Sony BMG's first two attempts at releasing a patch which would remove the DRM software from users' computers failed.

In January 2007, EMI stopped publishing audio CDs with DRM, stating that "the costs of DRM do not measure up to the results."[63] Following EMI, Sony BMG was the last publisher to abolish DRM completely, and audio CDs containing DRM are no longer released by the four largest commercial record label companies.[64]

Internet music

Many internet music stores employ DRM to restrict usage of music purchased and downloaded.

Prior to 2009, Apple's iTunes Store utilized the FairPlay DRM system for music. Apple did not license its DRM to other companies, so only Apple devices and Apple's QuickTime media player could play iTunes music.[7][58] In May 2007, EMI tracks became available in iTunes Plus format at a higher price point. These tracks were higher quality (256 kbit/s) and DRM free. In October 2007, the cost of iTunes Plus tracks was lowered to US$0.99.[65] In April 2009, all iTunes music became available completely DRM-free. (Videos sold and rented through iTunes, as well as iOS apps, however, were to continue using Apple's FairPlay DRM.)

The Napster music store offers a subscription-based approach to DRM alongside permanent purchases. Users of the subscription service can download and stream an unlimited amount of music transcoded to Windows Media Audio (WMA) while subscribed to the service. But when the subscription period lapses, all the downloaded music is unplayable until the user renews his or her subscription. Napster also charges users who wish to use the music on their portable device an additional $5 per month. In addition, Napster gives users the option of paying an additional $0.99 per track to burn it to CD or for the song to never expire. Music bought through Napster can be played on players carrying the Microsoft PlaysForSure logo (which, notably, do not include iPods or even Microsoft's own Zune). As of June 2009, Napster offers DRM-free MP3 music, which can be played on iPhones and iPods.

Wal-Mart Music Downloads, another music download store, charges $0.94 per track for all non-sale downloads. All Wal-Mart downloads can be played on any Windows PlaysForSure marked product. The music does play on SanDisk's Sansa mp3 player, for example, but must be copied to the player's internal memory. It cannot be played through the player's microSD card slot, which is a problem that many users of the mp3 player experience.

Sony operated a music download service called "Connect" which used Sony's proprietary OpenMG DRM technology. Music downloaded from this store (usually via Sony's SonicStage software) was only playable on computers running Microsoft Windows and on Sony hardware (including the PSP and some Sony Ericsson phones).

Kazaa is one of a few services offering a subscription-based pricing model. However, music downloads from the Kazaa website are DRM-protected, and can only be played on computers or portable devices running Windows Media Player, and only as long as the customer remains subscribed to Kazaa.

The various services are currently not interoperable, though those that use the same DRM system (for instance the several Windows Media DRM format stores, including Napster, Kazaa and Yahoo Music) all provide songs that can be played side-by-side through the same player program. Almost all stores require client software of some sort to be downloaded, and some also need plug-ins. Several colleges and universities, such as Rensselaer Polytechnic Institute, have made arrangements with assorted Internet music suppliers to provide access (typically DRM-restricted) to music files for their students, to less than universal popularity, sometimes making payments from student activity fee funds.[66] One of the problems is that the music becomes unplayable after leaving school unless the student continues to pay individually. Another is that few of these vendors are compatible with the most common portable music player, the Apple iPod. The Gowers Review of Intellectual Property (to HMG in the UK; 141 pages, 40+ specific recommendations) has taken note of the incompatibilities, and suggests (Recommendations 8-12) that there be explicit fair dealing exceptions to copyright allowing libraries to copy and format-shift between DRM schemes, and further allowing end users to do the same privately. If adopted, some acrimony may decrease.

Although DRM is prevalent for Internet music, some online music stores such as eMusic, Dogmazic, Amazon, and Beatport do not use DRM despite encouraging users to avoid sharing music. Major labels have begun releasing more music without DRM. Eric Bangeman suggests in Ars Technica that this is because the record labels are "slowly beginning to realize that they can't have DRMed music and complete control over the online music market at the same time... One way to break the cycle is to sell music that is playable on any digital audio player. eMusic does exactly that, and their surprisingly extensive catalog of non-DRMed music has vaulted it into the number two online music store position behind the iTunes Store."[67] Apple's Steve Jobs called on the music industry to eliminate DRM in an open letter titled Thoughts on Music.[68] Apple's iTunes Store subsequently began selling DRM-free 256 kbit/s (up from 128 kbit/s) AAC-encoded music from EMI at a premium price (this has since reverted to the standard price).

In March 2007, Musicload.de, one of Europe's largest internet music retailers, announced its position strongly against DRM. In an open letter, Musicload stated that three out of every four calls to its customer support phone service are a result of consumer frustration with DRM.[69]

Mobile ring tones

The Open Mobile Alliance created a standard for interoperable DRM on mobile devices. The first version of OMA DRM consisted of a simple rights management language and was widely used to protect mobile phone ringtones from being copied from the phone to other devices. Later versions expanded the rights management language to similar expressiveness as FairPlay, but did not become widely used.[58]

DRM and television

The CableCard standard is used by cable television providers in the United States to restrict content to services to which the customer has subscribed.

The broadcast flag concept was developed by Fox Broadcasting in 2001 and was supported by the MPAA and the U.S. Federal Communications Commission (FCC). It required that all HDTVs obey a stream specification determining whether a stream can be recorded, which could block instances of fair use, such as time-shifting. In May 2005, a United States court of appeals held that the FCC lacked authority to impose the flag on the TV industry in the US. The concept achieved more success elsewhere when it was adopted by the Digital Video Broadcasting Project (DVB), a consortium of about 250 broadcasters, manufacturers, network operators, software developers, and regulatory bodies from about 35 countries involved in attempting to develop new digital TV standards.

An updated variant of the broadcast flag has been developed in the Content Protection and Copy Management group under DVB (DVB-CPCM). Upon publication by DVB, the technical specification was submitted to European governments in March 2007. As with much DRM, the CPCM system is intended to control use of copyrighted material by the end-user, at the direction of the copyright holder. According to Ren Bucholz of the EFF, which paid to be a member of the consortium, "You won't even know ahead of time whether and how you will be able to record and make use of particular programs or devices".[70] The DVB claims that the system will harmonize copyright holders' control across different technologies, thereby making things easier for end users.[citation needed] The normative sections have now all been approved for publication by the DVB Steering Board, and will be published by ETSI as a formal European Standard as ETSI TS 102 825-X, where X refers to the part number of the specification. Nobody has yet stepped forward to provide a Compliance and Robustness regime for the standard (though several are rumoured to be in development), so it is not presently possible to fully implement a system, as there is nowhere to obtain the necessary device certificates.

Copyright infringement

An advertisement for copyright and patent preparation services from 1906, when copyright registration formalities were still required in the US.

Copyright infringement is the use of works protected by copyright law without permission, infringing certain exclusive rights granted to the copyright holder, such as the right to reproduce, distribute, display or perform the protected work, or to make derivative works.

Page 513: Tugas tik di kelas xi ips 3

The copyright holder is typically the work's creator, or a publisher or other business to whom copyright has been assigned. Copyright holders routinely invoke legal and technological measures to prevent and penalize copyright infringement.

Copyright infringement disputes are usually resolved through direct negotiation, a notice and take down process, or litigation in civil court. Egregious or large-scale commercial infringement, especially when it involves counterfeiting, is sometimes prosecuted via the criminal justice system. Shifting public expectations, advances in digital technology, and the increasing reach of the Internet have led to such widespread, anonymous infringement that copyright-dependent industries now focus less on pursuing individuals who seek and share copyright-protected content online, and more on expanding copyright law to recognize and penalize, as "indirect" infringers, the service providers and software distributors which are said to facilitate and encourage individual acts of infringement by others.

Estimates of the actual economic impact of copyright infringement vary widely and depend on many factors. Nevertheless, copyright holders, industry representatives, and legislators have long characterized copyright infringement as piracy or theft—language which some U.S. courts now regard as pejorative or otherwise contentious.[1][2][3]

Contents

1 Terminology

1.1 "Piracy"

1.2 "Theft"

2 Motivation

2.1 Developing world

2.2 Motivations due to censorship

3 Existing and proposed laws

3.1 Civil law

3.2 Criminal law

3.3 Noncommercial file sharing

3.3.1 Legality of downloading

3.3.2 Legality of uploading

3.3.3 Relaxed penalties

3.4 The DMCA and anti-circumvention laws

3.5 Online intermediary liability

3.5.1 Definition of intermediary

3.5.2 Litigation and legislation concerning intermediaries

3.5.3 Peer-to-peer issues

4 Limitations

4.1 Non-infringing uses

4.2 Non-infringing types of works

5 Preventative measures

5.1 Legal

5.2 Protected distribution

6 Economic impact of copyright infringement

6.1 Motion picture industry estimates

6.2 Software industry estimates

6.3 Music industry estimates

6.4 Criticism of industry estimates

6.5 Economic impact of infringement in emerging markets

7 Pro-open culture organizations

8 Anti-copyright infringement organizations

9 See also

10 References

11 Further reading

Terminology

The terms piracy and theft are often associated with copyright infringement.[4][5] The original meaning of piracy is "robbery or illegal violence at sea",[6] but the term has been in use for centuries as a synonym for acts of copyright infringement.[7][8] Theft, meanwhile, emphasizes the potential commercial harm of infringement to copyright holders. However, copyright is a type of intellectual property, an area of law distinct from that which covers robbery or theft, offenses related only to tangible property. Not all copyright infringement results in commercial loss, and the U.S. Supreme Court ruled in 1985 that infringement does not easily equate with theft.[1]

In the case MPAA v. Hotfile, Judge Kathleen Williams granted a motion to deny the prosecution the use of pejorative words in the copyright infringement case.[3] The list included the words "piracy," "theft," "stealing," and their derivatives, the use of which, the defense asserted, would serve no purpose but to mislead and inflame the jury, even if the defendants had been found to have directly infringed the plaintiffs' copyrights.[2] The plaintiff argued that the common use of the terms when referring to copyright infringement should invalidate the motion, but the judge did not concur.[3] (The case was settled shortly before it reached the jury phase of the trial.[9])

"Piracy"

Pirated edition of German philosopher Alfred Schmidt (Amsterdam, ca. 1970).

The practice of labelling the infringement of exclusive rights in creative works as "piracy" predates statutory copyright law. Prior to the Statute of Anne in 1710, the Stationers' Company of London received a Royal Charter in 1557 giving the company a monopoly on publication and tasking it with enforcing the charter. Those who violated the charter were labelled pirates as early as 1603.[7] The term "piracy" has been used to refer to the unauthorized copying, distribution and selling of works in copyright.[8] Article 12 of the 1886 Berne Convention for the Protection of Literary and Artistic Works uses the term "piracy" in relation to copyright infringement, stating "Pirated works may be seized on importation into those countries of the Union where the original work enjoys legal protection."[8] Article 61 of the 1994 Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) requires criminal procedures and penalties in cases of "willful trademark counterfeiting or copyright piracy on a commercial scale."[10] Piracy traditionally refers to acts of copyright infringement intentionally committed for financial gain, though more recently, copyright holders have described online copyright infringement, particularly in relation to peer-to-peer file sharing networks, as "piracy."[8]

Richard Stallman and the GNU Project have criticized the use of the word "piracy" in these situations, saying that publishers use the word to refer to "copying they don't approve of" and that "they [publishers] imply that it is ethically equivalent to attacking ships on the high seas, kidnapping and murdering the people on them."[11]

"Theft"

Copyright holders frequently refer to copyright infringement as theft. In copyright law, infringement does not refer to theft of physical objects that take away the owner's possession, but an instance where a person exercises one of the exclusive rights of the copyright holder without authorization.[12] Courts have distinguished between copyright infringement and theft. For instance, the United States Supreme Court held in Dowling v. United States (1985) that bootleg phonorecords did not constitute stolen property. Instead, "interference with copyright does not easily equate with theft, conversion, or fraud. The Copyright Act even employs a separate term of art to define one who misappropriates a copyright: '[...] an infringer of the copyright.'" The court said that in the case of copyright infringement, the province guaranteed to the copyright holder by copyright law, certain exclusive rights, is invaded, but no control, physical or otherwise, is taken over the copyright, nor is the copyright holder wholly deprived of using the copyrighted work or exercising the exclusive rights held.[1]

Motivation

Some of the motives for engaging in copyright infringement are the following:[13]

Pricing – unwillingness or inability to pay the price requested by the legitimate sellers

Unavailability – no legitimate sellers providing the product in the country of the end-user: not yet launched there, already withdrawn from sales, never to be sold there, geographical restrictions on online distribution and international shipping

Usefulness – the legitimate product comes with various means (DRM, region lock, DVD region code, Blu-ray region code) of restricting legitimate use (backups, usage on devices of different vendors, offline usage) or comes with annoying non-skippable advertisements and anti-piracy disclaimers, which are removed in the pirated product, making it more desirable for the end-user

Shopping experience – no legitimate sellers providing the product with the required quality through online distribution and through a shopping system with the required level of user-friendliness

Anonymity – Downloading works does not require identification, whereas downloads directly from the website of the copyright owner often require a valid email address and/or other credentials

Sometimes only partial compliance with license agreements is the cause. For example, in 2013 the US Army settled a lawsuit with Texas-based company Apptricity, which makes software that allows the army to track their soldiers in real time. In 2004, the US Army paid US$4.5 million for a license of 500 users, while allegedly installing the software for more than 9000 users; the case was settled for US$50 million.[14][15] Major anti-piracy organizations, like the BSA, conduct software licensing audits regularly to ensure full compliance.[16]

Cara Cusumano, director of the Tribeca Film Festival, stated in April 2014: "Piracy is less about people not wanting to pay and more about just wanting the immediacy — people saying, 'I want to watch Spiderman right now' and downloading it". The statement occurred during the third year that the festival used the Internet to present its content, and the first year that it featured a showcase of content producers who work exclusively online. Cusumano further explained that downloading behavior is not merely conducted by people who simply want to obtain content for free:

I think that if companies were willing to put that material out there, moving forward, consumers will follow. It's just that they [consumers] want to consume films online and they're ready to consume films that way and we're not necessarily offering them in that way. So it's the distribution models that need to catch up. People will pay for the content.[4]

In response to Cusumano's perspective, Screen Producers Australia executive director Matt Deaner clarified the motivation of the film industry: "Distributors are usually wanting to encourage cinema-going as part of this process [monetizing through returns] and restrict the immediate access to online so as to encourage the maximum number of people to go to the cinema." Deaner further explained the matter in terms of the Australian film industry, stating: "there are currently restrictions on quantities of tax support that a film can receive unless the film has a traditional cinema release."[4]

In a study published in the Journal of Behavioural and Experimental Economics, and reported on in early May 2014, researchers from the University of Portsmouth in the UK discussed findings from examining the illegal downloading behavior of 6,000 Finnish people, aged seven to 84. The list of reasons for downloading given by the study respondents included money saving; the ability to access material not on general release, or before it was released; and assisting artists to avoid involvement with record companies and movie studios.[17]

In a public talk between Bill Gates, Warren Buffett, and Brent Schlender at the University of Washington in 1998, Bill Gates commented on piracy as a means to an end, whereby people who use Microsoft software illegally will eventually pay for it, out of familiarity, as a country's economy develops and legitimate products become more affordable to businesses and consumers:

Although about three million computers get sold every year in China, people don't pay for the software. Someday they will, though. And as long as they're going to steal it, we want them to steal ours. They'll get sort of addicted, and then we'll somehow figure out how to collect sometime in the next decade.[18]

Developing world

In Media Piracy in Emerging Economies, the first independent international comparative study of media piracy, with a focus on Brazil, India, Russia, South Africa, Mexico, Turkey and Bolivia, "high prices for media goods, low incomes, and cheap digital technologies" are identified as the chief factors that lead to the global spread of media piracy, especially in emerging markets.[19]

According to the same study, even though digital piracy inflicts additional costs on the production side of media, it also offers the main access to media goods in developing countries. The strong tradeoffs that favor digital piracy in developing economies help explain the currently lax enforcement of laws against it.[20] In China, the issue of digital piracy is not merely legal, but social, originating from the high demand for cheap and affordable pirated goods as well as the governmental connections of the businesses which produce such goods.[21]

Motivations due to censorship

There have been instances where a country's government bans a movie, resulting in the spread of pirated videos and DVDs. Romanian-born documentary maker Ilinca Calugareanu wrote a New York Times article telling the story of Irina Margareta Nistor, a narrator for state TV under Nicolae Ceauşescu's regime. A visitor from the West gave her bootlegged copies of American movies, which she dubbed for secret viewings throughout Romania. According to the article, she dubbed more than 3,000 movies and became the country's second-most famous voice after Ceauşescu, even though no one knew her name until many years later.[22]

Existing and proposed laws

Main articles: History of copyright law, Digital Millennium Copyright Act, Protect IP Act, Stop Online Piracy Act and Software copyright

Demonstration in Sweden in support of file sharing, 2006.

The Pirate Bay logo, a retaliation to the stereotypical image of piracy

Most countries extend copyright protections to authors of works. In countries with copyright legislation, enforcement of copyright is generally the responsibility of the copyright holder.[23] However, in several jurisdictions there are also criminal penalties for copyright infringement.[24]

Civil law

In the U.S., copyright infringement is sometimes confronted via lawsuits in civil court, against alleged infringers directly, or against providers of services and software that support unauthorized copying. For example, major motion-picture corporation MGM Studios filed suit against P2P file-sharing services Grokster and Streamcast for their contributory role in copyright infringement.[25] In 2005, the Supreme Court ruled in favor of MGM, holding that such services could be held liable for copyright infringement since they functioned and, indeed, willfully marketed themselves as venues for acquiring copyrighted movies. The MGM v. Grokster case did not overturn the earlier Sony decision, but rather clouded the legal waters; future designers of software capable of being used for copyright infringement were warned.[26]

In the United States, copyright term has been extended many times over[27] from the original term of 14 years with a single renewal allowance of 14 years, to the current term of the life of the author plus 70 years. If the work was produced under corporate authorship it may last 120 years after creation or 95 years after publication, whichever is less.
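The current term rules reduce to simple arithmetic. The function below is an illustrative sketch of the two cases stated above; it deliberately ignores the many transition rules that apply to older (pre-1978) works.

```python
def us_copyright_expiry(corporate, *, death_year=None,
                        creation_year=None, publication_year=None):
    """Rough end-of-term year under current U.S. rules.

    Individual authorship: life of the author plus 70 years.
    Corporate (work-for-hire): 120 years after creation or 95 years
    after publication, whichever expires first.
    Illustrative only -- ignores pre-1978 transition rules.
    """
    if corporate:
        return min(creation_year + 120, publication_year + 95)
    return death_year + 70

# An author who died in 1950: protection runs through 2020.
assert us_copyright_expiry(False, death_year=1950) == 2020

# A corporate work created in 1990 and published in 2000:
# min(1990 + 120, 2000 + 95) = min(2110, 2095) = 2095.
assert us_copyright_expiry(True, creation_year=1990,
                           publication_year=2000) == 2095
```

Note how the "whichever is less" clause means prompt publication shortens a corporate term: publication within 25 years of creation makes the 95-year clock the binding one.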

Article 50 of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) requires that signatory countries enable courts to remedy copyright infringement with injunctions and the destruction of infringing products, and award damages.[10] Some jurisdictions only allow actual, provable damages, and some, like the U.S., allow for large statutory damage awards intended to deter would-be infringers and allow for compensation in situations where actual damages are difficult to prove.

In some jurisdictions, copyright or the right to enforce it can be contractually assigned to a third party which did not have a role in producing the work. When this outsourced litigator appears to have no intention of taking any copyright infringement cases to trial, but rather only takes them just far enough through the legal system to identify and exact settlements from suspected infringers, critics commonly refer to the party as a "copyright troll." Such practices have had mixed results in the U.S.[28]

Criminal law

Main article: Criminal Copyright Law in the United States

Punishment of copyright infringement varies case-by-case across countries. Convictions may include jail time and/or severe fines for each instance of copyright infringement. In the United States, willful copyright infringement carries a maximum penalty of $150,000 per instance.[29]

Article 61 of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) requires that signatory countries establish criminal procedures and penalties in cases of "willful trademark counterfeiting or copyright piracy on a commercial scale".[10] Copyright holders have demanded that states provide criminal sanctions for all types of copyright infringement.[23]

The first criminal provision in U.S. copyright law was added in 1897, which established a misdemeanor penalty for “unlawful performances and representations of copyrighted dramatic and musicalcompositions” if the violation had been “willful and for profit.”[30] Criminal copyright infringement requires that the infringer acted “for the purpose of commercial advantage or private financial gain." 17 U.S.C. § 506(a). To establish criminal liability, the prosecutor must first show the basic elements of copyright infringement: ownership of a valid copyright, and the violation of one or more of the copyright holder’s exclusive rights.


The government must then establish that the defendant willfully infringed or, in other words, possessed the necessary mens rea. Misdemeanor infringement has a very low threshold in terms of the number of copies and the value of the infringed works.

The ACTA trade agreement, signed in May 2011 by the United States, Japan, Switzerland, and the EU, requires that its parties add criminal penalties, including incarceration and fines, for copyright and trademark infringement, and obligates the parties to actively police for infringement.[23][31][32]

United States v. LaMacchia 871 F.Supp. 535 (1994) was a case decided by the United States District Court for the District of Massachusetts which ruled that, under the copyright and cybercrime laws effective at the time, committing copyright infringement for non-commercial motives could not be prosecuted under criminal copyright law. The ruling gave rise to what became known as the "LaMacchia Loophole," wherein criminal charges of fraud or copyright infringement would be dismissed under current legal standards, so long as there was no profit motive involved.[33]

The United States No Electronic Theft Act (NET Act), a federal law passed in 1997 in response to LaMacchia, provides for criminal prosecution of individuals who engage in copyright infringement under certain circumstances, even when there is no monetary profit or commercial benefit from the infringement. Maximum penalties can be five years in prison and up to $250,000 in fines. The NET Act also raised statutory damages by 50%. The court's ruling explicitly drew attention to the shortcomings of current law that allowed people to facilitate mass copyright infringement while being immune to prosecution under the Copyright Act.

Proposed laws such as the Stop Online Piracy Act broaden the definition of "willful infringement" and introduce felony charges for unauthorized media streaming. These bills are aimed at defeating websites that carry or contain links to infringing content, but have raised concerns about domestic abuse and internet censorship.

Noncommercial file sharing

Legality of downloading

To an extent, copyright law in some countries permits downloading copyright-protected content for personal, noncommercial use. Examples include Canada[34] and European Union (EU) member states like Poland,[35] The Netherlands,[36] and Spain.[37]

The personal copying exemption in the copyright law of EU member states stems from the EU Copyright Directive of 2001, which is generally devised to allow EU members to enact laws sanctioning making copies without authorization, as long as they are for personal, noncommercial use. The Copyright Directive was not intended to legitimize file-sharing, but rather the common practice of space shifting copyright-protected content from a legally purchased CD (for example) to certain kinds of devices and media, provided rights holders are compensated and no copy protection measures are circumvented. Rights-holder compensation takes various forms, depending on the country, but is generally either a levy on "recording" devices and media, or a tax on the content itself. In some countries, such as Canada, the applicability of such laws to copying onto general-purpose storage devices like computer hard drives, portable media players, and phones, for which no levies are collected, has been the subject of debate and further efforts to reform copyright law.

In some countries, the personal copying exemption explicitly requires that the content being copied was obtained legitimately, that is, from authorized sources, not file-sharing networks. Other countries, such as the Netherlands, make no such distinction; the exemption there had been assumed, even by the government, to apply to any such copying, even from file-sharing networks. However, in April 2014, the Court of Justice of the European Union ruled that "national legislation which makes no distinction between private copies made from lawful sources and those made from counterfeited or pirated sources cannot be tolerated."[38] Thus, in the Netherlands, for example, downloading from file-sharing networks is no longer legal.

Legality of uploading

Although downloading or other private copying is sometimes permitted, public distribution, by uploading or otherwise offering to share copyright-protected content, remains illegal in most, if not all, countries. For example, in Canada, even though it is legal to download any copyrighted file as long as it is for noncommercial use, it is illegal to distribute the copyrighted files (e.g. by uploading them to a P2P network).[39]

Relaxed penalties

Some countries, like Canada and Germany, have limited the penalties for non-commercial copyright infringement. For example, statutory damages in Canada for non-commercial copyright infringement are capped at $5,000.[40] Germany has even passed a bill to limit the fine for individuals accused of sharing music and movies to $200.[41]

On September 20, 2013, the Spanish government approved new laws that took effect at the beginning of 2014. Under the approved legislation, website owners who earn "direct or indirect profit," such as via advertising links, from pirated content can be imprisoned for up to six years. However, peer-to-peer file-sharing platforms and search engines are exempt from the laws.[42]

The DMCA and anti-circumvention laws

Title I of the U.S. DMCA, the WIPO Copyright and Performances and Phonograms Treaties Implementation Act, has provisions that prevent persons from "circumvent[ing] a technological measure that effectively controls access to a work". Thus, if a distributor of copyrighted works installs some kind of software, dongle or password access device in instances of the work, any attempt to bypass such a copy protection scheme may be actionable, though the US Copyright Office is currently reviewing anticircumvention rulemaking under the DMCA. Anticircumvention exemptions that have been in place under the DMCA include those for software designed to filter websites that are generally seen to be inefficient (child safety and public library website filtering software) and for the circumvention of copy protection mechanisms that have malfunctioned, have caused the instance of the work to become inoperable, or are no longer supported by their manufacturers.[43]

Online intermediary liability

Whether Internet intermediaries are liable for copyright infringement by their users is a subject of debate and court cases in a number of countries.[44]

Definition of intermediary

Internet intermediaries were formerly understood to be internet service providers (ISPs). However, questions of liability have also emerged in relation to other Internet infrastructure intermediaries, including Internet backbone providers, cable companies and mobile communications providers.[45]

In addition, intermediaries are now also generally understood to include Internet portals, software and games providers, those providing virtual information such as interactive forums and comment facilities with or without a moderation system, aggregators of various kinds, such as news aggregators, universities, libraries and archives, web search engines, chat rooms, web blogs, mailing lists, and any website which provides access to third party content through, for example, hyperlinks, a crucial element of the World Wide Web.


Litigation and legislation concerning intermediaries

Early court cases focused on the liability of Internet service providers (ISPs) for hosting, transmitting or publishing user-supplied content that could be actioned under civil or criminal law, such as libel, defamation, or pornography.[46] As different content was considered in different legal systems, and in the absence of common definitions for "ISPs," "bulletin boards" or "online publishers," early law on online intermediaries' liability varied widely from country to country. The first laws on online intermediaries' liability were passed from the mid-1990s onwards.[citation needed]

The debate has shifted away from questions about liability for specific content, including that which may infringe copyright, towards whether online intermediaries should be generally responsible for content accessible through their services or infrastructure.[47]

The U.S. Digital Millennium Copyright Act (1998) and the European E-Commerce Directive (2000) provide online intermediaries with limited statutory immunity from liability for copyright infringement. Online intermediaries hosting content that infringes copyright are not liable, so long as they do not know about it and take action once the infringing content is brought to their attention. In U.S. law this is characterized as the "safe harbor" provisions. Under European law, the governing principles for Internet Service Providers are "mere conduit", meaning that they are neutral 'pipes' with no knowledge of what they are carrying, and 'no obligation to monitor', meaning that they cannot be given a general mandate by governments to monitor content. These two principles are a barrier for certain forms of online copyright enforcement, and they were the reason behind an attempt to amend the European Telecoms Package in 2009 to support new measures against copyright infringement.[48]

Peer-to-peer issues


Peer-to-peer file sharing intermediaries have been denied access to safe harbor provisions in relation to copyright infringement. Legal action against such intermediaries, such as Napster, is generally brought in relation to principles of secondary liability for copyright infringement, such as contributory liability and vicarious liability.[49]


The BitTorrent protocol: In this animation, the colored bars beneath all of the 7 clients in the upper region represent the file, with each color representing an individual piece of the file. After the initial pieces transfer from the seed (large system at the bottom), the pieces are individually transferred from client to client. The original seeder only needs to send out one copy of the file for all the clients to receive a copy.
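The piece-exchange behavior the caption describes can be sketched as a toy simulation. This is a hypothetical model for illustration only: all names and parameters are invented, and real BitTorrent clients add rarest-first piece selection, choking, and tit-for-tat incentives, none of which are modelled here.

```python
import random

def simulate_swarm(num_pieces=8, num_peers=7, rng_seed=42):
    """Toy swarm: one seed holding every piece, plus peers starting empty.

    Each round, every incomplete peer fetches one missing piece from any
    participant that already holds it (the seed or another peer).
    Returns the number of rounds until all peers are complete and how
    many pieces the original seeder itself served.
    """
    random.seed(rng_seed)
    peers = [set() for _ in range(num_peers)]
    seed_uploads = 0  # pieces served by the original seeder
    rounds = 0
    while any(len(p) < num_pieces for p in peers):
        rounds += 1
        for i, p in enumerate(peers):
            missing = [x for x in range(num_pieces) if x not in p]
            if not missing:
                continue
            piece = random.choice(missing)
            # every current holder of the piece is a potential source;
            # -1 stands for the seed
            sources = [-1] + [j for j, q in enumerate(peers)
                              if j != i and piece in q]
            if random.choice(sources) == -1:
                seed_uploads += 1
            p.add(piece)
    return rounds, seed_uploads
```

Because peers copy pieces from one another once the first copy of each piece has left the seed, the seed typically uploads far fewer pieces than the total the swarm receives, which is the point the caption makes.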

These types of intermediaries do not host or transmit infringing content themselves, but may be regarded in some courts as encouraging, enabling or facilitating infringement by users. These intermediaries may include the author, publishers and marketers of peer-to-peer networking software, and the websites that allow users to download such software. In the case of the BitTorrent protocol, intermediaries may include the torrent tracker and any websites or search engines which facilitate access to torrent files. Torrent files do not contain copyrighted content, but they may make reference to files that do, and they may point to trackers which coordinate the sharing of those files. Some torrent indexing and search sites, such as The Pirate Bay, now encourage the use of magnet links, instead of direct links to torrent files, creating another layer of indirection; using such links, torrent files are obtained from other peers, rather than from a particular website.
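A magnet link is just a URI whose query string carries a content identifier (the info-hash) plus optional metadata, which is why a site serving them never has to host the torrent file itself. A minimal parser can be sketched with the standard library; this is a simplified illustration that handles only the common `xt`, `dn`, and `tr` fields, not a full magnet-URI implementation.

```python
from urllib.parse import urlparse, parse_qs

def parse_magnet(uri):
    """Extract the info-hash, display name, and trackers from a
    BitTorrent magnet link (simplified sketch)."""
    parsed = urlparse(uri)
    if parsed.scheme != 'magnet':
        raise ValueError('not a magnet URI')
    params = parse_qs(parsed.query)
    xt = params.get('xt', [''])[0]  # e.g. 'urn:btih:<hex info-hash>'
    info_hash = xt.split(':')[-1] if xt.startswith('urn:btih:') else None
    return {
        'info_hash': info_hash,                # identifies the content
        'name': params.get('dn', [None])[0],   # display name (optional)
        'trackers': params.get('tr', []),      # tracker URLs (optional)
    }
```

The client then uses the info-hash to locate peers (via trackers or the DHT) and fetches the torrent metadata from those peers, rather than from the indexing website.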

Since the late 1990s, copyright holders have taken legal actions against a number of peer-to-peer intermediaries, such as pir, Grokster, eMule, SoulSeek, BitTorrent and Limewire, and case law on the liability of Internet service providers (ISPs) in relation to copyright infringement has emerged primarily in relation to these cases.[50]

Nevertheless, whether and to what degree any of these types of intermediaries have secondary liability is the subject of ongoing litigation. The decentralised structure of peer-to-peer networks, in particular, does not sit easily with existing laws on online intermediaries' liability. The BitTorrent protocol established an entirely decentralised network architecture in order to distribute large files effectively. Recent developments in peer-to-peer technology towards more complex network configurations are said to have been driven by a desire to avoid liability as intermediaries under existing laws.[51]

Limitations

Copyright law does not grant authors and publishers absolute control over the use of their work. Only certain types of works and certain kinds of uses are protected;[52] only unauthorized uses of protected works can be said to be infringing.

Non-infringing uses

Article 10 of the Berne Convention mandates that national laws provide for limitations to copyright, so that copyright protection does not extend to certain kinds of uses that fall under what the treaty calls "fair practice," including but not limited to minimal quotations used in journalism and education.[53] The laws implementing these limitations and exceptions for uses that would otherwise be infringing broadly fall into the categories of either fair use or fair dealing. In common law systems, these fair practice statutes typically enshrine principles underlying many earlier judicial precedents, and are considered essential to freedom of speech.[54]


Another example is the practice of compulsory licensing, under which the law forbids copyright owners from denying a license for certain uses of certain kinds of works, such as compilations and live performances of music. Compulsory licensing laws generally say that for certain uses of certain works, no infringement occurs as long as a royalty, at a rate determined by law rather than private negotiation, is paid to the copyright owner or representative copyright collective. Some fair dealing laws, such as Canada's, include similar royalty requirements.[55]

In Europe, the copyright infringement case Public Relations Consultants Association Ltd v Newspaper Licensing Agency Ltd had two prongs; one concerned whether a news aggregator service infringed the copyright of the news generators; the other concerned whether the temporary web cache created by the web browser of a consumer of the aggregator's service also infringed the copyright of the news generators.[56] The first prong was decided in favor of the news generators; in June 2014 the second prong was decided by the Court of Justice of the European Union (CJEU), which ruled that the temporary web cache of consumers of the aggregator did not infringe the copyright of the news generators.[56][57][58]

Non-infringing types of works

In order to qualify for protection, a work must be an expression with a degree of originality, and it must be in a fixed medium, such as written down on paper or recorded digitally.[59][60] The idea itself is not protected. That is, a copy of someone else's original idea is not infringing unless it copies that person's unique, tangible expression of the idea. Some of these limitations, especially regarding what qualifies as original, are embodied only in case law (judicial precedent), rather than in statutes.

In the U.S., for example, copyright case law contains a substantial similarity requirement to determine whether the work falls under the fair use clause. Likewise, courts may require computer software to pass an Abstraction-Filtration-Comparison test (AFC Test)[61][62] to determine if it is too abstract to qualify for protection, or too dissimilar to an original work to be considered infringing. Software-related case law has also clarified that the amount of R&D, effort and expense put into a work's creation does not affect copyright protection.[63]

Evaluation of alleged copyright infringement in a court of law may be substantial; the time and costs required to apply these tests vary based on the size and complexity of the copyrighted material. Furthermore, there is no standard or universally accepted test; some courts have rejected the AFC Test, for example, in favor of narrower criteria.

The POSAR test,[64] a recently devised forensic procedure for establishing software copyright infringement cases, is an extension or enhancement of the AFC test. With its added features and additional facilities, POSAR offers more to the legal and judicial domains than the AFC test. These additional features and facilities make the test more sensitive to the technical and legal requirements of software copyright infringement.

Preventative measures

The BSA outlined four strategies that governments can adopt to reduce software piracy rates in its 2011 piracy study results:

"Increase public education and raise awareness about software piracy and IP rights in cooperation with industry and law enforcement."

"Modernize protections for software and other copyrighted materials to keep pace with new innovations such as cloud computing and the proliferation of networked mobile devices."

"Strengthen enforcement of IP laws with dedicated resources, including specialized enforcement units, training for law enforcement and judiciary officials, improved cross-border cooperation among law enforcement agencies, and fulfillment of obligations under the World Trade Organization's Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS)."

"Lead by example by using only fully licensed software, implementing software asset management (SAM) programs, and promoting the use of legal software in state-owned enterprises, and among all contractors and suppliers."[65]

Legal

Corporations and legislatures take different types of preventative measures to deter copyright infringement, with much of the focus since the early 1990s being on preventing or reducing digital methods of infringement. Strategies include education, civil and criminal legislation, and international agreements,[66] as well as publicizing anti-piracy litigation successes and imposing forms of digital media copy protection, such as controversial digital rights management (DRM) technology and anti-circumvention laws, which limit the amount of control consumers have over the use of products and content they have purchased.

Legislatures have reduced infringement by narrowing the scope of what is considered infringing. Aside from upholding international copyright treaty obligations to provide general limitations and exceptions,[53] nations have enacted compulsory licensing laws applying specifically to digital works and uses. For example, in the U.S., the DMCA, an implementation of the 1996 WIPO Copyright Treaty, considers digital transmissions of audio recordings to be licensed as long as a designated copyright collective's royalty and reporting requirements are met.[67] The DMCA also provides safe harbor for digital service providers whose users are suspected of copyright infringement, thus reducing the likelihood that the providers themselves will be considered directly infringing.[68]


Some copyright owners voluntarily reduce the scope of what is considered infringement by employing relatively permissive, "open" licensing strategies: rather than privately negotiating license terms with individual users who must first seek out the copyright owner and ask for permission, the copyright owner publishes and distributes the work with a prepared license that anyone can use, as long as they adhere to certain conditions. This has the effect of reducing infringement, and the burden on courts, by simply permitting certain types of uses under terms that the copyright owner considers reasonable. Examples include free software licenses, like the GNU General Public License (GPL), and the Creative Commons licenses, which are predominantly applied to visual and literary works.[69]

Protected distribution

To prevent piracy of films, the standard drill of film distribution is to have a movie first released through movie theaters (the theatrical window), for an average of approximately 16 and a half weeks,[70] before being released to Blu-ray and DVD (entering its video window). During the theatrical window, digital versions of films are often transported in data storage devices by couriers rather than by data transmission.[71] The data can be encrypted, with the key made to work only at specific times in order to prevent leakage between screens.[71] Coded Anti-Piracy marks can be added to films to identify the source of illegal copies and shut them down. As a result of these measures, the only versions of films available for piracy during the theatrical window are usually "cams" made by video recordings of the movie screens, which are of inferior quality compared to the original film version.
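The time-limited keying described above can be illustrated with a small sketch. This is a hypothetical derivation invented for illustration; actual digital-cinema deployments use standardized key-delivery messages defined by the DCI specifications rather than anything like this.

```python
import hashlib
import hmac
from datetime import datetime

def screening_key(master_key: bytes, screen_id: str,
                  request_time: datetime,
                  window_start: datetime, window_end: datetime) -> bytes:
    """Issue a per-screen content key only inside the agreed screening
    window (simplified sketch, not a real digital-cinema scheme)."""
    if not (window_start <= request_time <= window_end):
        # outside the window the key is simply never released
        raise PermissionError("key requested outside the screening window")
    # binding the key to the screen identifier means a key leaked from
    # one venue is useless at another
    return hmac.new(master_key, screen_id.encode(), hashlib.sha256).digest()
```

The design choice being illustrated is that the courier-delivered payload stays encrypted at rest; only a key scoped to one screen and one time window ever exists in the clear, which limits what a leak can expose.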

Economic impact of copyright infringement

Organizations disagree on the scope and magnitude of copyright infringement's economic effects and public support for the copyright regime.


In relation to computer software, the Business Software Alliance (BSA) claimed in its 2011 piracy study: "Public opinion continues to support intellectual property (IP) rights: Seven PC users in 10 support paying innovators to promote more technological advances."[65]

Following consultation with experts on copyright infringement, the United States Government Accountability Office (GAO) clarified in 2010 that "estimating the economic impact of IP [intellectual property] infringements is extremely difficult, and assumptions must be used due to the absence of data," while "it is difficult, if not impossible, to quantify the net effect of counterfeiting and piracy on the economy as a whole."[72]

The U.S. GAO's 2010 findings regarding the great difficulty of accurately gauging the economic impact of copyright infringement were reinforced within the same report by the body's research into three commonly cited estimates that had previously been provided to U.S. agencies. The GAO report explained that the sources, a Federal Bureau of Investigation (FBI) estimate, a Customs and Border Protection (CBP) press release and a Motor and Equipment Manufacturers Association estimate, "cannot be substantiated or traced back to an underlying data source or methodology."[72]

Deaner explained the importance of rewarding the "investment risk" taken by motion picture studios in 2014:

Usually movies are hot because a distributor has spent hundreds of thousands of dollars promoting the product in print and TV and other forms of advertising. The major Hollywood studios spend millions on this process with marketing costs rivalling the costs of production. They are attempting then to monetise through returns that can justify the investment in both the costs of promotion and production.[4]


Motion picture industry estimates

In 2008 the Motion Picture Association of America (MPAA) reported that its six major member companies lost US$6.1 billion to piracy.[73] A 2009 Los Angeles Daily News article then cited a loss figure of "roughly $20 billion a year" for Hollywood studios.[74]

In an early May 2014 Guardian article, an annual loss figure of US$20.5 billion was cited for the movie industry. The article based this figure on the results of a University of Portsmouth study that involved only Finnish participants, aged between seven and 84. The researchers, who worked with 6,000 participants, stated: "Movie pirates are also more likely to cut down their piracy if they feel they are harming the industry compared with people who illegally download music".[17]

Software industry estimates

According to a 2007 BSA and International Data Corporation (IDC) study, the five countries with the highest rates of software piracy were: 1. Armenia (93%); 2. Bangladesh (92%); 3. Azerbaijan (92%); 4. Moldova (92%); and 5. Zimbabwe (91%). According to the study's results, the five countries with the lowest piracy rates were: 1. U.S. (20%); 2. Luxembourg (21%); 3. New Zealand (22%); 4. Japan (23%); and 5. Austria (25%). The 2007 report showed that the Asia-Pacific region was associated with the highest amount of loss, in terms of U.S. dollars, with $14,090,000, followed by the European Union, with a loss of $12,383,000; the lowest amount of U.S. dollars was lost in the Middle East/Africa region, where $2,446,000 was documented.[75]

In its 2011 report, conducted in partnership with IDC and Ipsos Public Affairs, the BSA stated: "Over half of the world's personal computer users, 57 percent, admit to pirating software." The ninth annual "BSA Global Software Piracy Study" claims that the "commercial value of this shadow market of pirated software" was worth US$63.4 billion in 2011, with the highest commercial value of pirated PC software in the U.S. during that time period (US$9,773,000). According to the 2011 study, Zimbabwe was the nation with the highest piracy rate, at 92%, while the lowest piracy rate was present in the U.S., at 19%.[65]

The GAO noted in 2010 that the BSA's research up until that year defined "piracy as the difference between total installed software and legitimate software sold, and its scope involved only packaged physical software."[72]

Music industry estimates


In 2007, the Institute for Policy Innovation (IPI) reported that music piracy took $12.5 billion from the U.S. economy. According to the study, musicians and those involved in the recording industry are not the only ones who experience losses attributed to music piracy. Retailers have lost over a billion dollars, while piracy has resulted in 46,000 fewer production-level jobs and almost 25,000 fewer retail jobs. The U.S. government was also reported to suffer from music piracy, losing $422 million in tax revenue.[76]

A 2013 report released by the European Commission Joint Research Centre suggests that illegal music downloads have almost no effect on the number of legal music downloads. The study analyzed the behavior of 16,000 European music consumers and found that although music piracy negatively affects offline music sales, illegal music downloads had a positive effect on legal music purchases. Without illegal downloading, legal purchases were about two percent lower.[77]

The study has received criticism, particularly from the International Federation of the Phonographic Industry (IFPI), which believes the study is flawed and misleading. One argument against the research is that many music consumers only download music illegally. The IFPI also points out that music piracy affects not only online music sales but multiple facets of the music industry, which are not addressed in the study.[78]

Criticism of industry estimates

The methodology of studies utilized by industry spokespeople has been heavily criticized. Inflated claims for damages and allegations of economic harm are common in copyright disputes.[79][80] Some studies and figures, including those cited by the MPAA and RIAA with regard to the economic effects of film and music downloads, have been widely disputed as based on questionable assumptions which resulted in statistically unsound numbers.[81][82]

In one extreme example, the RIAA claimed damages against LimeWire totaling $75 trillion – more than the global GDP – and "respectfully" disagreed with the judge's ruling that such claims were "absurd".[83]

However, this $75 trillion figure is obtained through one specific interpretation of copyright law that would count each song downloaded as an infringement of copyright. After the conclusion of the case, LimeWire agreed to pay $105 million to RIAA.[84]

Economic impact of infringement in emerging markets

The 2011 Business Software Alliance Piracy Study Standard estimated the total commercial value of pirated software at $59 billion in 2010, with emerging markets accounting for $31.9 billion, over half of the total. Furthermore, mature markets for the first time received fewer PC shipments than emerging economies in 2010, making emerging markets responsible for more than half of all computers in use worldwide. In addition, with software piracy rates of 68 percent, compared to 24 percent in mature markets, emerging markets account for the majority of the global increase in the commercial value of pirated software. China continues to have the highest commercial value of pirated software among developing countries, at $8.9 billion, and is second in the world behind the US, at $9.7 billion in 2011.[85][86] In 2011, the Business Software Alliance announced that 83 percent of software deployed on PCs in Africa had been pirated (excluding South Africa).[87]

Some countries distinguish corporate piracy from private use, which is tolerated as a welfare service.[citation needed] This is the leading reason developing countries refuse to accept or respect copyright laws. Traian Băsescu, the president of Romania, stated that "piracy helped the young generation discover computers. It set off the development of the IT industry in Romania."[88]

Plaintext

From Wikipedia, the free encyclopedia

This article is about cryptography. For the computing term meaning the storage of textual material that is (largely) unformatted, see plain text.


In cryptography, plaintext is information a sender wishes to transmit to a receiver. Cleartext is often used as a synonym. Plaintext is used in reference to the operation of cryptographic algorithms, usually encryption algorithms, and is the input upon which they operate. Cleartext, by contrast, refers to data that is transmitted or stored unencrypted (that is, 'in the clear').


Overview

Before the computer era, plaintext most commonly meant message text in the language of the communicating parties. Since computers became commonly available, the definition has expanded to include:

messages (for example, email messages)

document content such as word processor and spreadsheet files

audio and video files, digital photographs and any other multimedia

files containing other files such as Zip files or ISO images

ATM, credit card and other banking information

sensor data

any other data that a person wishes to keep private

Much of this data is not directly meaningful to humans, being already transformed into computer manipulable forms. While the original definition implied that the message could be read by a human being, the modern definition emphasizes that a person using a computer could easily interpret the data.

Any information which the communicating parties wish to conceal from others can now be treated, and referred to, as plaintext. Thus, in a significant sense, plaintext is the 'normal' representation of data before any action has been taken to conceal, compress, or 'digest' it. It need not represent text, and even if it does, the text may not be "plain".

Plaintext is used as input to an encryption algorithm; the output is usually termed ciphertext, particularly when the algorithm is a cipher. Codetext is less often used, and almost always only when the algorithm involved is actually a code. In some systems, however, multiple layers of encryption are used, in which case the output of one encryption algorithm becomes the plaintext input for the next.
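The layering described here can be shown with a deliberately insecure toy cipher. Repeating-key XOR is chosen purely for illustration: the message and keys are invented, and nothing about this sketch should be taken as a secure construction.

```python
def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR 'cipher' -- for illustration only, not secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"attack at dawn"              # the original plaintext
inner = xor_layer(message, b"key-one")   # first layer: plaintext -> ciphertext
outer = xor_layer(inner, b"key-two")     # the inner ciphertext serves as the
                                         # plaintext input of the second layer
# decryption peels the layers off in reverse order
recovered = xor_layer(xor_layer(outer, b"key-two"), b"key-one")
assert recovered == message
```

The point of the sketch is the role reversal: `inner` is ciphertext relative to the first layer but plaintext relative to the second, exactly as the paragraph describes.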

Secure handling of plaintext

In a cryptosystem, weaknesses can be introduced through insecure handling of plaintext, allowing an attacker to bypass the cryptography altogether. Plaintext is vulnerable in use and in storage, whether in electronic or paper format. Physical security deals with methods of securing information and its storage media from local, physical attacks. For instance, an attacker might enter a poorly secured building and attempt to open locked desk drawers or safes. An attacker can also engage in dumpster diving, and may be able to reconstruct shredded information if it is sufficiently valuable to be worth the effort. One countermeasure is to burn or thoroughly crosscut-shred discarded printed plaintexts or storage media; the NSA is infamous for its disposal security precautions.

If plaintext is stored in a computer file (and the situation of automatically made backup files generated during program execution must be included here, even if invisible to the user), the storage media along with the entire computer and its components must be secure. Sensitive data is sometimes processed on computers whose mass storage is removable, in which case physical security of the removed disk is separately vital. In the case of securing a computer, useful (as opposed to hand-waving) security must be physical (e.g., against burglary, brazen removal under cover of supposed repair, installation of covert monitoring devices, etc.) as well as virtual (e.g., operating system modification, illicit network access, Trojan programs, ...). The wide availability of keydrives, which can plug into most modern computers and store large quantities of data, poses another severe security headache. A spy (perhaps posing as a cleaning person) could easily conceal one and even swallow it if necessary.

Discarded computers, disk drives and media are also a potential source of plaintexts. Most operating systems do not actually erase anything; they simply mark the disk space occupied by a deleted file as 'available for use' and remove its entry from the file system directory. The information in a file deleted in this way remains fully present until overwritten at some later time when the operating system reuses the disk space. With even low-end computers commonly sold with many gigabytes of disk space, and capacities rising monthly, this 'later time' may be months away, or never. Even overwriting the portion of a disk surface occupied by a deleted file is insufficient in many cases. Peter Gutmann of the University of Auckland wrote a celebrated 1996 paper on the recovery of overwritten information from magnetic disks; areal storage densities have gotten much higher since then, so this sort of recovery is likely to be more difficult than it was when Gutmann wrote.

Also, independently, modern hard drives automatically remap sectors that are starting to fail; those sectors no longer in use will contain information that is entirely invisible to the file system (and all software which uses it for access to disk data), but is nonetheless still present on the physical drive platter. It may, of course, be sensitive plaintext. Some government agencies (e.g., the NSA) require that all disk drives be physically pulverized when they are discarded, and in some cases, chemically treated with corrosives before or after. This practice is not widespread outside of the government, however. For example, Garfinkel and Shelat (2003) analyzed 158 second-hand hard drives acquired at garage sales and the like and found that less than 10% had been sufficiently sanitized. A wide variety of personal and confidential information was found readable from the others. See data remanence.


Laptop computers are a special problem. The US State Department, the British Secret Service, and the US Department of Defense have all had laptops containing secret information, some perhaps in plaintext form, 'vanish' in recent years. Announcements of similar losses are becoming a common item in news reports. Disk encryption techniques can provide protection against such loss or theft — if properly chosen and used.

On occasion, even when the data on the host systems is itself encrypted, the media used to transfer data between such systems is nevertheless plaintext due to poorly designed data policy. An incident in October 2007, in which HM Revenue and Customs lost CDs containing the records of no fewer than 25 million child benefit recipients in the United Kingdom — the data apparently being entirely unencrypted — is a case in point.

Modern cryptographic systems are designed to resist known-plaintext or even chosen-plaintext attacks, and so may not be entirely compromised when plaintext is lost or stolen. Older systems used techniques such as padding and Russian copulation to obscure information in plaintext that could be easily guessed, and to resist the effects of loss of plaintext on the security of the cryptosystem.

Web browser saved password security controversy

Several popular web browsers that offer to store a user's passwords do so in plaintext form. Even though most of them initially hide the saved passwords, it is possible for anyone to view all passwords in cleartext with a few clicks of the mouse, by going into the browser's security settings options menus. In 2010, it emerged that this is the case with Firefox (still the case as of end-2014), and in August 2013 it emerged that Google Chrome does so as well.[1] When a software developer raised the issue with the Chrome security team,[2] a company representative responded that Google would not change the feature, and justified the refusal by saying that hiding the passwords would "provide users with a false sense of security" and "that's just not how we approach security on Chrome".[3]

Cipher

From Wikipedia, the free encyclopedia


Edward Larsson's rune cipher, resembling that found on the Kensington Runestone. It also includes runically unrelated blackletter writing and pigpen cipher.

In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption: a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In non-technical usage, a 'cipher' is the same thing as a 'code'; however, the concepts are distinct in cryptography. In classical cryptography, ciphers were distinguished from codes.

Codes generally substitute strings of characters of a different length in the output, while ciphers generally output the same number of characters as are input. There are exceptions, and some cipher systems may use slightly more, or fewer, characters in the output than were input.

Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher, the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
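The codebook idea can be sketched in a few lines. The single entry below is the article's own "UQJHSE" example; a real codebook would hold thousands of phrase-to-group pairs, and everything else here (function names, lowercasing) is our own illustrative choice.

```python
# A toy codebook: whole phrases map to arbitrary code groups.
CODEBOOK = {"proceed to the following coordinates": "UQJHSE"}
DECODE = {v: k for k, v in CODEBOOK.items()}  # inverted book for decoding

def encode(plaintext):
    """Look the whole phrase up in the codebook."""
    return CODEBOOK[plaintext.lower()]

def decode(codetext):
    """Reverse lookup: code group back to the phrase."""
    return DECODE[codetext]
```

Note the contrast with a cipher: the code operates on meaning (a whole phrase becomes one group), so an unlisted phrase simply cannot be encoded.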


The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext.
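How a key varies a fixed algorithm can be seen in even the simplest keyed cipher, a Caesar-style shift (our choice of illustration; this passage does not name a particular cipher). The procedure never changes, but the key selects which of 26 substitution alphabets is used.

```python
def caesar(text, key, decrypt=False):
    """Shift each letter by `key` positions.

    The algorithm is fixed; the key changes its detailed operation.
    Toy cipher for illustration only, trivially breakable.
    """
    shift = -key if decrypt else key
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)
```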

Most modern ciphers can be categorized in several ways:

By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers).

By whether the same key is used for both encryption and decryption (symmetric key algorithms), or a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is asymmetric, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property, and one of the keys may be made public without loss of confidentiality.
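The defining symmetric-key property, the same key both enciphers and deciphers, can be demonstrated with a toy keystream construction (entirely our own sketch, not a real cipher, and not secure): a stream of bytes is derived from the key and XORed with the data, so applying the same operation twice with the same key recovers the plaintext.

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric transform: XOR data with a key-derived stream.

    Applying it twice with the same key returns the original data,
    which is the symmetric-key property. Illustration only.
    """
    stream = b""
    counter = 0
    while len(stream) < len(data):
        # Derive successive keystream blocks from key + counter.
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))
```

An asymmetric algorithm such as RSA has no such symmetry: the enciphering and deciphering keys are mathematically related but distinct, which a sketch this short cannot meaningfully reproduce.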

Contents

1 Etymology

2 Versus codes

3 Types

3.1 Historical

3.2 Modern

4 Key size and vulnerability


5 See also

6 Notes

7 References

8 External links

Etymology

"Cipher" is alternatively spelled "cypher"; similarly "ciphertext" and "cyphertext", and so forth.

The word "cipher" in former times meant "zero" and had the same origin: Middle French cifre and Medieval Latin cifra, from the Arabic صفر ṣifr = zero (see Zero—Etymology). "Cipher" was later used for any decimal digit, even any number. There are many theories about how the word "cipher" may have come to mean "encoding":

It was first introduced by Abū ʿAbdallāh Muḥammad ibn Mūsā al-Khwārizmī.

Encoding often involved numbers.

The Roman number system was very cumbersome because there was no concept of zero (or empty space). The concept of zero (which was also called "cipher"), now common knowledge, was alien to medieval Europe, so confusing and ambiguous to common Europeans that in arguments people would say "talk clearly and not so far-fetched as a cipher". Cipher came to mean concealment of clear messages, or encryption.

The French formed the word "chiffre" and adopted the Italian word "zero".

The English used "zero" for "0", and "cipher" from the word "ciphering" as a means of computing.

The Germans used the words "Ziffer" (digit) and "Chiffre".


The Dutch still use the word "cijfer" to refer to a numerical digit.

The Serbians use the word "cifra", which refers to a digit, or in some cases, any number. Besides "cifra", they use the word "broj" for a number.

The Italians and the Spanish also use the word "cifra" to refer to a number.

The Swedes use the word "siffra", which refers to a digit, and "nummer" to refer to a combination of "siffror".

Ibrahim Al-Kadi concluded that the Arabic word sifr, for the digit zero, developed into the European technical term for encryption.[1]

As the decimal zero and its new mathematics spread from the Arabic world to Europe in the Middle Ages, words derived from ṣifr and zephyrus came to refer to calculation, as well as to privileged knowledge and secret codes. According to Ifrah, "in thirteenth-century Paris, a 'worthless fellow' was called a '... cifre en algorisme', i.e., an 'arithmetical nothing'."[2] Cipher was the European pronunciation of sifr, and cipher came to mean a message or communication not easily understood.[3]

Versus codes

Main article: Code (cryptography)

In non-technical usage, a "(secret) code" typically means a "cipher". Within technical discussions, however, the words "code" and "cipher" refer to two different concepts. Codes work at the level of meaning—that is, words or phrases are converted into something else and this chunking generally shortens the message.

An example of this is the telegraph code, which was used to shorten long telegraph messages resulting from entering into commercial contracts using exchanges of telegrams.


Another example is given by whole-word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way Japanese use kanji (Chinese) characters to supplement their language. For example, "The quick brown fox jumps over the lazy dog" becomes "The quick brown 狐 jumps 過 the lazy 犬".

Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are also used synonymously with substitution and transposition.

Historically, cryptography was split into a dichotomy of codes and ciphers; and coding had its own terminology, analogous to that for ciphers: "encoding, codetext, decoding" and so on.

However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.

Types

There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.

Historical

Historical pen-and-paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP", where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs.

Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère), which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF", where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen-and-paper encryption are easy to crack.[4] It is possible to create a secure pen-and-paper cipher based on a one-time pad, but the usual disadvantages of one-time pads apply.
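A Vigenère sketch shows the defining polyalphabetic behavior: the shift cycles through the key, so repeated plaintext letters encrypt differently. The key behind the article's "PLSX TWF" example is not given, so this sketch uses its own key, "KEY".

```python
def vigenere(text, key, decrypt=False):
    """Polyalphabetic substitution: the shift changes for every
    letter, cycling through the key. Illustration only."""
    out, j = [], 0
    for ch in text:
        if ch.isalpha():
            shift = ord(key[j % len(key)].upper()) - ord("A")
            if decrypt:
                shift = -shift
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
            j += 1  # only letters consume key characters
        else:
            out.append(ch)
    return "".join(out)
```

With key "KEY", "GOOD DOG" becomes "QSMN HMQ": note the adjacent O's encrypt to different letters (S and M), which is what defeats single-alphabet frequency analysis.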

During the early twentieth century, electro-mechanical machines were invented to perform encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods.

Modern

Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.

By type of key used, ciphers are divided into:


symmetric key algorithms (Private-key cryptography), where the same key is used for encryption and decryption, and

asymmetric key algorithms (Public-key cryptography), where two different keys are used for encryption and decryption.

In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The Feistel cipher uses a combination of substitution and transposition techniques. Most block cipher algorithms are based on this structure. In an asymmetric key algorithm (e.g., RSA), there are two separate keys: a public key is published and enables any sender to perform encryption, while a private key is kept secret by the receiver and enables only him to perform correct decryption.

Ciphers can be distinguished into two types by the type of input data:

block ciphers, which encrypt blocks of data of fixed size, and

stream ciphers, which encrypt continuous streams of data.

Key size and vulnerability

In a pure mathematical attack (i.e., lacking any other information to help break a cipher), two factors above all count:

Computational power available, i.e., the computing power which can be brought to bear on the problem. It is important to note that the average performance/capacity of a single computer is not the only factor to consider. An adversary can use multiple computers at once, for instance, to increase the speed of an exhaustive search for a key (i.e., a "brute force" attack) substantially.

Key size, i.e., the size of the key used to encrypt a message. As the key size increases, so does the complexity of exhaustive search, to the point where it becomes impracticable to crack the encryption directly.
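The exponential growth of exhaustive search is easy to make concrete with back-of-envelope arithmetic. The tester rate below (10^12 keys per second) is an assumption chosen for illustration, not a figure from the text; the point is only that each added key bit doubles the expected work.

```python
def brute_force_years(key_bits, keys_per_second=1e12):
    """Expected time to search half the keyspace, in years.

    An attacker finds the key, on average, halfway through an
    exhaustive search. The rate is a hypothetical assumption.
    """
    seconds = (2 ** key_bits / 2) / keys_per_second
    return seconds / (60 * 60 * 24 * 365)
```

At this assumed rate a 56-bit key (DES-sized) falls in hours, while a 128-bit key takes on the order of 10^18 years, which is the sense in which direct cracking becomes impracticable.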

Since the desired effect is computational difficulty, in theory one would choose an algorithm and desired difficulty level, and thus decide the key length accordingly.

An example of this process can be found at Key Length, which uses multiple reports to suggest that a symmetric cipher with 128-bit keys, an asymmetric cipher with 3072-bit keys, and an elliptic curve cipher with 512-bit keys all have similar difficulty at present.

Claude Shannon proved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext and used only once: the one-time pad.
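Shannon's condition can be sketched directly: a one-time pad XORs the message with a truly random key of exactly the message's length, and that key is never reused. (The helper names below are our own; the construction itself is the standard one-time pad.)

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """One-time pad: a fresh random key as long as the message.

    With a truly random, never-reused key, the ciphertext reveals
    nothing about the plaintext, per Shannon's result.
    """
    key = secrets.token_bytes(len(plaintext))  # key length == message length
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR with the same key inverts the encryption."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

The "usual disadvantages" mentioned earlier are visible here: the key is as large as the message and must be delivered securely, which is exactly the problem the cipher was meant to solve.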

Algorithm

From Wikipedia, the free encyclopedia


Flow chart of an algorithm (Euclid's algorithm) for calculating the greatest common divisor (g.c.d.) of two numbers a and b in locations named A and B. The algorithm proceeds by successive subtractions in two loops: IF the test B ≥ A yields "yes" (or true) (more accurately, the number b in location B is greater than or equal to the number a in location A) THEN the algorithm specifies B ← B − A (meaning the number b − a replaces the old b). Similarly, IF A > B, THEN A ← A − B. The process terminates when (the contents of) B is 0, yielding the g.c.d. in A. (Algorithm derived from Scott 2009:13; symbols and drawing style from Tausworthe 1977.)

In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/ AL-gə-ri-dhəm) is a self-contained step-by-step set of operations to be performed. Algorithms exist that perform calculation, data processing, and automated reasoning.

An algorithm is an effective method that can be expressed within a finite amount of space and time[1] and in a well-defined formal language[2] for calculating a function.[3] Starting from an initial state and initial input (perhaps empty),[4] the instructions describe a computation that, when executed, proceeds through a finite[5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[7]

The concept of algorithm has existed for centuries; however, a partial formalization of what would become the modern algorithm began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability"[8] or "effective method";[9] those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's "Formulation 1" of 1936, and Alan Turing's Turing machines of 1936–7 and 1939. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem.[10]

Contents


1 Word origin

2 Informal definition

3 Formalization

3.1 Expressing algorithms

4 Implementation

5 Computer algorithms

6 Examples

6.1 Algorithm example

6.2 Euclid’s algorithm

6.2.1 Computer language for Euclid's algorithm

6.2.2 An inelegant program for Euclid's algorithm

6.2.3 An elegant program for Euclid's algorithm

6.3 Testing the Euclid algorithms

6.4 Measuring and improving the Euclid algorithms

7 Algorithmic analysis

7.1 Formal versus empirical

7.2 Execution efficiency

8 Classification

8.1 By implementation

8.2 By design paradigm

8.3 Optimization problems

8.4 By field of study

8.5 By complexity

9 Continuous algorithms

10 Legal issues

11 Etymology


12 History: Development of the notion of "algorithm"

12.1 Ancient Greece

12.2 Origin

12.3 Discrete and distinguishable symbols

12.4 Manipulation of symbols as "place holders" for numbers:algebra

12.5 Mechanical contrivances with discrete states

12.6 Mathematics during the 19th century up to the mid-20th century

12.7 Emil Post (1936) and Alan Turing (1936–37, 1939)

12.8 J. B. Rosser (1939) and S. C. Kleene (1943)

12.9 History after 1950

13 See also

14 Notes

15 References

15.1 Secondary references

16 Further reading

17 External links

Word origin

'Algorithm' stems from the name of a Latin translation of a book written by al-Khwārizmī, a Persian[11][12] mathematician, astronomer and geographer. Al-Khwarizmi wrote a book titled On the Calculation with Hindu Numerals in about 825 AD, and was principally responsible for spreading the Indian system of numeration throughout the Middle East and Europe. It was translated into Latin as Algoritmi de numero Indorum (in English, "Al-Khwarizmi on the Hindu Art of Reckoning"). The term "Algoritmi" in the title of the book led to the term "algorithm".[13]

Informal definition

For a detailed presentation of the various points of view on the definition of "algorithm", see Algorithm characterizations.

An informal definition could be "a set of rules that precisely defines a sequence of operations",[14] which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually.[15]

A prototypical example of an algorithm is Euclid's algorithm to determine the greatest common divisor of two integers; an example (there are others) is described by the flow chart above and as an example in a later section.

Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation:

No human being can write fast enough, or long enough, or small enough† (†"smaller and smaller without limit ... you'd be trying to write on molecules, on atoms, on electrons") to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols.[16]


An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus, Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large. Thus an algorithm can be an algebraic equation such as y = m + n—two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of (for theaddition example):

Precise instructions (in language understood by "the computer")[17] for a fast, efficient, "good"[18] process that specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained information and capabilities)[19] to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = ... and "effectively"[20] produce, in a "reasonable" time,[21] output-integer y at a specified place and in a specified format.

The concept of algorithm is also used to define the notion of decidability. That notion is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related with our customary physical dimension. From such uncertainties, that characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term.

Formalization

Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform (in a specific order) to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987) and Gurevich (2000):

Minsky: "But we will also maintain, with Turing . . . that any procedure which could "naturally" be called effective, can in fact be realized by a (simple) machine. Although this may seem extreme, the arguments . . . in its favor are hard to refute".[22]

Gurevich: "...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine ... according to Savage [1987], an algorithm is a computational process defined by a Turing machine".[23]

Typically, when an algorithm is associated with processing information, data is read from an input source, written to an output device, and/or stored for further processing. Stored data is regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures.

For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case-by-case; the criteria for each case must be clear (and computable).

Because an algorithm is a precise list of precise steps, the order of computation is always critical to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly,and are described as starting "from the top" and going "down to the bottom", an idea that is described more formally by flow of control.


So far, this discussion of the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception, and it attempts to describe a task in discrete, "mechanical" means. Unique to this conception of formalized algorithms is the assignment operation, setting the value of a variable. It derives from the intuition of "memory" as a scratchpad. There is an example below of such an assignment.

For some alternate conceptions of what constitutes an algorithm see functional programming and logic programming.

Expressing algorithms

Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in natural language statements. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are often used as a way to define or document algorithms.

There is a wide variety of representations possible, and one can express a given Turing machine program as a sequence of machine tables (see more at finite state machine, state transition table and control table), as flowcharts and drakon-charts (see more at state diagram), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see more at Turing machine).

Representations of algorithms can be classed into three accepted levels of Turing machine description:[24]


1 High-level description

"...prose to describe an algorithm, ignoring the implementation details. At this level we do not need to mention how the machine manages its tape or head."

2 Implementation description

"...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level we do not give details of states or transition function."

3 Formal description

Most detailed, "lowest level": gives the Turing machine's "state table".

For an example of the simple algorithm "Add m+n" described in all three levels, see Algorithm#Examples.

Implementation

Logical NAND algorithm implemented electronically in 7400 chip

Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.

Computer algorithms

Flowchart examples of the canonical Böhm-Jacopini structures: the SEQUENCE (rectangles descending the page), the WHILE-DO and the IF-THEN-ELSE. The three structures are made of the primitive conditional GOTO (IF test=true THEN GOTO step xxx) (a diamond), the unconditional GOTO (rectangle), various assignment operators (rectangle), and HALT (rectangle). Nesting of these structures inside assignment-blocks results in complex diagrams (cf. Tausworthe 1977:100,114).


In computer systems, an algorithm is basically an instance of logic written in software by software developers, intended to produce output from given input (perhaps null) on the "target" computer(s). An optimal algorithm, even running on old hardware, can produce faster results than a non-optimal (higher time complexity) algorithm for the same purpose running on more efficient hardware; that is why algorithms, like computer hardware, are considered technology.

"Elegant" (compact) programs, "good" (fast) programs: The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin:

Knuth: ". . .we want good algorithms in some loosely defined aesthetic sense. One criterion . . . is the length of time taken to perform the algorithm . . .. Other criteria are adaptability of the algorithm to computers, its simplicity and elegance, etc"[25]

Chaitin: " . . . a program is 'elegant,' by which I mean that it's the smallest possible program for producing the output that it does"[26]

Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'"—such a proof would solve the Halting problem (ibid).

Algorithm versus function computable by an algorithm: For a given function, multiple algorithms may exist. This is true even without expanding the instruction set available to the programmer. Rogers observes that "It is ... important to distinguish between the notion of algorithm, i.e. procedure, and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms".[27]


Unfortunately there may be a tradeoff between goodness (speed) and elegance (compactness)—an elegant program may take more steps to complete a computation than one less elegant. An example that uses Euclid's algorithm appears below.

Computers (and computors), models of computation: A computer (or human "computor"[28]) is a restricted type of machine, a "discrete deterministic mechanical device"[29] that blindly follows its instructions.[30] Melzak's and Lambek's primitive models[31] reducedthis notion to four elements: (i) discrete, distinguishable locations, (ii) discrete, indistinguishable counters[32] (iii) an agent, and (iv) a list of instructions that are effective relative to the capability of the agent.[33]

Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability".[34] Minsky's machine proceeds sequentially through its five (or six depending on how one counts) instructions unless either a conditional IF–THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution)[35] operations: ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g. L ← L − 1).[36] Rarely must a programmer write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT.[37]
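
The counter-machine idea above can be sketched as a tiny interpreter. The following Python sketch is our own illustration (the instruction labels ZERO/SUCC/DEC/JZ/GOTO/HALT are hypothetical names, not Minsky's notation); it shows that these few operations suffice to program, for example, addition:

```python
# A minimal counter-machine interpreter in the spirit of Minsky's model.
# Illustrative sketch only; no guard against decrementing zero (Minsky
# pairs DEC with a zero test in real programs).

def run(program, registers):
    """Execute a list of instruction tuples on a dict of registers."""
    pc = 0  # program counter: proceed sequentially unless a jump fires
    while True:
        op = program[pc]
        if op[0] == "HALT":
            return registers
        if op[0] == "ZERO":          # L <- 0
            registers[op[1]] = 0
        elif op[0] == "SUCC":        # L <- L + 1
            registers[op[1]] += 1
        elif op[0] == "DEC":         # L <- L - 1
            registers[op[1]] -= 1
        elif op[0] == "JZ":          # conditional GOTO: jump if register is 0
            if registers[op[1]] == 0:
                pc = op[2]
                continue
        elif op[0] == "GOTO":        # unconditional GOTO
            pc = op[1]
            continue
        pc += 1

# Addition as a counter-machine program: move B into A one unit at a time.
add = [
    ("JZ", "B", 4),    # 0: if B = 0 we are done
    ("DEC", "B"),      # 1: B <- B - 1
    ("SUCC", "A"),     # 2: A <- A + 1
    ("GOTO", 0),       # 3: repeat
    ("HALT",),         # 4
]
print(run(add, {"A": 3, "B": 4}))  # {'A': 7, 'B': 0}
```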

Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn an algorithm is to try it . . . immediately take pen and paper and work through an example".[38] But what about a simulation or execution of the real thing? The programmer must translate the algorithm into a language that the simulator/computer/computor can effectively execute. Stone gives an example of this: when computing the roots of a quadratic equation the computor must know how to take a square root. If they don't, then for the algorithm to be effective it must provide a set of rules for extracting a square root.[39]

This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor).

But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters".[40] When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" (division) instruction available rather than just subtraction (or worse: just Minsky's "decrement").

Structured programming, canonical structures: Per the Church–Turing thesis any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations Turing completeness requires only four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. Kemeny and Kurtz observe that while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language".[41] Tausworthe augments the three Böhm-Jacopini canonical structures:[42] SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE.[43] An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.[44]

Canonical flowchart symbols[45]: The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program of one). Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm-Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols and their use to build the canonical structures are shown in the diagram.

Examples

Further information: List of algorithms

Algorithm example

An animation of the quicksort algorithm sorting an array of randomized values. The red bars mark the pivot element; at the start of the animation, the element farthest to the right-hand side is chosen as the pivot.

One of the simplest algorithms is to find the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in a high-level description, in English prose, as:

High-level description:

If there are no numbers in the set then there is no highest number.

Assume the first number in the set is the largest number in the set.

For each remaining number in the set: if this number is larger than the current largest number, consider this number to be the largest number in the set.

When there are no numbers left in the set to iterate over, consider the current largest number to be the largest number of the set.


(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:

Algorithm LargestNumber

Input: A list of numbers L.

Output: The largest number in the list L.

if L.size = 0 return null

largest ← L[0]

for each item in L, do

if item > largest, then

largest ← item

return largest

"←" is a shorthand for "changes to". For instance, "largest ← item" means that the value of largest changes to the value of item.

"return" terminates the algorithm and outputs the value that follows.
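
The pseudocode above translates almost line for line into a runnable language. Here is a Python rendering (a sketch of ours; the function name largest_number is hypothetical, and None stands in for the pseudocode's "null"):

```python
# The LargestNumber pseudocode, rendered as runnable Python.

def largest_number(L):
    if len(L) == 0:
        return None            # no numbers: there is no highest number
    largest = L[0]             # assume the first number is the largest
    for item in L:             # examine each remaining number
        if item > largest:
            largest = item     # a bigger number becomes the new largest
    return largest

print(largest_number([3, 9, 4, 7]))  # 9
print(largest_number([]))            # None
```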

Euclid’s algorithm

Further information: Euclidean algorithm

The example-diagram of Euclid's algorithm from T.L. Heath 1908, with more detail added. Euclid does not go beyond a third measuring and gives no numerical examples. Nicomachus gives the example of 49 and 21: "I subtract the less from the greater; 28 is left; then again I subtract from this the same 21 (for this is possible); 7 is left; I subtract this from 21, 14 is left; from which I again subtract 7 (for this is possible); 7 is left, but 7 cannot be subtracted from 7." Heath comments that "The last phrase is curious, but the meaning of it is obvious enough, as also the meaning of the phrase about ending 'at one and the same number'" (Heath 1908:300).

Euclid’s algorithm appears as Proposition II in Book VII ("Elementary Number Theory") of his Elements.[46] Euclid poses the problem: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including 0. To "measure" is to place a shorter measuring length s successively (q times) along the longer length l until the remaining portion r is less than the shorter length s.[47] In modern words, remainder r = l − q*s, q being the quotient; or remainder r is the "modulus", the integer-fractional part left over after the division.[48]

For Euclid’s method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be 0, AND (ii) the subtraction must be “proper”: a test must guarantee that the smaller of the two numbers is subtracted from the larger (alternately, the two can be equal so their subtraction yields 0).

Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest.[49] While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm.

A graphical expression of Euclid's algorithm using the example of 1599 and 650:

1599 = 650*2 + 299


650 = 299*2 + 52

299 = 52*5 + 39

52 = 39*1 + 13

39 = 13*3 + 0
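
The division steps above can be reproduced mechanically. A short Python loop (our illustration, not part of the original text) prints each row of the derivation:

```python
# Reproduce the division steps for gcd(1599, 650) shown above.
a, b = 1599, 650
while b != 0:
    q, r = divmod(a, b)              # a = b*q + r
    print(f"{a} = {b}*{q} + {r}")
    a, b = b, r
print("gcd =", a)                    # gcd = 13
```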

Computer language for Euclid's algorithm

Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction.

A location is symbolized by upper case letter(s), e.g. S, A, etc.

The varying quantity (number) in a location is written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009.

An inelegant program for Euclid's algorithm

"Inelegant" is a translation of Knuth's version of the algorithm with a subtraction-based remainder-loop replacing his use of division (or a "modulus" instruction). Derived from Knuth 1973:2–4. Depending on the two numbers "Inelegant" may compute the g.c.d. in fewer steps than "Elegant".

The following algorithm is framed as Knuth's 4-step version of Euclid's and Nicomachus', but rather than using division to find the remainder it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2–4:


INPUT:

1 [Into two locations L and S put the numbers l and s that represent the two lengths]:

INPUT L, S

2 [Initialize R: make the remaining length r equal to the starting/initial/input length l]:

R ← L

E0: [Ensure r ≥ s.]

3 [Ensure the smaller of the two numbers is in S and the larger in R]:

IF R > S THEN

the contents of L is the larger number so skip over the exchange-steps 4, 5 and 6:

GOTO step 7

ELSE

swap the contents of R and S.

4 L ← R (this first step is redundant, but is useful for later discussion).

5 R ← S

6 S ← L

E1: [Find remainder]: Until the remaining length r in R is less thanthe shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.


7 IF S > R THEN

done measuring so

GOTO 10

ELSE

measure again,

8 R ← R − S

9 [Remainder-loop]:

GOTO 7.

E2: [Is the remainder 0?]: EITHER (i) the last measure was exact, the remainder in R is 0, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than the measuring number in S.

10 IF R = 0 THEN

done so

GOTO step 15

ELSE

CONTINUE TO step 11,

E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously the smaller number s; L serves as a temporary location.

11 L ← R

12 R ← S

13 S ← L


14 [Repeat the measuring process]:

GOTO 7

OUTPUT:

15 [Done. S contains the greatest common divisor]:

PRINT S

DONE:

16 HALT, END, STOP.
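
As a sanity check on the steps above, here is a loose Python transcription of "Inelegant" (the function name is ours) with the subtraction-based remainder loop (E1), the zero test (E2), and the interchange (E3):

```python
# A loose Python transcription of the subtraction-based "Inelegant"
# algorithm: find the remainder by repeated subtraction, then interchange.

def gcd_by_subtraction(l, s):
    """Greatest common divisor of two positive integers, Euclid/Nicomachus style."""
    r = l
    if r < s:
        r, s = s, r            # E0: ensure the larger number is in r
    while True:
        while r >= s:          # E1: repeatedly subtract s from r
            r -= s
        if r == 0:             # E2: exact measure, so s is the g.c.d.
            return s
        r, s = s, r            # E3: interchange s and r, measure again

print(gcd_by_subtraction(3009, 884))  # 17
print(gcd_by_subtraction(49, 21))     # 7, Nicomachus' example
```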

An elegant program for Euclid's algorithm

The following version of Euclid's algorithm requires only 6 core instructions to do what 13 are required to do by "Inelegant"; worse, "Inelegant" requires more types of instructions. The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) BASIC language the steps are numbered, and the instruction LET [] = [] is the assignment instruction symbolized by ←.

5 REM Euclid's algorithm for greatest common divisor

6 PRINT "Type two integers greater than 0"

10 INPUT A,B

20 IF B=0 THEN GOTO 80

30 IF A > B THEN GOTO 60

40 LET B=B-A


50 GOTO 20

60 LET A=A-B

70 GOTO 20

80 PRINT A

90 END

How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S (Difference = Minuend − Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses.
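
The co-loop structure is compact in any language. A Python rendering of "Elegant" (a sketch of ours; the BASIC line numbers are noted in comments):

```python
# "Elegant" in Python: lines 20-70 of the BASIC program become one loop.

def gcd_elegant(a, b):
    while b != 0:          # 20 IF B=0 THEN GOTO 80
        if a > b:          # 30 IF A > B THEN GOTO 60
            a = a - b      # 60 LET A=A-B
        else:
            b = b - a      # 40 LET B=B-A
    return a               # 80 PRINT A

print(gcd_elegant(40902, 24140))  # 34 (Knuth's test pair)
```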

Testing the Euclid algorithms

Does an algorithm do what its author wants it to do? A few test cases usually suffice to confirm core functionality. One source[50] uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively prime numbers 14157 and 5950.

But exceptional cases must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, or both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 rocket failure.
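
The test values above can be checked against a reference implementation. The expected results below are ours, computed with Python's math.gcd rather than taken from the text:

```python
# Check the test pairs named above against Python's built-in gcd.
import math

assert math.gcd(3009, 884) == 17       # the "one source" pair
assert math.gcd(40902, 24140) == 34    # Knuth's suggestion
assert math.gcd(14157, 5950) == 1      # relatively prime, as stated
print("all test pairs agree")
```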


Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm".[51] Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof.[52]

Measuring and improving the Euclid algorithms

Elegance (compactness) versus goodness (speed): With only 6 core instructions, "Elegant" is the clear winner compared to "Inelegant" at 13 instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis[53] indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed.

Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective"—that is, it computes the function intended by its author—then the question becomes, can it be improved?

The compactness of "Inelegant" can be improved by the elimination of 5 steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm;[54] rather, it can only be done heuristically, i.e. by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps, together with steps 2 and 3, can be eliminated. This reduces the number of core instructions from 13 to 8, which makes it "more elegant" than "Elegant" at 9 steps.


The speed of "Elegant" can be improved by moving the B=0? test outside of the two subtraction loops. This change calls for the addition of 3 instructions (B=0?, A=0?, GOTO). Now "Elegant" computes the example-numbers faster; whether for any given A, B and R, S this is always the case would require a detailed analysis.

Algorithmic analysis

Main article: Analysis of algorithms

It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, the sorting algorithm above has a time requirement of O(n), using the big O notation with n as the length of the list. At all times the algorithm only needs to remember two values: the largest number found so far, and its current position in the input list. Therefore, it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted.

Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm usually outperforms a brute force sequential search when used for table lookups on sorted lists.

Formal versus empirical

Main articles: Empirical algorithmics, Profiling (computer programming) and Program optimization

The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. However, ultimately, most algorithms are usually implemented on particular hardware/software platforms and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one off" problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large) but for algorithms designed for fast interactive, commercial or long-life scientific usage it may be critical. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.

Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.

Execution efficiency

Main article: Algorithmic efficiency

To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging.[55] In general, speed improvements depend on special properties of the problem, which are very common in practical applications.[56] Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.

Classification

There are various ways to classify algorithms, each with its own merits.

By implementation


One way to classify algorithms is by implementation means.

Recursion or iteration

A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition (also known as termination condition) matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, towers of Hanoi is well understood using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
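
The equivalence of recursive and iterative versions can be seen in a small example (ours, using factorial rather than Towers of Hanoi for brevity):

```python
# Recursive vs. iterative forms of the same function, illustrating that
# each recursive version has an iterative equivalent.

def factorial_recursive(n):
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):   # a loop replaces the self-invocation
        result *= i
    return result

print(factorial_recursive(6), factorial_iterative(6))  # 720 720
```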

Logical

An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control.[57] The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. In pure logic programming languages the control component is fixed and algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant semantics: a change in the axioms has a well-defined change in the algorithm.

Serial, parallel or distributed

Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected with a network. Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable. Some problems have no parallel algorithms, and are called inherently serial problems.

Deterministic or non-deterministic

Deterministic algorithms solve the problem with exact decision at every step of the algorithm whereas non-deterministic algorithms solve problems via guessing although typical guesses are made more accurate through the use of heuristics.

Exact or approximate

While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Approximation may use either a deterministic or a random strategy. Such algorithms have practical value for many hard problems.

Quantum algorithm

Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.

By design paradigm

Another way of classifying algorithms is by their design methodology or paradigm. There is a certain number of paradigms, each different from the other. Furthermore, each of these categories includes many different types of algorithms. Some common paradigms are:

Brute-force or exhaustive search

This is the naive method of trying every possible solution to see which is best.[58]


Divide and conquer

A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One such example of divide and conquer is merge sorting. Sorting can be done on each segment of data after dividing data into segments, and the sorting of the entire data can be obtained in the conquer phase by merging the segments. A simpler variant of divide and conquer is called a decrease and conquer algorithm, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems, so the conquer stage is more complex than in decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm.
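
Both paradigms can be illustrated briefly (a Python sketch of our own, not from the text): merge sort as divide and conquer, binary search as decrease and conquer:

```python
# Merge sort: divide into halves, sort each, merge (conquer).
def merge_sort(data):
    if len(data) <= 1:
        return data                         # small enough to solve directly
    mid = len(data) // 2
    left, right = merge_sort(data[:mid]), merge_sort(data[mid:])  # divide
    merged = []                             # conquer: merge the sorted halves
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Binary search: one smaller subproblem per step (decrease and conquer).
def binary_search(sorted_data, target):
    lo, hi = 0, len(sorted_data) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_data[mid] == target:
            return mid
        if sorted_data[mid] < target:
            lo = mid + 1                    # discard the lower half
        else:
            hi = mid - 1                    # discard the upper half
    return -1

print(merge_sort([5, 2, 9, 1]))             # [1, 2, 5, 9]
print(binary_search([1, 2, 5, 9], 5))       # 2
```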

Search and enumeration

Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration and backtracking.

Randomized algorithm

Such algorithms make some choices randomly (or pseudo-randomly). They can be very useful in finding approximate solutions for problems where finding exact solutions can be impractical (see heuristic method below). For some of these problems, it is known that the fastest approximations must involve some randomness.[59] Whether randomized algorithms with polynomial time complexity can be the fastest algorithms for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms:

Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time.


Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded, e.g. ZPP.
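
The two classes can be contrasted with toy examples (ours, illustrative only): a Fermat-style Monte Carlo primality test that may err with small probability, and a Las Vegas random search that is always correct but takes a random amount of time:

```python
# Toy contrast between Monte Carlo and Las Vegas algorithms.
import random

def monte_carlo_is_prime(n, trials=20):
    """Fermat test: may wrongly call a composite 'prime', with small probability."""
    if n < 4:
        return n in (2, 3)
    return all(pow(random.randrange(2, n - 1), n - 1, n) == 1
               for _ in range(trials))

def las_vegas_find(items, target):
    """Probe random positions until the target appears: always a correct
    answer, but the number of probes is random."""
    while True:
        i = random.randrange(len(items))
        if items[i] == target:
            return i

print(monte_carlo_is_prime(97))                    # True (97 is prime)
print(las_vegas_find([10, 20, 30], 20))            # 1
```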

Reduction of complexity

This technique involves solving a difficult problem by transforming it into a better known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithm's. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.
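
The median-by-sorting reduction described above is only a couple of lines of real code (a Python sketch; the function name is ours):

```python
# Selection reduced to sorting: sort (the expensive O(n log n) part),
# then index the middle element (the cheap O(1) part).
# For an even count this picks the upper of the two middle elements.

def median_by_sorting(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

print(median_by_sorting([7, 1, 5, 9, 3]))  # 5
```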

Optimization problems

For optimization problems there is a more specific classification ofalgorithms; an algorithm for such problems may fall into one or moreof the general categories described above as well as into one of thefollowing:

Linear programming

When searching for optimal solutions to a linear function bound to linear equality and inequality constraints, the constraints of the problem can be used directly in producing the optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm.[60] Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem additionally requires that one or more of the unknowns must be an integer then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e. the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.

Dynamic programming

When a problem shows optimal substructures — meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems — and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in the caching or memoization of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity.
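
Memoization, the caching of recursive calls mentioned above, can be sketched with Fibonacci numbers (our stand-in example, not from the text; the same idea underlies tabulated algorithms like Floyd–Warshall):

```python
# Naive recursion recomputes overlapping subproblems exponentially many
# times; caching each subproblem's answer makes the computation linear.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """n-th Fibonacci number, with each subproblem solved only once."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, in ~40 subproblem evaluations instead of hundreds of millions of calls
```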

The greedy method

A greedy algorithm is similar to a dynamic programming algorithm in that it works by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution, which may be given or have been constructed in some way, and improve it by making small modifications. For some problems they can find the optimal solution while for others they stop at local optima, that is, at solutions that cannot be improved by the algorithm but are not optimum. The most popular use of greedy algorithms is for finding the minimal spanning tree, where finding the optimal solution is possible with this method. Huffman Tree, Kruskal, Prim, Sollin are greedy algorithms that can solve this optimization problem.

The heuristic method


In optimization problems, heuristic algorithms can be used to find a solution close to the optimal solution in cases where finding the optimal solution is impractical. These algorithms work by getting closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. Their merit is that they can find a solution very close to the optimal solution in a relatively short time. Such algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some of them, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.

By field of study

See also: List of algorithms

Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques.

Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry, but is now used in solving a broad range of problems in many fields.

By complexity

See also: Complexity class and Parameterized complexity


Algorithms can be classified by the amount of time they need to complete compared to their input size. There is a wide variety: some algorithms complete in linear time relative to input size, some do so in an exponential amount of time or even worse, and some never halt. Additionally, some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. Owing to this, it was found to be more suitable to classify the problems themselves, rather than the algorithms, into equivalence classes based on the complexity of the best possible algorithms for them.

Burgin (2005, p. 24) uses a generalized definition of algorithms that relaxes the common requirement that the output of the algorithm that computes a function must be determined after a finite number of steps. He defines a super-recursive class of algorithms as "a class of algorithms in which it is possible to compute functions not computable by any Turing machine" (Burgin 2005, p. 107). This is closely related to the study of methods of hypercomputation.

Continuous algorithms

The adjective "continuous" when applied to the word "algorithm" can mean:

An algorithm operating on data that represents continuous quantities, even though this data is represented by discrete approximations—such algorithms are studied in numerical analysis; or

An algorithm in the form of a differential equation that operates continuously on the data, running on an analog computer.[61]

Legal issues


See also: Software patents for a general overview of the patentability of software, including computer-implemented algorithms.

Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), and hence algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is highly controversial, and there are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW patent.

Additionally, some cryptographic algorithms have export restrictions(see export of cryptography).

Etymology

The word "algorithm" (in some other writings, "algorism") comes from the name al-Khwārizmī, pronounced in classical Arabic as Al-Khwarithmi. Al-Khwārizmī (Persian: خوارزمی, c. 780–850) was a Persian mathematician, astronomer, geographer and a scholar in the House of Wisdom in Baghdad, whose name means "the native of Khwarezm", a city that was part of Greater Iran during his era and now is in modern-day Uzbekistan.[11][12] About 825, he wrote a treatise in the Arabic language, which was translated into Latin in the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.[62] Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through his other book, the Algebra.[63] In late medieval Latin, algorismus, the corruption of his name, simply meant the "decimal number system"; that is still the meaning of modern English algorism. In 17th-century French the word's form, but not its meaning, changed to algorithme. English adopted the French word soon afterwards, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English.[64]

An alternative etymology claims origin from the terms algebra, in its late medieval sense of "Arabic arithmetic", and arithmos, the Greek word for number (thus literally meaning "Arabic numbers" or "Arabic calculation"). The algorithms of al-Khwarizmi's works are not meant in their modern sense but as a type of repetitive calculation; his fundamental work, known as the Algebra, was originally titled The Compendious Book on Calculation by Completion and Balancing and described types of repetitive calculation and quadratic equations. In that sense, algorithms were known in Europe long before al-Khwarizmi. The oldest algorithm known today is the Euclidean algorithm (see also Extended Euclidean algorithm). Before the term algorithm was coined, the Greeks called such procedures anthyphairesis, literally meaning anti-subtraction or reciprocal subtraction. Algorithms were known to the Greeks centuries before Euclid.[65] Instead of the word algebra the Greeks used the term arithmetica (ἀριθμητική), e.g. in the works of Diophantus, the so-called "father of algebra" (see also Diophantine equation and Eudoxus).
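The Euclidean algorithm mentioned above is short enough to state in a few lines of modern code:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is zero; the last nonzero value is the gcd."""
    while b:
        a, b = b, a % b
    return a

assert gcd(252, 105) == 21
```

This is the "reciprocal subtraction" idea in modern dress: the modulo operation performs many subtractions of the smaller number from the larger at once.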

History: Development of the notion of "algorithm"

Ancient Greece

Algorithms were used in ancient Greece. Two examples are the Sieve of Eratosthenes, which was described in Introduction to Arithmetic by Nicomachus,[66][67]:Ch 9.2 and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC).[67]:Ch 9.1
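The Sieve of Eratosthenes is likewise simple to state in modern code; this is a straightforward sketch of the classical procedure:

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes <= n by crossing out
    every multiple of each prime, starting from the prime's square."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

assert sieve(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```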

Origin

The word algorithm comes from the name of the 9th century Persian mathematician Abu Abdullah Muhammad ibn Musa Al-Khwarizmi, whose work built upon that of the 7th-century Indian mathematician Brahmagupta. The word algorism originally referred only to the rules of performing arithmetic using Hindu–Arabic numerals but evolved via European Latin translation of Al-Khwarizmi's name into algorithm by the 18th century. The use of the word evolved to include all definite procedures for solving problems or performing tasks.[68]

Discrete and distinguishable symbols

Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying: accumulating stones or marks scratched on sticks, or making discrete symbols in clay. Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations.

Manipulation of symbols as "place holders" for numbers: algebra

The work of the ancient Greek geometers (Euclidean algorithm), the Indian mathematician Brahmagupta, and the Persian mathematician Al-Khwarizmi (from whose name the terms "algorism" and "algorithm" are derived), and Western European mathematicians culminated in Leibniz's notion of the calculus ratiocinator (ca 1680):

A good century and a half ahead of his time, Leibniz proposed an algebra of logic, an algebra that would specify the rules for manipulating logical concepts in the manner that ordinary algebra specifies the rules for manipulating numbers.[69]

Mechanical contrivances with discrete states

The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular the verge escapement[70] that provides us with the tick and tock of a mechanical clock. "The accurate automatic machine"[71] led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines"—the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace, mid-19th century.[72] Lovelace is credited with the first creation of an algorithm intended for processing on a computer - Babbage's analytical engine, the first device considered a real Turing-complete computer instead of just a calculator - and is sometimes called "history's first programmer" as a result, though a full implementation of Babbage's second device would not be realized until decades after her lifetime.

Logical machines 1870—Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to reduce Boolean equations when presented in a form similar to what are now known as Karnaugh maps. Jevons (1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically ... More recently, however, I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a Logical Machine." His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc] ...". With this machine he could analyze a "syllogism or any other simple logical argument".[73]

This machine he displayed in 1870 before the Fellows of the Royal Society.[74] Another logician, John Venn, however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine".[75]

Jacquard loom, Hollerith punch cards, telegraphy and telephony—the electromechanical relay: Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers.[76] By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. By the late19th century the ticker tape (ca 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter (ca. 1910) with its punched-paper use of Baudot code on tape.

Telephone-switching networks of electromechanical relays (invented 1835) were behind the work of George Stibitz (1937), the inventor of the digital adding device. As he worked in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".[77]

Davis (2000) observes the particular importance of the electromechanical relay (with its two "binary states" open and closed):

It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical relays, that machines were built having the scope Babbage had envisioned.[78]

Mathematics during the 19th century up to the mid-20th century

Symbols and rules: In rapid succession the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The principles of arithmetic, presented by a new method (1888) was "the first attempt at an axiomatization of mathematics in a symbolic language".[79]

But van Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic. ... in which we see a "'formula language', that is a lingua characterica, a language written with special symbols, "for pure thought", that is, free from rhetorical embellishments ... constructed from specific symbols that are manipulated according to definite rules".[80] The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913).

The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard Paradox.[81] The resultant considerations led to Kurt Gödel's paper (1931)—he specifically cites the paradox of the liar—that completely reduces rules of recursion to numbers.

Effective calculability: In an effort to solve the Entscheidungsproblem, defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared: Alonzo Church, Stephen Kleene and J. B. Rosser's λ-calculus,[82] a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene;[83] Church's proof[84] that the Entscheidungsproblem was unsolvable; Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no decision about the next instruction;[85] Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine"[86]—in effect almost identical to Post's "formulation"; J. Barkley Rosser's definition of "effective method" in terms of "a machine";[87] S. C. Kleene's proposal of a precursor to "Church's thesis" that he called "Thesis I";[88] and a few years later Kleene's renaming of his Thesis "Church's Thesis"[89] and proposing "Turing's Thesis".[90]

Emil Post (1936) and Alan Turing (1936–37, 1939)

Here is a remarkable coincidence of two men not knowing each other but describing a process of men-as-computers working on computations—and they yield virtually identical definitions.

Emil Post (1936) described the actions of a "computer" (human being) as follows:

"...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed unalterable set of directions.

His symbol space would be

"a two way infinite sequence of spaces or boxes... The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time.... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke.

"One box is to be singled out and called the starting point. ...a specific problem is to be given in symbolic form by a finite number of boxes [i.e., INPUT] being marked with a stroke.

Likewise the answer [i.e., OUTPUT] is to be given in symbolic form by such a configuration of marked boxes....

"A set of directions applicable to a general problem sets up a deterministic process when applied to each specific problem. This process terminates only when it comes to the direction of type (C) [i.e., STOP]".[91] See more at Post–Turing machine.

Alan Turing's statue at Bletchley Park.

Alan Turing's work[92] preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'".[93] Given the prevalence of Morse code and telegraphy, ticker tape machines, and teletypewriters we might conjecture that all were influences.

Turing—his model of computation is now called a Turing machine—begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers.[94]

"Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book....I assume then that the computation is carried out on one-dimensional paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite....

"The behaviour of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite...

"Let us imagine that the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided."[95]

Turing's reduction yields the following:

"The simple operations must therefore include:

"(a) Changes of the symbol on one of the observed squares

"(b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares.

"It may be that some of these changes necessarily involve a change of state of mind. The most general single operation must therefore be taken to be one of the following:

"(A) A possible change (a) of symbol together with a possible change of state of mind.

"(B) A possible change (b) of observed squares, together with a possible change of state of mind"

"We may now construct a machine to do the work of this computer."[95]
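Turing's reduction above can be sketched as a small simulator: a finite rule table maps (state, observed symbol) to (new state, symbol to write, head movement). The particular machine below, which inverts a string of 0s and 1s, is an illustrative assumption, not an example from Turing's paper:

```python
# Minimal sketch of Turing's model: finite-state control reading and
# writing symbols on a tape. Rule table and example machine are
# illustrative assumptions, not taken from Turing (1936).

def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=10_000):
    """rules: (state, symbol) -> (new_state, write_symbol, move in {-1, 0, +1})."""
    cells = dict(enumerate(tape))    # sparse tape; unwritten cells read as ' '
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, " ")
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

# A machine that flips every bit, moving right until it reaches a blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", " "): ("halt",  " ",  0),
}
assert run_turing_machine(invert, "0110") == "1001"
```

The "state of mind" is the `state` variable, the "observed square" is the cell under `head`, and each rule performs exactly one of Turing's elementary operations: a possible symbol change plus a possible move and state change.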

A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it:

"A function is said to be "effectively calculable" if its values can be found by some purely mechanical process. Though it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more definite, mathematically expressible definition . . . [he discusses the history of the definition pretty much as presented above with respect to Gödel, Herbrand, Kleene, Church, Turing and Post] . . . We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability † with effective calculability . . . .

"† We shall use the expression "computable function" to meana function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions".[96]

J. B. Rosser (1939) and S. C. Kleene (1943)

J. Barkley Rosser defined an 'effective [mathematical] method' in the following manner (italicization added):

"'Effective method' is used here in the rather special sense of a method each step of which is precisely determined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date. [his footnote #5; see discussion immediately below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer. All three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are equivalent is a very strong argument for the correctness of any one." (Rosser 1939:225–6)

Rosser's footnote #5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and their use of recursion, in particular Gödel's use in his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–7) in their mechanism-models of computation.

Stephen C. Kleene defined his now-famous "Thesis I", known as the Church–Turing thesis. But he did this in the following context (boldface in original):

"12. Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the question, "is the predicate value true?"" (Kleene 1943:273)

History after 1950

A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is on-going because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing thesis) and philosophy of mind (especially arguments about artificial intelligence). For more, see Algorithm characterizations.

Key (cryptography)

In cryptography, a key is a piece of information (a parameter) that determines the functional output of a cryptographic algorithm or cipher. Without a key, the algorithm would produce no useful result. In encryption, a key specifies the particular transformation of plaintext into ciphertext, or vice versa during decryption. Keys are also used in other cryptographic algorithms, such as digital signature schemes and message authentication codes.
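The role of the key can be illustrated with a deliberately insecure toy cipher (a repeating-key XOR, which should never be used for real data): the algorithm is public and fixed, and only the key determines which transformation of plaintext into ciphertext it performs.

```python
# Toy illustration only (NOT a secure cipher): the public algorithm (XOR)
# is fixed; the key alone selects the plaintext-to-ciphertext mapping.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; applying it twice decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(b"attack at dawn", b"secret")
assert xor_cipher(ciphertext, b"secret") == b"attack at dawn"   # right key
assert xor_cipher(ciphertext, b"wrong!") != b"attack at dawn"   # wrong key
```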

Need for secrecy

In designing security systems, it is wise to assume that the details of the cryptographic algorithm are already available to the attacker. This is known as Kerckhoffs' principle — "only secrecy of the key provides security" — or, reformulated as Shannon's maxim, "the enemy knows the system". The history of cryptography provides evidence that it can be difficult to keep the details of a widely used algorithm secret (see security through obscurity). A key is often easier to protect (it's typically a small piece of information) than an encryption algorithm, and easier to change if compromised. Thus, the security of an encryption system in most cases relies on some key being kept secret.

Trying to keep keys secret is one of the most difficult problems in practical cryptography; see key management. An attacker who obtains the key (by, for example, theft, extortion, dumpster diving or social engineering) can recover the original message from the encrypted data, and issue signatures.

Key scope

Keys are generated to be used with a given suite of algorithms, called a cryptosystem. Encryption algorithms which use the same key for both encryption and decryption are known as symmetric key algorithms. A newer class of "public key" cryptographic algorithms was invented in the 1970s. These asymmetric key algorithms use a pair of keys — or keypair — a public key and a private one. Public keys are used for encryption or signature verification; private ones decrypt and sign. The design is such that finding out the private key is extremely difficult, even if the corresponding public key is known. As that design involves lengthy computations, a keypair is often used to exchange an on-the-fly symmetric key, which will only be used for the current session. RSA and DSA are two popular public-key cryptosystems; DSA keys can only be used for signing and verifying, not for encryption.
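The asymmetry can be sketched with textbook RSA on tiny primes. The numbers below are for illustration only: real keys use primes hundreds of digits long, and real systems add padding before encrypting.

```python
# Textbook RSA sketch with toy primes (illustrative only, NOT secure).
p, q = 61, 53
n = p * q                   # public modulus (3233)
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)          # anyone can encrypt with (e, n)
assert pow(ciphertext, d, n) == message  # only the holder of d can decrypt
```

Recovering d from (e, n) requires factoring n, which is easy for 3233 but believed infeasible at real key sizes; that gap is what makes the public key safe to publish.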

Ownership and revocation

Part of the security brought about by cryptography concerns confidence about who signed a given document, or who replies at the other side of a connection. Assuming that keys are not compromised, that question consists of determining the owner of the relevant public key. To be able to tell a key's owner, public keys are often enriched with attributes such as names, addresses, and similar identifiers. The packed collection of a public key and its attributes can be digitally signed by one or more supporters. In the PKI model, the resulting object is called a certificate and is signed by a certificate authority (CA). In the PGP model, it is still called a "key", and is signed by various people who personally verified that the attributes match the subject.[1]

In both PKI and PGP models, compromised keys can be revoked. Revocation has the side effect of disrupting the relationship between a key's attributes and the subject, which may still be valid. In order to have a possibility to recover from such disruption, signers often use different keys for everyday tasks: Signing with an intermediate certificate (for PKI) or a subkey (for PGP) facilitates keeping the principal private key in an offline safe.

Key sizes

Main article: Key size

For the one-time pad system the key must be at least as long as the message. In encryption systems that use a cipher algorithm, messages can be much longer than the key. The key must, however, be long enough so that an attacker cannot try all possible combinations.

A key length of 80 bits is generally considered the minimum for strong security with symmetric encryption algorithms. 128-bit keys are commonly used and considered very strong. See the key size article for a more complete discussion.

The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128-bit symmetric cipher. Elliptic curve cryptography may allow smaller keys for equivalent security, but these algorithms have only been known for a relatively short time and current estimates of the difficulty of searching for their keys may not survive. As of 2004, a message encrypted using a 109-bit key elliptic curve algorithm had been broken by brute force.[2] The current rule of thumb is to use an ECC key twice as long as the symmetric key security level desired. Except for the random one-time pad, the security of these systems has not (as of 2008) been proven mathematically, so a theoretical breakthrough could make everything one has encrypted an open book. This is another reason to err on the side of choosing longer keys.

Key choice

To prevent a key from being guessed, keys need to be generated truly randomly and contain sufficient entropy. The problem of how to safely generate truly random keys is difficult, and has been addressed in many ways by various cryptographic systems. There is an RFC on generating randomness (RFC 4086, Randomness Requirements for Security). Some operating systems include tools for "collecting" entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high-quality randomness.
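As a sketch of drawing key material from the operating system's entropy pool, Python's `secrets` module (designed for cryptographic use, unlike the general-purpose `random` module) can generate both raw key bytes and dice-style rolls:

```python
import secrets

# 128-bit symmetric key drawn from the OS entropy source.
key = secrets.token_bytes(16)
assert len(key) == 16

# Dice-style generation: each fair d6 roll contributes log2(6) ~ 2.58 bits,
# so 100 rolls yield roughly 258 bits of entropy.
rolls = [secrets.randbelow(6) + 1 for _ in range(100)]
assert all(1 <= r <= 6 for r in rolls)
```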

When a password (or passphrase) is used as an encryption key, well-designed cryptosystems first run it through a key derivation function which adds a salt and compresses or expands it to the key length desired, for example by compressing a long phrase into a 128-bit value suitable for use in a block cipher.
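A key derivation step of this kind can be sketched with PBKDF2 from Python's standard library; the passphrase, iteration count, and key length below are illustrative choices, not recommendations from the text:

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, length: int = 16) -> bytes:
    """Stretch a passphrase into a fixed-length key with PBKDF2-HMAC-SHA256.
    The iteration count deliberately slows down brute-force guessing."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               100_000, dklen=length)

salt = os.urandom(16)                                  # stored alongside the data
key = derive_key("correct horse battery staple", salt)  # 128-bit cipher key
assert len(key) == 16
# Same passphrase and salt reproduce the key; a different salt does not.
assert derive_key("correct horse battery staple", salt) == key
```

The salt ensures that two users with the same passphrase get different keys and that precomputed dictionaries are useless.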

Sometimes, metadata is included in purchased media which records information such as the purchaser's name, account information, or email address. Also included may be the file's publisher, author, creation date, download date, and various notes. This information is not embedded in the played content, like a watermark, but is kept separate, though within the file or stream.

As an example, metadata is used in media purchased from Apple's iTunes Store for DRM-free as well as DRM-restricted versions of their music or videos. This information is included as MPEG standard metadata.[71][72]

Watermarks

Digital watermarks are features of media that are added during production or distribution. Digital watermarks involve data that is arguably steganographically embedded within the audio or video data.

Watermarks can be used for different purposes that may include:

recording the copyright owner

recording the distributor

recording the distribution chain

identifying the purchaser of the music

Watermarks are not complete DRM mechanisms in their own right, but are used as part of a system for copyright enforcement, such as helping provide prosecution evidence for purely legal avenues of rights management, rather than direct technological restriction. Some programs used to edit video and/or audio may distort, delete, or otherwise interfere with watermarks. Signal/modulator-carrier chromatography may also separate watermarks from original audio or detect them as glitches. Additionally, comparison of two separately obtained copies of audio using simple, home-grown algorithms can often reveal watermarks. New methods of detection are currently under investigation by both industry and non-industry researchers.

Streaming media services

Since the late 2000s the trend in media consumption has been towards renting content using online streaming services, for example Spotify for music and Netflix for video content. Copyright holders often require that these services protect the content they licence using DRM mechanisms.

Laws regarding DRM

Article 11 of the 1996 WIPO Copyright Treaty (WCT) requires nations party to the treaties to enact laws against DRM circumvention, and has been implemented in most member states of the World Intellectual Property Organization. The American implementation is the Digital Millennium Copyright Act (DMCA), while in Europe the treaty has been implemented by the 2001 European directive on copyright, which requires member states of the European Union to implement legal protections for technological prevention measures. In 2006, the lower house of the French parliament adopted such legislation as part of the controversial DADVSI law, but added that protected DRM techniques should be made interoperable, a move which caused widespread controversy in the United States. The Tribunal de grande instance de Paris concluded in 2006 that the complete blocking of any possibilities of making private copies was an impermissible behaviour under French copyright law.[8]

Digital Millennium Copyright Act

Main article: Digital Millennium Copyright Act

The Digital Millennium Copyright Act (DMCA) is an amendment to United States copyright law, passed unanimously on May 14, 1998, which criminalizes the production and dissemination of technology that allows users to circumvent technical copy-restriction methods. Under the Act, circumvention of a technological measure that effectively controls access to a work is illegal if done with the primary intent of violating the rights of copyright holders.[verification needed] (For a more detailed analysis of the statute, see WIPO Copyright and Performances and Phonograms Treaties Implementation Act.)

Reverse engineering of existing systems is expressly permitted under the Act under specific conditions. Under the reverse engineering safe harbor, circumvention necessary to achieve interoperability with other software is specifically authorized. See 17 U.S.C. Sec. 1201(f). Open-source software to decrypt content scrambled with the Content Scrambling System and other encryption techniques presents an intractable problem with the application of the Act. Much depends on the intent of the actor. If the decryption is done for the purpose of achieving interoperability of open source operating systems with proprietary operating systems, the circumvention would be protected by Section 1201(f) of the Act. Cf., Universal City Studios, Inc. v. Corley, 273 F.3d 429 (2d Cir. 2001) at notes 5 and 16. However, dissemination of such software for the purpose of violating or encouraging others to violate copyrights has been held illegal. See Universal City Studios, Inc. v. Reimerdes, 111 F. Supp. 2d 346 (S.D.N.Y. 2000).

The DMCA has been largely ineffective in protecting DRM systems,[73] as software allowing users to circumvent DRM remains widely available. However, those who wish to preserve the DRM systems have attempted to use the Act to restrict the distribution and development of such software, as in the case of DeCSS.

Although the Act contains an exception for research, the exception is subject to vague qualifiers that do little to reassure researchers. Cf., 17 U.S.C. Sec. 1201(g). The DMCA has had an impact on cryptography, because many[who?] fear that cryptanalytic research may violate the DMCA. The arrest of Russian programmer Dmitry Sklyarov in 2001, for alleged infringement of the DMCA, was a highly publicized example of the law's use to prevent or penalize development of anti-DRM measures. Sklyarov was arrested in the United States after a presentation at DEF CON, and subsequently spent several months in jail. The DMCA has also been cited as chilling to non-criminally-inclined users, such as students of cryptanalysis (including, in a well-known instance, Professor Felten and students at Princeton[74]); security consultants, such as Netherlands-based Niels Ferguson, who declined to publish vulnerabilities he discovered in Intel's secure-computing scheme due to fear of being arrested under the DMCA when he travels to the US; and blind or visually impaired users of screen readers or other assistive technologies.[75]

European Union

On 22 May 2001, the European Union passed the EU Copyright Directive, an implementation of the 1996 WIPO Copyright Treaty that addressed many of the same issues as the DMCA.

On 25 April 2007, the European Parliament supported the first EU directive aimed at harmonizing criminal law in the member states, adopting a first-reading report on harmonizing national measures for fighting copyright abuse. If the European Parliament and the Council approve the legislation, the submitted directive will oblige the member states to treat as a crime any violation of international copyright committed with commercial purposes. The text suggests numerous measures, from fines to imprisonment, depending on the gravity of the offense. The EP members supported the Commission motion while changing some of the text: they excluded patent rights from the scope of the directive and decided that the sanctions should apply only to offenses with commercial purposes. Copying for personal, non-commercial purposes was also excluded from the scope of the directive.

In 2012, the Court of Justice of the European Union ruled in favor of reselling copyrighted games, prohibiting any preventative action that would prevent such a transaction.[76] The court said that "The first sale in the EU of a copy of a computer program by the copyright holder or with his consent exhausts the right of distribution of that copy in the EU. A rightholder who has marketed a copy in the territory of a Member State of the EU thus loses the right to rely on his monopoly of exploitation in order to oppose the resale of that copy."[77]

In 2014, the Court of Justice of the European Union ruled that circumventing DRM on game devices may be legal under some circumstances, limiting the legal protection to cover only technological measures intended to prevent or eliminate unauthorised acts of reproduction, communication, public offer or distribution.[78][79]

International issues

In Europe, there are several ongoing dialog activities that are characterized by their consensus-building intention:

Workshop on Digital Rights Management of the World Wide Web Consortium (W3C), January 2001.[80]

Participative preparation of the European Committee for Standardization/Information Society Standardization System (CEN/ISSS) DRM Report, 2003 (finished).[81]

DRM workshops of the Directorate-General for Information Society and Media (European Commission) (finished), and the work of the DRM working groups (finished), as well as the work of the High Level Group on DRM (ongoing).[82]


Consultation process of the European Commission, DG Internal Market, on the Communication COM(2004)261 by the European Commission on "Management of Copyright and Related Rights" (closed).[83]

The INDICARE project is an ongoing dialogue on consumer acceptability of DRM solutions in Europe. It is an open and neutral platform for exchange of facts and opinions, mainly based on articles by authors from science and practice.

The AXMEDIS project is a European Commission Integrated Project under FP6. Its main goal is to automate content production, copy protection, and distribution, reducing the related costs and supporting DRM in both B2B and B2C areas while harmonizing the two.

The Gowers Review of Intellectual Property is the result of a commission by the British Government from Andrew Gowers, undertaken in December 2005 and published in 2006, with recommendations regarding copyright term, exceptions, orphan works, and copyright enforcement.

Israel

Israel is a signatory to, but has not yet ratified, the WIPO Copyright Treaty. Israeli law does not currently expressly prohibit the circumvention of technological measures used to implement digital rights management. The Israeli Ministry of Justice proposed a bill to prohibit such activities in June 2012, but the bill was not passed by the Knesset. In September 2013, the Supreme Court ruled that the current copyright law could not be interpreted to prohibit the circumvention of digital rights management, though the Court left open the possibility that such activities could result in liability under the law of unjust enrichment.[84]

Opposition to DRM

Many organizations, prominent individuals, and computer scientists are opposed to DRM. Two notable DRM critics are John Walker, as expressed for instance in his article "The Digital Imprimatur: How big brother and big media can put the Internet genie back in the bottle",[85] and Richard Stallman, in his article The Right to Read[86] and in other public statements: "DRM is an example of a malicious feature – a feature designed to hurt the user of the software, and therefore, it's something for which there can never be toleration".[87] Stallman also believes that using the word "rights" is misleading and suggests that the word "restrictions", as in "Digital Restrictions Management", be used instead.[88][89][90][91][92] This terminology has since been adopted by many other writers and critics unconnected with Stallman.[93][94][95]

Other prominent critics of DRM include Professor Ross Anderson of Cambridge University, who heads a British organization which opposes DRM and similar efforts in the UK and elsewhere, and Cory Doctorow, a writer and technology blogger.[96]

Numerous others object to DRM at a more fundamental level. Their position is similar to some of the ideas in Michael H. Goldhaber's presentation about "The Attention Economy and the Net" at a 1997 conference on the "Economics of Digital Information"[97] (a sample quote from the "Advice for the Transition" section of that presentation: "If you can't figure out how to afford it without charging, you may be doing something wrong").

The EFF and similar organizations such as FreeCulture.org also hold positions which are characterized as opposed to DRM.[citation needed]

The Foundation for a Free Information Infrastructure has criticized DRM's impact as a trade barrier from a free market perspective.[citation needed]

The final version of the GNU General Public License version 3, as released by the Free Software Foundation, has a provision that 'strips' DRM of its legal value, so people can break the DRM on GPL software without breaking laws like the DMCA. In addition, in May 2006 the FSF launched a "Defective by Design" campaign against DRM.[98][99]

Creative Commons provides licensing options encouraging the expansion of and building upon creative work without the use of DRM.[100] In addition, the use of DRM by a licensee to restrict the freedoms granted by a Creative Commons license is a breach of the Baseline Rights asserted by each license.[101]

Bill Gates spoke about DRM at CES in 2006. According to him, DRM is not where it should be, and it causes problems for legitimate consumers while trying to distinguish between legitimate and illegitimate users.[102]

Steve Jobs stated Apple's opposition to DRM for music in a public letter calling on its music labels to stop requiring DRM on the iTunes Store. As of January 6, 2009, the iTunes Store has been DRM-free for songs.[103]

Defective by Design member protesting DRM on May 25, 2007.

The Norwegian consumer rights organization "Forbrukerrådet" complained to Apple Inc. in 2007 about the company's use of DRM in, and in conjunction with, its iPod and iTunes products. Apple was accused of restricting users' access to their music and videos in an unlawful way, and of using EULAs which conflict with Norwegian consumer legislation. The complaint was supported by consumers' ombudsmen in Sweden and Denmark, and is currently being reviewed in the EU. Similarly, the United States Federal Trade Commission held hearings in March 2009 to review disclosure of DRM limitations to customers' use of media products.[104]

DRM opponents argue that the presence of DRM violates existing private property rights and restricts a range of heretofore normal and legal user activities. A DRM component would control a device a user owns (such as a digital audio player) by restricting how it may act with regard to certain content, overriding some of the user's wishes (for example, preventing the user from burning a copyrighted song to CD as part of a compilation or a review). Doctorow has described this possibility as "the right to make up your own copyright laws".[105]

An example of this restriction of legal user activities may be seen in Microsoft's Windows Vista operating system, in which content using a Protected Media Path is disabled or degraded depending on the DRM scheme's evaluation of whether the hardware and its use are 'secure'.[106] All forms of DRM depend on the DRM-enabled device (e.g., computer, DVD player, TV) imposing restrictions that (at least by intent) cannot be disabled or modified by the user. Key issues around DRM, such as the right to make personal copies, provisions for persons to lend copies to friends, provisions for service discontinuance, hardware agnosticism, software and operating-system agnosticism,[107] contracts for public libraries, and customers' protection against one-sided amendments of the contract by the publisher, have not been fully addressed (see references 80–89). It has also been pointed out that it is entirely unclear whether owners of content with DRM are legally permitted to pass on their property as inheritance to another person.[108]

Tools like FairUse4WM have been created to strip Windows Media of DRM restrictions.[109]

Valve Corporation president Gabe Newell also stated "most DRM strategies are just dumb" because they only decrease the value of a game in the consumer's eyes. Newell suggests that the goal should instead be "[creating] greater value for customers through service value".[110]


At the 2012 Game Developers Conference, the CEO of CD Projekt Red, Marcin Iwinski, announced that the company will not use DRM in any of its future releases. Iwinski stated of DRM, "it's just over-complicating things. We release the game. It's cracked in two hours, it was no time for Witcher 2. What really surprised me is that the pirates didn't use the GOG version, which was not protected. They took the SecuROM retail version, cracked it and said 'we cracked it' – meanwhile there's a non-secure version with a simultaneous release. You'd think the GOG version would be the one floating around." Iwinski added after the presentation, "DRM does not protect your game. If there are examples that it does, then people maybe should consider it, but then there are complications with legit users."[111]

Bruce Schneier argues that digital copy prevention is futile: "What the entertainment industry is trying to do is to use technology to contradict that natural law. They want a practical way to make copying hard enough to save their existing business. But they are doomed to fail."[112] He has also described trying to make digital files uncopyable as being like "trying to make water not wet".[113] The creators of StarForce also take this stance, stating that "The purpose of copy protection is not making the game uncrackable – it is impossible."[114]

The Association for Computing Machinery and the Institute of Electrical and Electronics Engineers have historically opposed DRM, even going so far as to name AACS as a technology "most likely to fail" in an issue of IEEE Spectrum.[115]

DRM-free works

Label proposed by the Free Software Foundation for DRM-free works

In reaction to opposition to DRM, many publishers and artists label their works as "DRM-free". Major companies that have done so include the following:


Apple Inc. sold DRM-protected content on its iTunes Store when the store opened in 2003, but made music DRM-free from April 2007[116] and has labeled all music as "DRM-Free" since January 2009.[117] The music still carries a digital watermark to identify the purchaser. Other works sold on iTunes, such as e-books, movies, TV shows, audiobooks, and apps, continue to be protected by DRM.[118]

All music sold on Google Play is DRM-free.

GOG.com, a digital distributor started in 2008, specializes in the distribution of PC video games. While most other digital distribution services allow various forms of DRM (or have them embedded), GOG.com has a strict no-DRM policy.[119]

Tor Books, a major publisher of science fiction and fantasy books, started selling DRM-free e-books in July 2012.[120] Smaller e-book publishers such as O'Reilly Media and Baen Books had already forgone DRM previously.

Since 2014, Comixology, which distributes digital comics, allows rights holders to provide the option of a DRM-free download of purchased comics. Publishers which allow this include Image Comics, Dynamite Entertainment, Zenescope Entertainment, Thrillbent, and Top Shelf Productions.[121]

The Free Software Foundation maintains a DRM-free guide which includes GOG and Vimeo on Demand.[122]

Shortcomings

DRM server and Internet outages

Many DRM systems require authentication with an online server. Whenever the server goes down, or a region or country experiences an Internet outage, people are effectively locked out of registering or using the material. This is especially true for a product that requires persistent online authentication, where, for example, a successful DDoS attack on the server would essentially make all copies of the material unusable.

Methods to bypass DRM


There are many methods to bypass DRM control on audio, video, and ebook content.

DRM bypass methods for audio and video content

One simple method to bypass DRM on audio files is to burn the content to an audio CD and then rip it into DRM-free files. Some software products simplify and automate this burn-rip process by allowing the user to burn music to a CD-RW disc or to a Virtual CD-R drive, then automatically ripping and encoding the music, and automatically repeating this process until all selected music has been converted, rather than forcing the user to do this one CD (72–80 minutes' worth of music) at a time.

Many software programs have been developed that intercept the data stream as it is decrypted out of the DRM-restricted file, and then use this data to construct a DRM-free file. These programs require a decryption key. Programs that do this for DVDs, HD DVDs, and Blu-ray Discs include universal decryption keys in the software itself. Programs that do this for TiVo ToGo recordings, iTunes audio, and PlaysForSure songs, however, rely on the user's own key – that is, they can only process content that the user has legally acquired under his or her own account.

Another method is to use software to record the signals being sent through the audio or video cards, or to plug analog recording devices into the analog outputs of the media player. These techniques utilize the "analog hole."

Analog hole

Main article: Analog hole

All forms of DRM for audio and visual material (excluding interactive materials, e.g., videogames) are subject to the analog hole: in order for a viewer to play the material, the digital signal must be turned into an analog signal containing light and/or sound for the viewer, and that analog signal is then available to be copied, as no DRM is capable of controlling content in this form. In other words, a user could play a purchased audio file while using a separate program to record the sound back into the computer in a DRM-free file format.

All DRM to date can therefore be bypassed by recording this signal and digitally storing and distributing it in a non-DRM-limited form, by anyone who has the technical means of recording the analog stream. Furthermore, the analog-hole vulnerability cannot be overcome without the additional protection of externally imposed restrictions, such as legal regulations, because the vulnerability is inherent to all analog means of transmission.[123] However, the conversion from digital to analog and back is likely to force a loss of quality, particularly when using lossy digital formats. HDCP is an attempt to plug the analog hole, although it is largely ineffective.[124][125]

Asus released a soundcard which features a function called "Analog Loopback Transformation" to bypass the restrictions of DRM. This feature allows the user to record DRM-restricted audio via the soundcard's built-in analog I/O connection.[126][127]

To prevent this exploit, there have been discussions between copyright holders and manufacturers of electronics capable of playing such content about no longer including analog connectivity in their devices. The movement, dubbed the "Analog Sunset," has seen a steady decline in analog output options on most Blu-ray devices manufactured after 2010.

DRM on general computing platforms

Many of the DRM systems in use are designed to work on general-purpose computing hardware, such as desktop PCs, apparently because this equipment is felt to be a major contributor to revenue loss from disallowed copying.[citation needed] Large commercial copyright infringers ("pirates") avoid consumer equipment,[citation needed] so losses from such infringers will not be covered by such provisions.

Such schemes, especially software-based ones, can never be wholly secure, since the software must include all the information necessary to decrypt the content, such as the decryption keys. An attacker will be able to extract this information and directly decrypt and copy the content, bypassing the restrictions imposed by a DRM system.[96]
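This point can be illustrated with a deliberately toy sketch. The XOR "cipher" and the constant key below are hypothetical stand-ins, not any real DRM scheme; the point is only that a software player must ship with everything needed to decrypt, so anyone who can read the program's data recovers the content without any cryptanalysis.

```python
# A software "DRM" player must embed its decryption key, so an attacker
# who can inspect the program recovers the key (and the content) directly.
EMBEDDED_KEY = b"\x13\x37\xc0\xde"  # hypothetical key shipped inside the player


def drm_decrypt(data: bytes, key: bytes = EMBEDDED_KEY) -> bytes:
    # Toy repeating-key XOR standing in for the real content cipher.
    # XOR is its own inverse, so this both "encrypts" and "decrypts".
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


plaintext = b"licensed content"
ciphertext = drm_decrypt(plaintext)  # the "protected" file as shipped

# The attacker simply reads EMBEDDED_KEY out of the program and decrypts:
recovered = drm_decrypt(ciphertext, EMBEDDED_KEY)
print(recovered)  # b'licensed content'
```

Real schemes use far stronger ciphers, but the structural problem is the same: the key must be present on the user's machine for playback to work at all.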

DRM on purpose-built hardware

Many DRM schemes use encrypted media which requires purpose-built hardware to hear or see the content. This appears to ensure that only licensed users (those with the hardware) can access the content. It additionally tries to protect a secret decryption key from the users of the system.

While this can work in principle, it is extremely difficult to build hardware that protects the secret key against a sufficiently determined adversary, and many such systems have failed in the field. Once the secret key is known, building a version of the hardware that performs no checks is often relatively straightforward. In addition, user-verification provisions are frequently subject to attack, pirate decryption being among the most frequent.

A common real-world example can be found in commercial direct broadcast satellite television systems such as DirecTV and Malaysia's Astro. These companies use tamper-resistant smart cards to store decryption keys so that they are hidden from the user and the satellite receiver. However, such systems have been compromised in the past, and DirecTV has been forced to roll out periodic updates and replacements for its smart cards.


Watermarks

Watermarks can often be removed, although degradation of video or audio can occur.

Undecrypted copying failure

Mass piracy of hard copies does not necessarily require that the DRM be decrypted or removed, as it can be achieved by bit-perfect copying of a legally obtained medium without accessing the decrypted content. Additionally, still-encrypted disk images can be distributed over the Internet and played on legitimately licensed players.

Obsolescence

When standards and formats change, it may be difficult to transfer DRM-restricted content to new media. Additionally, any system that requires contact with an authentication server is vulnerable to that server becoming unavailable, as happened[128] in 2007, when videos purchased from Major League Baseball (mlb.com) prior to 2006 became unplayable due to a change to the servers that validate the licenses.

Amazon PDF and LIT ebooks

In August 2006, Amazon stopped selling DRMed PDF and .LIT format ebooks. Customers were unable to download purchased ebooks 30 days after that date, losing access to their purchased content on new devices.[129][130]

Microsoft Zune

When Microsoft introduced its Zune[131] media player in 2006, it did not support content using Microsoft's own PlaysForSure DRM scheme, which the company had previously been selling. The EFF calls this "a raw deal".[132]


MSN Music

In April 2008, Microsoft sent an email to former customers of the now-defunct MSN Music store:

As of August 31, 2008, we will no longer be able to support the retrieval of license keys for the songs you purchased from MSN Music or the authorization of additional computers. You will need to obtain a license key for each of your songs downloaded from MSN Music on any new computer, and you must do so before August 31, 2008. If you attempt to transfer your songs to additional computers after August 31, 2008, those songs will not successfully play.[133]

Cryptosystem

From Wikipedia, the free encyclopedia

In cryptography, a cryptosystem is a suite of cryptographic algorithms needed to implement a particular security service, most commonly for achieving confidentiality (encryption).[1]

Typically, a cryptosystem consists of three algorithms: one for key generation, one for encryption, and one for decryption. The term cipher (sometimes cypher) is often used to refer to a pair of algorithms, one for encryption and one for decryption. Therefore, the term "cryptosystem" is most often used when the key generation algorithm is important. For this reason, the term "cryptosystem" is commonly used to refer to public-key techniques; however, both "cipher" and "cryptosystem" are used for symmetric-key techniques.

Formal definition


Mathematically, a cryptosystem or encryption scheme can be defined as a tuple $(\mathcal{P}, \mathcal{C}, \mathcal{K}, \mathcal{E}, \mathcal{D})$ with the following properties.

    $\mathcal{P}$ is a set called the "plaintext space". Its elements are called plaintexts.

    $\mathcal{C}$ is a set called the "ciphertext space". Its elements are called ciphertexts.

    $\mathcal{K}$ is a set called the "key space". Its elements are called keys.

    $\mathcal{E} = \{ E_k : k \in \mathcal{K} \}$ is a set of functions $E_k : \mathcal{P} \rightarrow \mathcal{C}$. Its elements are called "encryption functions".

    $\mathcal{D} = \{ D_k : k \in \mathcal{K} \}$ is a set of functions $D_k : \mathcal{C} \rightarrow \mathcal{P}$. Its elements are called "decryption functions".

    For each $e \in \mathcal{K}$, there is $d \in \mathcal{K}$ such that $D_d(E_e(p)) = p$ for all $p \in \mathcal{P}$.[2]

Note: typically this definition is modified in order to distinguish an encryption scheme as being either a symmetric-key or public-key type of cryptosystem.
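The tuple above can be made concrete with a shift cipher, written here as a toy example for exposition only (it offers no real security): the plaintext and ciphertext spaces are strings over a–z, the key space is {0, …, 25}, and for every encryption key e the matching decryption key is d = e.

```python
import string

# Toy symmetric cryptosystem (shift cipher) as an instance of the tuple
# (P, C, K, E, D): P = C = strings over a-z, K = {0, ..., 25}.
ALPHABET = string.ascii_lowercase


def E(k: int, p: str) -> str:
    """Encryption function E_k : P -> C (shift each letter forward by k)."""
    return "".join(ALPHABET[(ALPHABET.index(ch) + k) % 26] for ch in p)


def D(k: int, c: str) -> str:
    """Decryption function D_k : C -> P (shift each letter back by k)."""
    return E(-k % 26, c)


# Correctness property: for each key e there is a d (here d = e) such that
# D_d(E_e(p)) = p for every plaintext p.
assert all(D(k, E(k, "attackatdawn")) == "attackatdawn" for k in range(26))
print(E(3, "attackatdawn"))  # 'dwwdfndwgdzq'
```

In a symmetric-key scheme, as here, d is equal to (or easily derived from) e; in a public-key scheme, deriving d from e must be computationally infeasible.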

Authentication

From Wikipedia, the free encyclopedia

Contents

1 Methods


2 Factors and identity

2.1 Two-factor authentication

3 Product authentication

3.1 Packaging

4 Information content

4.1 Factual verification

4.2 Video authentication

4.3 Literacy & Literature authentication

5 History and state-of-the-art

5.1 Strong authentication

6 Authorization

7 Access control

8 See also

9 References

10 External links

Methods

Main article: Provenance

Authentication has relevance to multiple fields. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain person or was produced in a certain place or period of history. In computer science, verifying aperson's identity is often required to secure access to confidentialdata or systems.

Authentication can be considered to be of three types:


The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member, or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while he or she may not have evidence that every step in the supply chain was authenticated. Authority-based (centralized) trust relationships drive the majority of secured internet communication through known public certificate authorities; peer-based (decentralized, web-of-trust) trust is used for personal services like email or files (Pretty Good Privacy, GNU Privacy Guard), where trust is established by known individuals signing each other's keys (proof of identity), for example at key signing parties.

The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist might use carbon dating to verify the age of an artifact, do a chemical analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.

Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery.

In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art forger, forged the work of his father and provided a certificate for its provenance as well; see the article Jacques van Meegeren.

Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.

Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.

The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury, and are also vulnerable to being separated from the artifact and lost.

In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access device to allow system access. In this case, authenticity is implied but not guaranteed.

Consumer goods such as pharmaceuticals, perfume, and fashion clothing can use all three forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation (damaging the brand owner's sales and reputation). As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect themselves from counterfeiters, including adding holograms, security rings, security threads, and color-shifting ink.[1]

Factors and identity

The ways in which someone may be authenticated fall into three categories, based on what are known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity prior to granting access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority.

Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified.[2] The three factors (classes) and some of the elements of each factor are:

This is a picture of the front (top) and back (bottom) of an ID Card.


the knowledge factors: Something the user knows (e.g., a password, pass phrase, personal identification number (PIN), challenge response (the user must answer a question), or pattern)

the ownership factors: Something the user has (e.g., wrist band, ID card, security token, cell phone with built-in hardware token, software token, or cell phone holding a software token)

the inherence factors: Something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifier).

Two-factor authentication

Main article: Two-factor authentication

When elements representing two factors are required for authentication, the term two-factor authentication is applied – e.g. a bankcard (something the user has) and a PIN (something the user knows). Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still two-factor authentication.
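A minimal sketch of such a two-factor check, assuming a PBKDF2-hashed password as the knowledge factor and an RFC 4226 HOTP token as the ownership factor. The function names, iteration count, and demo secrets are illustrative, not a production design:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (truncated HMAC-SHA1)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def authenticate(password: str, otp: str, *, pw_hash: bytes, salt: bytes,
                 token_secret: bytes, counter: int) -> bool:
    # Knowledge factor: constant-time comparison of the PBKDF2 password hash.
    knowledge_ok = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000), pw_hash)
    # Ownership factor: the one-time code only the token holder can produce.
    ownership_ok = hmac.compare_digest(otp, hotp(token_secret, counter))
    return knowledge_ok and ownership_ok  # both factors must verify

# Demo enrollment (hypothetical values):
salt, secret = b"demo-salt", b"12345678901234567890"
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
print(authenticate("hunter2", hotp(secret, 0), pw_hash=stored, salt=salt,
                   token_secret=secret, counter=0))  # True
```

A real deployment would also handle counter resynchronization (or use time-based TOTP), rate limiting, and secure storage of the token secret.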

Product authentication

A Security hologram label on an electronics box for authentication

Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods, such as electronics, music, apparel, and counterfeit medications, have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used.


Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting.

A secure key storage device can be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature, as an authentication chip can be mechanically attached and read through a connector to the host, e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that is much more difficult to counterfeit than most other options while at the same time being more easily verified.

Packaging

Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products.[3][4] Some package constructions are more difficult to copy and some have pilfer-indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs, RFID tags, or electronic article surveillance[5] tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:

Taggant fingerprinting - uniquely coded microscopic materials that are verified from a database

Encrypted micro-particles - unpredictably placed markings (numbers, layers and colors) not visible to the human eye

Holograms - graphics printed on seals, patches, foils or labels and used at point of sale for visual verification

Micro-printing - second line authentication often used on currencies

Serialized barcodes

UV printing - marks only visible under UV light

Track and trace systems - use codes to link products to a database tracking system

Water indicators - become visible when contacted with water

DNA tracking - genes embedded onto labels that can be traced

Color shifting ink or film - visible marks that switch colors or texture when tilted

Tamper evident seals and tapes - destructible or graphically verifiable at point of sale

2D barcodes - data codes that can be tracked

RFID chips

Information content

The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.

Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging — anything from a box to e-mail headers) can help prove or disprove the authenticity of the document.

However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication.

Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:

A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.

A shared secret, such as a passphrase, in the content of the message.

An electronic signature; public-key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key.

The opposite problem is detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.

Factual verification

Determining the truth or factual accuracy of information in a message is generally considered a separate problem from authentication. A wide range of techniques, from detective work, to fact checking in journalism, to scientific experiment might be employed.

Video authentication

It is sometimes necessary to authenticate the veracity of video recordings used as evidence in judicial proceedings. Proper chain-of-custody records and secure storage facilities can help ensure the admissibility of digital or analog recordings by the court.

Literacy & Literature authentication

In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is: do you believe it? Related to that, an authentication project is a reading and writing activity in which students document the relevant research process.[6] It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent to which the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the time period.[7]

History and state-of-the-art

Historically, fingerprints have been used as the most authoritative method of authentication, but recent court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability.[citation needed] Outside of the legal system as well, fingerprints have been shown to be easily spoofable, with British Telecom's top computer-security official noting that "few" fingerprint readers have not already been tricked by one spoof or another.[8] Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB device.

In a computer data context, cryptographic methods have been developed (see digital signature and challenge-response authentication) which are currently not spoofable if and only if the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. It is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in the future. If that were to occur, it could call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.
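As one hedged illustration of challenge-response authentication (a generic sketch, not any specific deployed protocol), the verifier can send a fresh random nonce and check an HMAC of it computed with a shared key; the key itself never crosses the wire, and a fresh nonce per attempt prevents replaying an old response.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    # Verifier side: a fresh random nonce so responses cannot be replayed.
    return secrets.token_bytes(16)

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    # Claimant side: prove knowledge of the key without transmitting it.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    # Verifier side: recompute and compare in constant time.
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Consistent with the paragraph above, this scheme is only as strong as the secrecy of `shared_key`: anyone holding a compromised key can answer every challenge.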

Strong authentication

The U.S. Government's National Information Assurance Glossary defines strong authentication as

layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information.

The above definition is consistent with that of the European Central Bank, as discussed in the strong authentication entry.

Authorization

Main article: Authorization

A soldier checks a driver's identification card before allowing her to enter a military base.

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". Authorization thus presupposes authentication.

For example, a client showing proper identification credentials to a bank teller is asking to be authenticated as the person whose identification he is showing. A client whose authentication request is approved becomes authorized to access the accounts of that account holder, but no others.

Note, however, that if a stranger tries to access someone else's account with his own identification credentials, the stranger's identification credentials will still be successfully authenticated, because they are genuine and not counterfeit. However, the stranger will not be successfully authorized to access the account, as the stranger's identification credentials, even if valid (i.e. authentic), had not previously been set to be eligible to access the account.

Similarly, when someone tries to log on to a computer, they are usually first requested to identify themselves with a login name and support that with a password. Afterwards, this combination is checked against an existing login-password validity record to check if the combination is authentic. If so, the user becomes authenticated (i.e. the identification he supplied in step 1 is valid, or authentic). Finally, a set of pre-defined permissions and restrictions for that particular login name is assigned to this user, which completes the final step, authorization.
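The steps above (identify, authenticate against a validity record, then look up permissions) can be sketched in Python; the account records, salts, iteration counts, and permission sets below are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical validity records: login name -> (salt, salted password hash),
# plus the pre-defined permissions assigned to each login.
USERS = {"alice": (b"s1", hashlib.pbkdf2_hmac("sha256", b"wonderland", b"s1", 50_000))}
PERMISSIONS = {"alice": {"read", "write"}}

def authenticate(login: str, password: str) -> bool:
    """Check the login/password combination against the validity record."""
    record = USERS.get(login)
    if record is None:
        return False
    salt, stored = record
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 50_000)
    return hmac.compare_digest(candidate, stored)

def authorize(login: str, action: str) -> bool:
    """Look up the pre-defined permissions for the (already authenticated) login."""
    return action in PERMISSIONS.get(login, set())
```

Keeping `authenticate` and `authorize` as separate functions mirrors the distinction the text draws: a genuine credential can authenticate successfully while the authorization check still denies the action.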

Even though authorization cannot occur without authentication, the former term is sometimes used to mean the combination of both.

To distinguish "authentication" from the closely related "authorization", the shorthand notations A1 (authentication), A2 (authorization) as well as AuthN / AuthZ (AuthR) or Au / Az are used in some communities.[9]

Delegation has normally been considered part of the authorization domain. Recently, authentication has also been used for various types of delegation tasks. Delegation in IT networks is a new but evolving field.[10]

Access control

Main article: Access control

One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, granting privileges established for that identity. One such procedure involves the use of Layer 8, which allows IT administrators to identify users, control users' Internet activity in the network, set user-based policies and generate reports by username. Common examples of access control involving authentication include:

Asking for photo ID when a contractor first arrives at a house to perform work.

Using a CAPTCHA as a means of asserting that a user is a human being and not a computer program.

Using a one-time password (OTP), received on a network-enabled device such as a mobile phone, as an authentication password/PIN.

A computer program using a blind credential to authenticate to another program

Entering a country with a passport

Logging in to a computer

Using a confirmation E-mail to verify ownership of an e-mail address

Using an Internet banking system

Withdrawing cash from an ATM

In some cases, ease of access is balanced against the strictness of access checks. For example, the credit card network does not require a personal identification number for authentication of the claimed identity; and a small transaction usually does not even require a signature of the authenticated person for proof of authorization of the transaction. The security of the system is maintained by limiting distribution of credit card numbers, and by the threat of punishment for fraud.

Security experts argue that it is impossible to prove the identity of a computer user with absolute certainty. It is only possible to apply one or more tests which, if passed, have been previously declared to be sufficient to proceed. The problem is to determine which tests are sufficient, and many such are inadequate. Any given test can be spoofed one way or another, with varying degrees of difficulty.

Computer security experts are now also recognising that, despite extensive efforts, as a business, research and network community we still do not have a secure understanding of the requirements for authentication in a range of circumstances. This lack of understanding is a significant barrier to identifying optimum methods of authentication. Major questions are:

What is authentication for?

Who benefits from authentication/who is disadvantaged by authentication failures?

What disadvantages can effective authentication actually guard against?

Colloquialism

From Wikipedia, the free encyclopedia

"Colloquial name" redirects here. For usage in biological nomenclature, see Common name.

A colloquialism is a word, phrase or other form used in informal language. Dictionaries often display colloquial words and phrases with the abbreviation colloq. as an identifier.[1] Colloquial language or informal language is a variety of language commonly employed in conversation or other communication in informal situations. The word colloquial, by its etymology, originally referred to speech as distinguished from writing, but colloquial register is fundamentally about the degree of informality or casualness rather than the medium, and some usage commentators thus prefer the term casualism.

Contents

1 Usage

2 Distinction from other styles

3 See also

4 References

5 External links

Usage

Colloquial language is distinct from formal speech or formal writing.[2] It is the variety of language that speakers typically use when they are relaxed and not especially self-conscious.[3]

Some colloquial speech contains a great deal of slang, but some contains no slang at all. Slang is permitted in colloquial language, but it is not a necessary element.[3] Other examples of colloquial usage in English include contractions or profanity.[3]

In the philosophy of language the term "colloquial language" refers to ordinary natural language, as distinct from specialized forms used in logic or other areas of philosophy.[4] In the field of logical atomism, meaning is evaluated in a different way than with more formal propositions.

A colloquial name or familiar name is a name or term commonly used to identify a person or thing in informal language, in place of another usually more formal or technical name.[5]

Distinction from other styles

Colloquialisms are distinct from slang or jargon. Slang refers to words used only by specific social groups, such as teenagers or soldiers.[6] Colloquial language may include slang, but mostly forms such as contractions or other informal words known to most native speakers of the language.[6]

Jargon is terminology that is especially defined in relationship to a specific activity, profession or group. The term refers to the language used by people who work in a particular area or who have a common interest. Much like slang, it is a kind of shorthand used to express ideas that are frequently discussed between members of a group, though it can also be developed deliberately using chosen terms.[7] While a standard term may be given a more precise or unique usage amongst practitioners of relevant disciplines, it is often reported that jargon is a barrier to communication for those people unfamiliar with the respective field.[citation needed]

Code (cryptography)

From Wikipedia, the free encyclopedia

For other uses, see Code (disambiguation).

In cryptology, a code is a method used to transform a message into an obscured form so it cannot be understood. Special information or a key is required to read the original message. The usual method is to use a codebook with a list of common phrases or words matched with a codeword. Encoded messages are sometimes termed codetext, while the original message is usually referred to as plaintext.

Terms like code and cipher are often used to refer to any form of encryption. However, there is an important distinction between codes and ciphers in technical work; it is, essentially, the scope of the transformation involved. Codes operate at the level of meaning; that is, words or phrases are converted into something else. Ciphers work at the level of individual letters, or small groups of letters, or even, in modern ciphers, with individual bits. While a code might transform "change" into "CVGDK" or "cocktail lounge", a cipher transforms elements below the semantic level, i.e., below the level of meaning. The "a" in "attack" might be converted to "Q", the first "t" to "f", the second "t" to "3", and so on. Ciphers are more convenient than codes in some situations, there being no need for a codebook, with its inherently limited number of valid messages, and the possibility of fast automatic operation on computers.
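The distinction can be made concrete with a toy Python sketch; the codebook entries and the cipher alphabet below are invented for illustration. The code maps whole words (units of meaning) to codegroups, while the cipher substitutes individual letters regardless of meaning.

```python
import string

# A toy codebook: operates on whole words -- the level of meaning.
CODEBOOK = {"attack": "CVGDK", "dawn": "JAGUAR", "at": "OAK"}
DECODE = {v: k for k, v in CODEBOOK.items()}

def encode(plaintext: str) -> str:
    # Words not in the (inherently limited) codebook pass through unchanged.
    return " ".join(CODEBOOK.get(w, w) for w in plaintext.split())

# A toy substitution cipher: operates on letters, below the level of meaning.
KEY = str.maketrans(string.ascii_lowercase, "qwertyuiopasdfghjklzxcvbnm")

def encipher(plaintext: str) -> str:
    return plaintext.translate(KEY)
```

Note the practical contrast mentioned in the text: `encipher` handles any message at all, while `encode` can only express what its codebook contains.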

Codes were long believed to be more secure than ciphers, since (if the compiler of the codebook did a good job) there is no pattern of transformation which can be discovered, whereas ciphers use a consistent transformation, which can potentially be identified and reversed (except in the case of the one-time pad).

Contents

1 One- and two-part codes

2 One-time code

3 Idiot code

4 Cryptanalysis of codes

5 Superencipherment

6 References

7 See also

One- and two-part codes

Codes are defined by "codebooks" (physical or notional), which are dictionaries of codegroups listed with their corresponding plaintext. Codes originally had the codegroups assigned in 'plaintext order' for the convenience of the code designer, or the encoder. For example, in a code using numeric code groups, a plaintext word starting with "a" would have a low-value group, while one starting with "z" would have a high-value group. The same codebook could be used to "encode" a plaintext message into a coded message or "codetext", and "decode" a codetext back into a plaintext message.

However, such "one-part" codes had a certain predictability that made it easier for others to notice patterns and "crack" or "break" the message, revealing the plaintext, or part of it, and, at the same time, gradually reconstruct the codebook. In order to make life more difficult for codebreakers, codemakers designed codes with no predictable relationship between the codegroups and the ordering of the matching plaintext. In practice, this meant that two codebooks were now required, one to find codegroups for encoding, the other to look up codegroups to find plaintext for decoding. Students of foreign languages work much the same way; for, say, a French-speaking person learning to speak English, there is need for both an English-French and a French-English dictionary. Such "two-part" codes required more effort to develop, and twice as much effort to distribute (and discard safely when replaced), but they were harder to break.
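A minimal sketch of the difference, using an invented five-word vocabulary: the one-part book assigns codegroups in alphabetical order, so one dictionary serves both directions, while the two-part code shuffles the assignment and therefore needs a separate inverse book for decoding.

```python
import random

WORDS = ["abort", "attack", "bulldozer", "dawn", "zebra"]  # already alphabetical

# One-part code: codegroups assigned in plaintext (alphabetical) order,
# so the same book can encode and decode -- but the ordering is predictable.
one_part = {w: 10000 + i * 1000 for i, w in enumerate(WORDS)}

# Two-part code: the assignment is shuffled, destroying the predictable
# ordering; a second, inverse book is needed for decoding.
rng = random.Random(42)            # fixed seed so the sketch is repeatable
groups = [10000 + i * 1000 for i in range(len(WORDS))]
rng.shuffle(groups)
two_part_encode = dict(zip(WORDS, groups))
two_part_decode = {g: w for w, g in two_part_encode.items()}
```

In the one-part book, knowing one word's codegroup immediately bounds its alphabetical neighbours, which is exactly the weakness the cryptanalysis section below exploits; the two-part book gives the codebreaker no such foothold.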

One-time code

A one-time code is a prearranged word, phrase or symbol that is intended to be used only once to convey a simple message, often the signal to execute or abort some plan or confirm that it has succeeded or failed. One-time codes are often designed to be included in what would appear to be an innocent conversation. Done properly they are almost impossible to detect, though a trained analyst monitoring the communications of someone who has already aroused suspicion might be able to recognize a comment like "Aunt Bertha has gone into labor" as having an ominous meaning. Famous examples of one-time codes include:

"One if by land; two if by sea", made famous in the poem "Paul Revere's Ride" by Henry Wadsworth Longfellow

"Climb Mount Niitaka" - the signal to Japanese planes to begin the attack on Pearl Harbor

During World War II the British Broadcasting Corporation's overseas service frequently included "personal messages" as part of its regular broadcast schedule. The seemingly nonsensical stream of messages read out by announcers was actually a series of one-time codes intended for Special Operations Executive (SOE) agents operating behind enemy lines. An example might be "The princess wears red shoes" or "Mimi's cat is asleep under the table". Each code message was read out twice. By such means, the French Resistance was instructed to start sabotaging rail and other transport links the night before D-Day.

"Over all of Spain, the sky is clear" was a signal (broadcast on radio) to start the nationalist military revolt in Spain on July 17, 1936.

Sometimes messages are not prearranged and rely on shared knowledge hopefully known only to the recipients. An example is the telegram sent to U.S. President Harry Truman, then at the Potsdam Conference to meet with Soviet premier Joseph Stalin, informing Truman of the first successful test of an atomic bomb.

"Operated on this morning. Diagnosis not yet complete but results seem satisfactory and already exceed expectations. Local press release necessary as interest extends great distance. Dr. Groves pleased. He returns tomorrow. I will keep you posted."

See also one-time pad, an unrelated cipher algorithm.

Idiot code

An idiot code is a code that is created by the parties using it. This type of communication is akin to the hand signals used by armies in the field.

Example: Any sentence where 'day' and 'night' are used means 'attack'. The location mentioned in the following sentence specifies the location to be attacked.

Plaintext: Attack X.

Codetext: We walked day and night through the streets but couldn't find it! Tomorrow we'll head into X.

An early use of the term appears to be by George Perrault, a character in the science fiction book Friday[1] by Robert A. Heinlein:

The simplest sort [of code] and thereby impossible to break. The first ad told the person or persons concerned to carry out number seven or expect number seven or it said something about something designated as seven. This one says the same with respect to code item number ten. But the meaning of the numbers cannot be deduced through statistical analysis because the code can be changed long before a useful statistical universe can be reached. It's an idiot code... and an idiot code can never be broken if the user has the good sense not to go too often to the well.

Terrorism expert Magnus Ranstorp said that the men who carried out the September 11, 2001, attacks on the United States used basic e-mail and what he calls "idiot code" to discuss their plans.[2]

Cryptanalysis of codes

While solving a monoalphabetic substitution cipher is easy, solving even a simple code is difficult. Decrypting a coded message is a little like trying to translate a document written in a foreign language, with the task basically amounting to building up a "dictionary" of the codegroups and the plaintext words they represent.

One fingerhold on a simple code is the fact that some words are more common than others, such as "the" or "a" in English. In telegraphic messages, the codegroup for "STOP" (i.e., end of sentence or paragraph) is usually very common. This helps define the structure of the message in terms of sentences, if not their meaning, and this is cryptanalytically useful.
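This kind of frequency fingerhold is easy to sketch: counting codegroup occurrences across a pile of intercepted codetexts (the intercepts below are invented) surfaces candidates for high-frequency words such as "the", "a", or the telegraphic "STOP".

```python
from collections import Counter

def frequent_codegroups(codetexts, top=3):
    """Count codegroup frequencies across many intercepted codetexts.

    The most common groups are candidates for high-frequency plaintext
    words; in telegraphic traffic, the top group is often "STOP".
    """
    counts = Counter(group for text in codetexts for group in text.split())
    return counts.most_common(top)
```

In real cryptanalysis the same tally would be cross-referenced against external information (senders, timing, events) as the list below describes, but the counting step itself is this simple.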

Further progress can be made against a code by collecting many codetexts encrypted with the same code and then using information from other sources:

spies,

newspapers,

diplomatic cocktail party chat,

the location from where a message was sent,

where it was being sent to (i.e., traffic analysis)

the time the message was sent,

events occurring before and after the message was sent

the normal habits of the people sending the coded messages

etc.

For example, a particular codegroup found almost exclusively in messages from a particular army and nowhere else might very well indicate the commander of that army. A codegroup that appears in messages preceding an attack on a particular location may very well stand for that location.

Of course, cribs can be an immediate giveaway to the definitions of codegroups. As codegroups are determined, they can gradually build up a critical mass, with more and more codegroups revealed from context and educated guesswork. One-part codes are more vulnerable to such educated guesswork than two-part codes, since if the codenumber "26839" of a one-part code is determined to stand for "bulldozer", then the lower codenumber "17598" will likely stand for a plaintext word that starts with "a" or "b". At least, this holds for simple one-part codes.

Various tricks can be used to "plant" or "sow" information into a coded message, for example by executing a raid at a particular time and location against an enemy, and then examining code messages sent after the raid. Coding errors are a particularly useful fingerhold into a code; people reliably make errors, sometimes disastrous ones. Of course, planting data and exploiting errors works against ciphers as well.

The most obvious and, in principle at least, simplest way of cracking a code is to steal the codebook through bribery, burglary, or raiding parties — procedures sometimes glorified by the phrase "practical cryptography" — and this is a weakness for both codes and ciphers, though codebooks are generally larger and used longer than cipher keys. While a good code may be harder to break than a cipher, the need to write and distribute codebooks is seriously troublesome.

Constructing a new code is like building a new language and writing a dictionary for it; it was an especially big job before computers. If a code is compromised, the entire task must be done all over again, and that means a lot of work for both cryptographers and the code users. In practice, when codes were in widespread use, they were usually changed on a periodic basis to frustrate codebreakers, and to limit the useful life of stolen or copied codebooks.

Once codes have been created, codebook distribution is logistically clumsy, and increases the chances the code will be compromised. There is a saying that "Three people can keep a secret if two of them are dead" (attributed to Benjamin Franklin), and though it may be something of an exaggeration, a secret becomes harder to keep if it is shared among several people. Codes can be thought reasonably secure if they are only used by a few careful people, but if whole armies use the same codebook, security becomes much more difficult.

In contrast, the security of ciphers is generally dependent on protecting the cipher keys. Cipher keys can be stolen and people can betray them, but they are much easier to change and distribute.

Superencipherment

In more recent practice, it became typical to encipher a message after first encoding it, so as to provide greater security by increasing the degree of difficulty for cryptanalysts. With a numerical code, this was commonly done with an "additive" — simply a long key number which was added, digit by digit, to the code groups, modulo 10. Unlike the codebooks, additives would be changed frequently. The famous Japanese Navy code, JN-25, was of this design, as were several of the (confusingly named) Royal Navy Cyphers used after WWI and into WWII.
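A non-carrying additive of this sort is straightforward to sketch in Python; the codegroup and key digits below are illustrative. The addition is digit by digit, modulo 10, with no carry between columns, so the recipient who knows the additive can strip it off exactly.

```python
def superencipher(codegroup: str, additive: str) -> str:
    """Add the key to a numeric codegroup digit by digit, modulo 10 (no carry)."""
    return "".join(str((int(c) + int(k)) % 10)
                   for c, k in zip(codegroup, additive))

def strip_additive(enciphered: str, additive: str) -> str:
    """Recipient side: subtract the key digit by digit, modulo 10."""
    return "".join(str((int(c) - int(k)) % 10)
                   for c, k in zip(enciphered, additive))
```

For example, codegroup 26839 under additive key 90210 becomes 16049; subtracting the same key digit by digit recovers 26839. Because the additive changes frequently while the codebook does not, a stolen codebook alone is not enough to read current traffic.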

One might wonder why a code would be used if it had to be enciphered to provide security. As well as providing security, a well-designed code can also compress the message, and provide some degree of automatic error correction. For ciphers, the same degree of error correction has generally required the use of computers.

References

Friday (1982) by Robert A. Heinlein

Radio Free Europe / Radio Liberty: "Middle East: Islamic Militants Take Jihad To The Internet" By Jeffrey Donovan, 16 June 2004.

Kahn, David (1996). The Codebreakers: The Comprehensive History of Secret Communication from Ancient Times to the Internet. Scribner.

Pickover, Cliff (2000). Cryptorunes: Codes and Secret Writing. Pomegranate Communications. ISBN 978-0-7649-1251-1.

Code word

From Wikipedia, the free encyclopedia

For other uses, see Code word (disambiguation).

In communication, a code word is an element of a standardized code or protocol. Each code word is assembled in accordance with the specific rules of the code and assigned a unique meaning. Code words are typically used for reasons of reliability, clarity, brevity, or secrecy.

Computer

From Wikipedia, the free encyclopedia

"Computer technology" and "Computer system" redirect here. For the company, see Computer Technology Limited. For other uses, see Computer (disambiguation) and Computer system (disambiguation).

A computer is a general-purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically. Since a sequence of operations can be readily changed, the computer can solve more than one kind of problem.

Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices allow information to be retrieved from an external source, and the result of operations saved and retrieved.

Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[1]

Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are what most people consider as “computers.” However, the embedded computers found in many devices from MP3 players to fighter aircraft and from electronic toys to industrial robots are the most numerous.

Contents

1 Etymology

2 History

2.1 Pre-twentieth century

2.2 First general-purpose computing device

2.3 Later Analog computers

2.4 Digital computer development

2.4.1 Electromechanical

2.4.2 Vacuum tubes and digital electronic circuits

2.4.3 Stored programs

2.4.4 Transistors

2.4.5 Integrated circuits

2.5 Mobile computers become dominant

3 Programs

3.1 Stored program architecture

3.2 Machine code

3.3 Programming language

3.3.1 Low-level languages

3.3.2 High-level languages/Third Generation Language

3.4 Fourth Generation Languages

3.5 Program design

3.6 Bugs

4 Components

4.1 Control unit

4.2 Central Processing unit (CPU)

4.3 Arithmetic logic unit (ALU)

4.4 Memory

4.5 Input/output (I/O)

4.6 Multitasking

4.7 Multiprocessing

5 Networking and the Internet

5.1 Computer architecture paradigms

6 Misconceptions

6.1 Unconventional computing

7 Future

8 Further topics

8.1 Artificial intelligence

9 Hardware

9.1 History of computing hardware

9.2 Other hardware topics

10 Software

11 Languages

11.1 Firmware

11.2 Liveware

12 Types of computers

12.1 Based on uses

12.2 Based on sizes

13 Input Devices

14 Output Devices

15 Professions and organizations

16 See also

17 Notes

18 References

19 External links

Etymology

The first known use of the word "computer" was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number." It referred to a person who carried out calculations, or computations. The word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.[3]

History

Main article: History of computing hardware


Pre-twentieth century

The Ishango bone

Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers.[4][5] The use of counting rods is one example.

Suanpan (the number represented on this abacus is 6,302,715,408)

The abacus was initially used for arithmetic tasks. What is now called the Roman abacus was used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.


The ancient Greek-designed Antikythera mechanism, dating from between 150 and 100 BC, is the world's oldest analog computer.

The Antikythera mechanism is believed to be the earliest mechanical analog "computer", according to Derek J. de Solla Price.[6] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.

Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.[7] The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer[8][9] and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235.[10] Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[11] an early fixed-wired knowledge processing machine[12] with a gear train and gear-wheels,[13] circa 1000 AD.

The sector, a calculating instrument used for solving problemsin proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.


The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.

A slide rule

The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. To save space and for ease of reading, these are typically circular devices rather than the classic linear slide rule shape. A popular example is the E6B.

In the 1770s Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.[14]

The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.

The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 Lord Kelvin had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[15] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.

First general-purpose computing device

A portion of Babbage's Difference engine.

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer",[16] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[17][18]


The machine was about a century ahead of its time. All the parts for his machine had to be made by hand; this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed not only to difficulties of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

Later Analog computers

Sir William Thomson's third tide-predicting machine design, 1879–81

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[19]

The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[15]


The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious.

By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remain in use in some specialized applications such as education (control systems) and aircraft (slide rule).

Digital computer development

The principle of the modern computer was first described by mathematician and pioneering computer scientist Alan Turing, who set out the idea in his seminal 1936 paper,[20] On Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.

He also introduced the notion of a 'Universal Machine' (now known as a Universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[21] Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.


Electromechanical

By 1938 the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.

Replica of Zuse's Z3, the first fully automatic, digital (electromechanical) computer.

Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.[22]

In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer.[23][24] The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz.[25] Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.[26] The Z3 was probably a complete Turing machine.

Vacuum tubes and digital electronic circuits


Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[19] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[27] the first "automatic electronic digital computer".[28] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[29]

Colossus was the first electronic digital programmable computing device, and was used to break German ciphers during World War II.

During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[29] He spent eleven months from early February 1943 designing and building the first Colossus.[30] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[31] and attacked its first message on 5 February.[29]

Colossus was the world's first electronic digital programmable computer.[19] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of Boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.[32][33]

ENIAC was the first Turing-complete device, and performed ballistics trajectory calculations for the United States Army.

The US-built ENIAC[34] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus, it was much faster and more flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches.

It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[35]


Stored programs

A section of the Manchester Small-Scale Experimental Machine, the first stored-program computer.

Early computing machines had fixed programs. Changing a machine's function required the re-wiring and re-structuring of the machine.[29] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report ‘Proposed Electronic Calculator’ was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[19]

Ferranti Mark 1, c. 1951.

The Manchester Small-Scale Experimental Machine, nicknamed Baby, was the world's first stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[36] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[37] Although the computer was considered "small and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer.[38] As soon as the SSEM had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1.

The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.[39] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[40] In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951[41] and ran the world's first regular routine office computer job.

Transistors

A bipolar junction transistor

The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.


At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[42] Their first transistorised computer, the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[43] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[44][45]

Integrated circuits

The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[46]

The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[47] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[48] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated.”[49][50] Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[51] His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.

This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[52] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[53]

Mobile computers become dominant

With the continued miniaturization of computing resources, and advancements in portable battery life, portable computers grew in popularity in the 2000s.[54] The same developments that spurred the growth of laptop computers and other portable computers allowed manufacturers to integrate computing resources into cellular phones. These so-called smartphones and tablets run on a variety of operating systems and have become the dominant computing device on the market, with manufacturers reporting having shipped an estimated 237 million devices in 2Q 2013.[55]

Programs

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language.

In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Stored program architecture

Main articles: Computer program and Computer programming


Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England

This section applies to most common RAM machine-based computers.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:

  begin:
  addi $8, $0, 0           # initialize sum to 0
  addi $9, $0, 1           # set first number to add = 1
  loop:
  slti $10, $9, 1000       # check if the number is less than 1000
  beq $10, $0, finish      # if not, exit the loop
  add $8, $8, $9           # update sum
  addi $9, $9, 1           # get next number
  j loop                   # repeat the summing process
  finish:
  add $2, $8, $0           # put sum in output register

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.
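For comparison, the same computation can be sketched in a high-level language. The following Python fragment (an illustrative equivalent, not part of the original article) mirrors the loop condition of the assembly example above, which keeps adding while the number is less than 1,000:

```python
# Sum the integers 1 through 999, mirroring the assembly program's
# loop condition (keep adding while the number is less than 1000).
total = 0       # corresponds to the running sum held in register $8
number = 1      # corresponds to the next number held in register $9
while number < 1000:
    total += number
    number += 1
print(total)    # → 499500
```

The high-level version expresses the same flow of control (initialize, test, add, repeat) in a handful of readable lines, which is precisely the convenience that high-level languages were created to provide.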

Machine code

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program[citation needed], architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
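The idea of a program as a list of numbers can be illustrated with a toy sketch in Python (the opcodes 1, 2 and 3 here are invented for this example and do not belong to any real instruction set): memory holds plain numbers, and a small loop interprets them as instructions.

```python
# Toy stored-program machine: memory holds numbers that are
# interpreted as (opcode, operand) instructions.
# Invented opcodes: 1 = LOAD value into the accumulator,
# 2 = ADD value to the accumulator, 3 = HALT.
memory = [1, 10,   # LOAD 10
          2, 32,   # ADD 32
          3]       # HALT
acc = 0            # accumulator register
pc = 0             # program counter: index of the next instruction
while True:
    opcode = memory[pc]
    if opcode == 1:            # LOAD
        acc = memory[pc + 1]
        pc += 2
    elif opcode == 2:          # ADD
        acc += memory[pc + 1]
        pc += 2
    elif opcode == 3:          # HALT
        break
print(acc)  # → 42
```

Because the program lives in the same list as any other data, the machine could, in principle, read or rewrite its own instructions, which is the essence of the stored-program idea.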

While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[56] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
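What an assembler does can be sketched in miniature (the mnemonics and opcode numbers below are invented for illustration, not those of any real assembler): it translates each mnemonic line into its numeric machine form.

```python
# Map each invented mnemonic to an invented numeric opcode.
OPCODES = {"LOAD": 1, "ADD": 2, "HALT": 3}

def assemble(lines):
    """Translate 'MNEMONIC [operand]' lines into a flat list of numbers."""
    machine_code = []
    for line in lines:
        parts = line.split()
        machine_code.append(OPCODES[parts[0]])         # opcode number
        machine_code.extend(int(p) for p in parts[1:])  # numeric operands
    return machine_code

program = ["LOAD 10", "ADD 32", "HALT"]
print(assemble(program))  # → [1, 10, 2, 32, 3]
```

A real assembler also resolves symbolic labels into addresses and emits a binary file, but the core job is this mechanical mnemonic-to-number translation.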

A 1970s punched card containing one line from a FORTRAN program. The card reads: “Z(1) = Y + W(1)” and is labeled “PROJ039” for identification purposes.

Programming language

Main article: Programming language

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.

Low-level languages


Main article: Low-level programming language

Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[57]

High-level languages/Third Generation Language

Main article: High-level programming language

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[58] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

Fourth Generation Languages

Fourth-generation languages (4GLs) are less procedural than third-generation languages. The benefit of a 4GL is that it provides ways to obtain information without requiring the direct help of a programmer. An example of a 4GL is SQL.
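The declarative flavor of a 4GL can be illustrated with a SQL query run through Python's built-in sqlite3 module (the table and data here are invented for the example): the query states what information is wanted, not how to loop over the stored rows.

```python
import sqlite3

# In-memory database with an invented example table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, score INTEGER)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [("Ana", 90), ("Budi", 75), ("Citra", 85)])

# Declarative query: describe WHAT is wanted (names scoring above 80),
# and let the database engine decide HOW to retrieve it.
rows = conn.execute(
    "SELECT name FROM students WHERE score > 80 ORDER BY name").fetchall()
print([r[0] for r in rows])  # → ['Ana', 'Citra']
```

The equivalent third-generation program would spell out the iteration, comparison and sorting itself; the SQL statement leaves those procedural details to the database engine.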

Program design



Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within languages, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Bugs

Main article: Software bug

The actual first computer bug, a moth found trapped on a relayof the Harvard Mark II computer

Errors in computer programs are called “bugs.” They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to “hang,” becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[59]

Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term “bugs” in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[60]

Components

Main articles: Central processing unit and Microprocessor

Video demonstrating the standard components of a "slimline" computer

A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.

Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a “1”, and when off it represents a “0” (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
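
As a toy illustration of gates controlling other gates (the function names below are invented, and each value stands for a single bit), a half adder built from basic gates shows how such circuits can already perform arithmetic:

```python
# Hypothetical sketch: each "wire" carries a bit (0 or 1), and each gate
# computes its output bit from its input bits.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# XOR expressed purely in terms of the gates above.
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

# A half adder: XOR gives the sum bit, AND gives the carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum, carry)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Chaining such adders bit by bit is, in essence, how a hardware adder in an ALU is built.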

Control unit

Main articles: CPU design and Control unit

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system

The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[61] Control systems in advanced computers may change the order of execution of some instructions to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[62]

The control system's function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

Read the code for the next instruction from the cell indicated by the program counter.

Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.

Increment the program counter so it points to the next instruction.

Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.

Provide the necessary data to an ALU or register.

If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.

Write the result from the ALU back to a memory location or to a register or perhaps an output device.

Jump back to step (1).
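
The cycle above can be sketched in a few lines of Python. The instruction set (LOAD, ADD, STORE, HALT) and the memory layout here are invented purely for illustration, not taken from any real CPU:

```python
# Memory holds both instructions (at cells 0-3) and data (at cells 10-12).
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", 0),
          10: 2, 11: 3, 12: 0}
pc, acc = 0, 0                     # program counter and accumulator register

while True:
    op, addr = memory[pc]          # 1-2. fetch the instruction at the PC
    pc += 1                        # 3. increment PC to the next instruction
    if op == "LOAD":               # 4-7. decode and execute
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break                      # stop instead of jumping back to step 1

print(memory[12])  # 5: cell 12 now holds 2 + 3
```

Because instructions and data share the same memory, a real program could just as easily modify cells 0-3, which is what self-modifying code and jump instructions exploit.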

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.

Central processing unit (CPU)

The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components, but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

Arithmetic logic unit (ALU)

Main article: Arithmetic logic unit

The ALU is capable of performing two classes of operations: arithmetic and logic.[63]

The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the

simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”).
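
As a minimal sketch of this decomposition (the function name is invented for illustration), a machine whose ALU supports only addition can still multiply, just more slowly:

```python
def multiply_by_addition(a, b):
    """Multiply two non-negative integers using only addition,
    the way a machine without a hardware multiplier might."""
    total = 0
    for _ in range(b):
        total += a          # the only arithmetic the "ALU" performs
    return total

print(multiply_by_addition(6, 7))  # 42
```

Real ALUs without a multiplier use faster shift-and-add schemes, but the principle of building complex operations from simple ones is the same.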

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.

Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[64] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.

Memory

Main article: Computer data storage

Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357”or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595.” The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not

differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
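
The byte conventions above can be sketched directly (the helper name is invented): an unsigned byte holds 0 to 255, two's complement reinterprets 128 to 255 as −128 to −1, and larger values span several consecutive bytes:

```python
def to_twos_complement(byte):
    """Interpret an unsigned byte (0-255) as a signed value (-128..127)."""
    return byte - 256 if byte >= 128 else byte

print(to_twos_complement(255))   # -1
print(to_twos_complement(127))   # 127

# A 16-bit number stored in two consecutive bytes (low byte first):
low, high = 0x39, 0x05
print(high * 256 + low)          # 1337
```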

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties:

random-access memory or RAM

read-only memory or ROM

RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded

computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[65]

In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Input/output (I/O)

Main article: Input/output

Hard disk drives are common storage devices used with computers.

I/O is the means by which a computer exchanges information with the outside world.[66] Devices that provide input or output to the computer are called peripherals.[67] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern

desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Multitasking

Main article: Computer multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[68]

One means by which this is done is with a special signal called an interrupt, which can periodically cause the computerto stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time,” then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program is allocated a “slice” of time in turn.[69]
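
A minimal sketch of this time-sharing scheme (the function and program names are invented): each "program" is handed a slice of work in turn, with a switch on every simulated interrupt, and unfinished programs rejoin the back of the queue:

```python
from collections import deque

def run_time_shared(programs, slice_size=2):
    """programs: dict of name -> units of work remaining.
    Returns the log of (program, work done) per time slice."""
    queue = deque(programs.items())
    log = []
    while queue:
        name, remaining = queue.popleft()     # next program gets the CPU
        done = min(slice_size, remaining)     # run until the "interrupt"
        log.append((name, done))
        if remaining - done > 0:              # not finished: requeue it
            queue.append((name, remaining - done))
    return log

print(run_time_shared({"editor": 3, "compiler": 5}))
# [('editor', 2), ('compiler', 2), ('editor', 1), ('compiler', 2), ('compiler', 1)]
```

Real schedulers add priorities and block programs that are waiting on I/O, but the round-robin core is the same.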

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.

Multiprocessing

Main article: Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[70] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called “embarrassingly parallel” tasks.

Networking and the Internet

Main articles: Computer networking and Internet

Visualization of a portion of the routes on the Internet

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[71]

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[72] The technologies that made the Arpanet possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. “Wireless” networking, often utilizing mobile phone networks, has meant networking is

becoming increasingly ubiquitous even in mobile computing environments.

Computer architecture paradigms

There are many types of computer architectures:

Quantum computer vs Chemical computer

Scalar processor vs Vector processor

Non-Uniform Memory Access (NUMA) computers

Register machine vs Stack machine

Harvard architecture vs von Neumann architecture

Cellular architecture

Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[73]

Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle,capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

Misconceptions

Main articles: Human computer and Harvard Computers

Women as computers in NACA High Speed Flight Station "Computer Room"

A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word “computer” is synonymous with a personal electronic computer, the modern[74] definition of a computer is literally “A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”[75] Any device which processes information qualifies as a computer, especially if the processing is purposeful. Even a human is a computer, in this sense.

Unconventional computing

Main article: Unconventional computing

Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors. However, conceptually computational systems as flexible as a personal computer can be built out of almost anything. For example, a computer can be made out of billiard balls (the billiard ball computer), an often-quoted example.[citation needed] More realistically, modern computers are made out of transistors made of photolithographed semiconductors.

Future

There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory

capacity and operating speed. However, different designs of computers can give very different performance for particular problems; for example, quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.

Further topics

Glossary of computers

Artificial intelligence

A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning.

Hardware

Main articles: Computer hardware and Personal computer hardware

The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

History of computing hardware

Main article: History of computing hardware

First generation (mechanical/electromechanical)

    Calculators: Pascal's calculator, Arithmometer, Difference engine, Quevedo's analytical machines

    Programmable devices: Jacquard loom, Analytical engine, IBM ASCC/Harvard Mark I, Harvard Mark II, IBM SSEC, Z1, Z2, Z3

Second generation (vacuum tubes)

    Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120

    Programmable devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22

Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)

    Mainframes: IBM 7090, IBM 7080, IBM System/360, BUNCH

    Minicomputers: PDP-8, PDP-11, IBM System/32, IBM System/36

Fourth generation (VLSI integrated circuits)

    Minicomputers: VAX, IBM System i

    4-bit microcomputers: Intel 4004, Intel 4040

    8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80

    16-bit microcomputers: Intel 8088, Zilog Z8000, WDC 65816/65802

    32-bit microcomputers: Intel 80386, Pentium, Motorola 68000, ARM

    64-bit microcomputers:[76] Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64, ARMv8-A

    Embedded computers: Intel 8048, Intel 8051

    Personal computers: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet PC, Wearable computer

Theoretical/experimental

    Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics-based computer

Other hardware topics

Peripheral device (input/output)

    Input: Mouse, keyboard, joystick, image scanner, webcam, graphics tablet, microphone

    Output: Monitor, printer, loudspeaker

    Both: Floppy disk drive, hard disk drive, optical disc drive, teleprinter

Computer buses

    Short range: RS-232, SCSI, PCI, USB

    Long range (computer networking): Ethernet, ATM, FDDI

Software

Main article: Computer software

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called “firmware.”

Operating system / system software

    Unix and BSD: UNIX System V, IBM AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems

    GNU/Linux: List of Linux distributions, Comparison of Linux distributions

    Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows Me, Windows XP, Windows Vista, Windows 7, Windows 8

    DOS: 86-DOS (QDOS), IBM PC DOS, MS-DOS, DR-DOS, FreeDOS

    Mac OS: Mac OS classic, Mac OS X

    Embedded and real-time: List of embedded operating systems

    Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs

Library

    Multimedia: DirectX, OpenGL, OpenAL

    Programming library: C standard library, Standard Template Library

Data

    Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP

    File format: HTML, XML, JPEG, MPEG, PNG

User interface

    Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM, Aqua

    Text-based user interface: Command-line interface, Text user interface

Application software

    Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & time management, Spreadsheet, Accounting software

    Internet access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging

    Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management

    Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing

    Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music

    Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated development environment, Software performance analysis, Revision control, Software configuration management

    Educational: Edutainment, Educational game, Serious game, Flight simulator

    Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction

    Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

Languages

There are thousands of different programming languages, some intended to be general purpose, others useful only for highly specialized applications.

Programming languages

Lists of programming languages

Timeline of programming languages, List of programming languages by category, Generational list of programming languages, List of programming languages, Non-English-based programming languages

Commonly used assembly languages

ARM, MIPS, x86

Commonly used high-level programming languages

Ada, BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal, Object Pascal

Commonly used scripting languages

Bourne script, JavaScript, Python, Ruby, PHP, Perl

Firmware

Firmware is technology that combines both hardware and software, such as the BIOS chip inside a computer. This chip (hardware) is located on the motherboard and has the BIOS setup (software) stored in it.

Liveware

At times, the users working on a system are termed liveware.

Types of computers

Computers can be classified based on their uses.

Cryptanalysis

Close-up of the rotors in a Fialka cipher machine

Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to loosen" or "to untie") is the study of analyzing information systems in order to study the hidden aspects of the systems.[1] Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown.

In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation.

Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve

solving carefully constructed problems in pure mathematics, the best-known being integer factorization.

Contents

1 Overview

1.1 Amount of information available to the attacker

1.2 Computational resources required

1.3 Partial breaks

2 History

2.1 Classical ciphers

2.2 Ciphers from World War I and World War II

2.2.1 Indicator

2.2.2 Depth

2.3 Development of modern cryptography

3 Symmetric ciphers

4 Asymmetric ciphers

5 Attacking cryptographic hash systems

6 Side-channel attacks

7 Quantum computing applications for cryptanalysis

8 See also

8.1 Historic cryptanalysts

9 References

9.1 Notes

9.2 Bibliography

10 Further reading

11 External links

Overview

Given some encrypted data ("ciphertext"), the goal of the cryptanalyst is to gain as much information as possible about the original, unencrypted data ("plaintext").

Amount of information available to the attacker

Attacks can be classified based on what type of information the attacker has available. As a basic starting point it is normally assumed that, for the purposes of analysis, the general algorithm is known; this is Shannon's Maxim "the enemy knows the system", in its turn equivalent to Kerckhoffs' principle. This is a reasonable assumption in practice: throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. (And on occasion, ciphers have been reconstructed through pure deduction; for example, the German Lorenz cipher and the Japanese Purple code, and a variety of classical schemes.)[2]

Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or codetexts.

Known-plaintext: the attacker has a set of ciphertexts to which he knows the corresponding plaintext.

Chosen-plaintext (chosen-ciphertext): the attacker can obtain the ciphertexts (plaintexts) corresponding to an arbitrary set of plaintexts (ciphertexts) of his own choosing.

Adaptive chosen-plaintext: like a chosen-plaintext attack, except the attacker can choose subsequent plaintexts based on information learned from previous encryptions; similarly, the adaptive chosen-ciphertext attack.

Related-key attack: like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known; for example, two keys that differ in one bit.
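
Two of these attack models can be illustrated with a toy Caesar cipher (the cipher, the function name, and the message are chosen purely for illustration; no real cryptosystem is this weak). With known plaintext, a single (plaintext, ciphertext) letter pair reveals the key; under a ciphertext-only attack, the analyst would instead have to fall back on statistics such as letter frequencies:

```python
def encrypt(text, key):
    """Caesar cipher over the uppercase alphabet; negative key decrypts."""
    return "".join(chr((ord(c) - 65 + key) % 26 + 65) for c in text)

plaintext = "ATTACKATDAWN"
ciphertext = encrypt(plaintext, 3)

# Known-plaintext attack: one matching letter pair gives the key directly.
key = (ord(ciphertext[0]) - ord(plaintext[0])) % 26
print(key)                          # 3
print(encrypt(ciphertext, -key))    # ATTACKATDAWN
```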

Computational resources required

Attacks can also be characterised by the resources they require. Those resources include:[citation needed]

Time — the number of computation steps (e.g., test encryptions) which must be performed.

Memory — the amount of storage required to perform the attack.

Data — the quantity and type of plaintexts and ciphertexts required for a particular approach.

It's sometimes difficult to predict these quantities precisely, especially when the attack isn't practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, saying, for example, "SHA-1 collisions now 2^52."[3]

Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force. Never mind that brute-force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break... simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised."[4]
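
A back-of-the-envelope calculation makes Schneier's point concrete: an attack costing 2^110 encryptions is still astronomically expensive, yet vastly "cheaper" than the 2^128 brute-force search, and that gap alone makes it a break:

```python
brute_force = 2**128      # encryptions to try every 128-bit key
attack      = 2**110      # cost of the hypothetical attack

# The attack needs 2^18 times fewer encryptions than brute force.
print(brute_force // attack)   # 262144
```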

Partial breaks

The results of cryptanalysis can also vary in usefulness. For example, cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered:

Total break — the attacker deduces the secret key.

Global deduction — the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key.

Instance (local) deduction — the attacker discovers additional plaintexts (or ciphertexts) not previously known.

Information deduction — the attacker gains some Shannon information about plaintexts (or ciphertexts) not previously known.

Distinguishing algorithm — the attacker can distinguish the cipher from a random permutation.

Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem,[5] so it's possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions.

In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts. It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the

cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system.[4]

History

Main article: History of cryptography

The decrypted Zimmermann Telegram.

Cryptanalysis has coevolved together with cryptography, and the contest can be traced through the history of cryptography: new ciphers being designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: in order to create secure cryptography, you have to design against possible cryptanalysis.[citation needed]

Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587, Mary, Queen of Scots was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes.

In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefited enormously from their joint success in cryptanalysis of the German ciphers, including the Enigma machine and the Lorenz cipher, and of Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything from shortening the war in Europe by up to two years to determining its eventual outcome. The war in the Pacific was similarly helped by 'Magic' intelligence.[6]


Governments have long recognized the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and have established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example GCHQ and the NSA, organizations which are still very active today. In 2004, it was reported that the United States had broken Iranian ciphers (it is unknown, however, whether this was pure cryptanalysis, or whether other factors were involved).[7]

Classical ciphers

First page of Al-Kindi's 9th century Manuscript on Deciphering Cryptographic Messages

See also: Frequency analysis, Index of coincidence and Kasiski examination

Although the actual word "cryptanalysis" is relatively recent (it was coined by William Friedman in 1920), methods for breaking codes and ciphers are much older. The first known recorded explanation of cryptanalysis was given by the 9th-century Arab polymath Al-Kindi (also known as "Alkindus" in Europe) in A Manuscript on Deciphering Cryptographic Messages. This treatise includes a description of the method of frequency analysis (Ibrahim Al-Kadi, 1992). The Italian scholar Giambattista della Porta was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis.[8]

Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more frequently than others; in English, "E" is likely to be the most common letter in any sample of plaintext. Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains.[9]
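
The core of frequency analysis is just letter counting. The following is a minimal Python sketch; the Caesar-shifted sample text and helper names are illustrative, not taken from any historical source:

```python
from collections import Counter

def most_frequent_letters(text, n=5):
    """Tally A-Z frequencies, the core statistic of frequency analysis."""
    letters = [c for c in text.upper() if c.isalpha()]
    return Counter(letters).most_common(n)

# Toy ciphertext: an English sample enciphered with a Caesar shift of 3,
# a simple substitution that does nothing to hide letter statistics.
plaintext = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG THE END THEREOF"
ciphertext = "".join(
    chr((ord(c) - 65 + 3) % 26 + 65) if c.isalpha() else c
    for c in plaintext
)

# The most common ciphertext letter is a likely candidate for "E";
# here plaintext "E" was shifted to "H".
print(most_frequent_letters(ciphertext))
```

On a real intercept the same tally would be compared against standard English letter frequencies rather than eyeballed, but the principle is identical.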

In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96).[10] For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable, "the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher.[11] During World War I, inventors in several countries developed rotor cipher machines, such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system.[12]
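
The repetition that Babbage and Kasiski exploited can be demonstrated in a few lines of Python. This is an illustrative sketch, not their historical procedure: repeated plaintext fragments that happen to align with the repeating key produce repeated ciphertext fragments, and the distances between those repeats are multiples of the key length.

```python
from functools import reduce
from math import gcd

def vigenere_encrypt(plaintext, key):
    """Repeating-key (Vigenere) encryption over the letters A-Z."""
    return "".join(
        chr((ord(c) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
        for i, c in enumerate(plaintext)
    )

def kasiski_distances(ciphertext, seq_len=3):
    """Distances between repeated ciphertext sequences; a repeat caused by
    the key lining up occurs at a multiple of the key length."""
    distances = []
    for i in range(len(ciphertext) - seq_len):
        j = ciphertext.find(ciphertext[i:i + seq_len], i + 1)
        if j != -1:
            distances.append(j - i)
    return distances

ct = vigenere_encrypt("ATTACKATDAWNANDATTACKATDUSK", "KEY")
d = kasiski_distances(ct)
# Every repeat here comes from "ATTACKATD" recurring 15 positions apart,
# and 15 is a multiple of the key length 3.
print(reduce(gcd, d))
```

With several independent repetitions in a longer message, the greatest common divisor of the distances converges on the key length itself, after which each alphabet can be attacked by ordinary frequency analysis.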

Ciphers from World War I and World War II

See also: Cryptanalysis of the Enigma and Cryptanalysis of the Lorenz cipher

Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory.[13] Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended.[14]

In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, when efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era, with the Polish Bomba device, the British Bombe, the use of punched-card equipment, and the Colossus computers, the first electronic digital computers to be controlled by a program.[15][16]

Indicator

With reciprocal machine ciphers such as the Lorenz cipher and the Enigma machine used by Nazi Germany during World War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext and/or ciphertext before the enciphered message. This is termed the indicator, as it indicates to the receiving operator how to set his machine to decipher the message.[17]

Poorly designed and implemented indicator systems allowed first the Poles[18] and then the British at Bletchley Park[19] to break the Enigma cipher system. Similar poor indicator systems allowed the British to identify depths that led to the diagnosis of the Lorenz SZ40/42 cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine.[20]

Depth

Sending two or more messages with the same key is an insecure process. To a cryptanalyst the messages are then said to be "in depth."[21] This may be detected by the messages having the same indicator by which the sending operator informs the receiving operator about the key generator initial settings for the message.[22]

Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, the Vernam cipher enciphers by combining plaintext bit-for-bit with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕):

Plaintext ⊕ Key = Ciphertext

Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext:

Ciphertext ⊕ Key = Plaintext

(In modulo-2 arithmetic, addition is the same as subtraction.) When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts:

Ciphertext1 ⊕ Ciphertext2 = Plaintext1 ⊕ Plaintext2

The individual plaintexts can then be worked out linguistically by trying probable words (or phrases), also known as "cribs," at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component:

(Plaintext1 ⊕ Plaintext2) ⊕ Plaintext1 = Plaintext2

The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) When a recovered plaintext is then combined with its ciphertext, the key is revealed:

Plaintext1 ⊕ Ciphertext1 = Key

Knowledge of a key of course allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them.[20]
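
The modulo-2 identities above can be checked directly in code. In this small Python sketch the key bytes and the two messages are made up purely for illustration:

```python
def xor_bytes(a, b):
    """Bitwise XOR (modulo-2 addition) of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two messages enciphered with the SAME key, i.e. "in depth".
key = b"\x13\x37\xc0\xff\xee\x42\x99\x01\xab\xcd\xef\x55\x21\x7e"
p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT AT SIX"
c1 = xor_bytes(p1, key)
c2 = xor_bytes(p2, key)

# Ciphertext1 xor Ciphertext2: the common key cancels out,
# leaving Plaintext1 xor Plaintext2.
merged = xor_bytes(c1, c2)
assert merged == xor_bytes(p1, p2)

# A correct crib guessed for one plaintext exposes the other...
assert xor_bytes(merged[:6], b"ATTACK") == b"RETREA"

# ...and a fully recovered plaintext gives back the key.
assert xor_bytes(p1, c1) == key
print("depth identities hold")
```

Each assertion corresponds line-for-line to one of the ⊕ equations above, which is why sending two messages under one key is so dangerous.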

Development of modern cryptography

The Bombe replicated the action of several Enigma machines wired together. Each of the rapidly rotating drums, pictured above in a Bletchley Park museum mockup, simulated the action of an Enigma rotor.

Even though computation was used to great effect in the cryptanalysis of the Lorenz cipher and other systems during World War II, it also made possible new methods of cryptography orders of magnitude more complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis.[citation needed] The historian David Kahn notes:

Many are the cryptosystems offered by the hundreds of commercial vendors today that cannot be broken by any known methods of cryptanalysis. Indeed, in such systems even a chosen plaintext attack, in which a selected plaintext is matched against its ciphertext, cannot yield the key that unlock[s] other messages. In a sense, then, cryptanalysis is dead. But that is not the end of the story. Cryptanalysis may be dead, but there is - to mix my metaphors - more than one way to skin a cat.


—[23]

Kahn goes on to mention increased opportunities for interception, bugging, side channel attacks, and quantum computers as replacements for the traditional means of cryptanalysis. In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field."[24]

However, any postmortems for cryptanalysis may be premature. While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography:[citation needed]

The block cipher Madryga, proposed in 1984 but not widely used, was found to be susceptible to ciphertext-only attacks in 1998.

FEAL-4, proposed as a replacement for the DES standard encryption algorithm but not widely used, was demolished by a spate of attacks from the academic community, many of which are entirely practical.

The A5/1, A5/2, CMEA, and DECT systems used in mobile and wireless phone technology can all be broken in hours or minutes, or even in real time, using widely available computing equipment.

Brute-force keyspace search has broken some real-world ciphers and applications, including single-DES (see EFF DES cracker), 40-bit "export-strength" cryptography, and the DVD Content Scrambling System.
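
Against a toy cipher with a deliberately tiny keyspace, exhaustive search is trivial; real attacks like the EFF DES cracker apply the same idea to a 2^56 keyspace with special-purpose hardware. A hypothetical Python sketch (the cipher, key, and crib are all made up):

```python
import string
from itertools import product

def encrypt(pt, key):
    """Toy repeating-key shift cipher over A-Z (far weaker than DES)."""
    return "".join(
        chr((ord(c) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
        for i, c in enumerate(pt)
    )

def decrypt(ct, key):
    """Inverse of encrypt: subtract the key shifts instead of adding."""
    return "".join(
        chr((ord(c) - 65 - (ord(key[i % len(key)]) - 65)) % 26 + 65)
        for i, c in enumerate(ct)
    )

def brute_force(ct, crib, key_len=2):
    """Try every key in the 26**key_len keyspace, returning the first one
    whose decryption contains an expected plaintext fragment (crib)."""
    for key in map("".join, product(string.ascii_uppercase, repeat=key_len)):
        if crib in decrypt(ct, key):
            return key
    return None

ct = encrypt("MEETATMIDNIGHT", "QZ")
print(brute_force(ct, "MIDNIGHT"))  # a 676-key search finishes instantly
```

The search cost grows exponentially with key length, which is exactly why 40-bit "export-strength" keys fell to commodity hardware while modern 128-bit keys remain out of reach of exhaustive search.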

In 2001, Wired Equivalent Privacy (WEP), a protocol used to secure Wi-Fi wireless networks, was shown to be breakable in practice because of a weakness in the RC4 cipher and aspects of the WEP design that made related-key attacks practical. WEP was later replaced by Wi-Fi Protected Access.


In 2008, researchers conducted a proof-of-concept break of SSL using weaknesses in the MD5 hash function and certificate issuer practices that made it possible to exploit collision attacks on hash functions. The certificate issuers involved changed their practices to prevent the attack from being repeated.

Thus, while the best modern ciphers may be far more resistant to cryptanalysis than the Enigma, cryptanalysis and the broader field of information security remain quite active.[citation needed]

Symmetric ciphers

Boomerang attack

Brute force attack

Davies' attack

Differential cryptanalysis

Impossible differential cryptanalysis

Improbable differential cryptanalysis

Integral cryptanalysis

Linear cryptanalysis

Meet-in-the-middle attack

Mod-n cryptanalysis

Related-key attack

Sandwich attack

Slide attack

XSL attack

Asymmetric ciphers


Asymmetric cryptography (or public-key cryptography) is cryptography that relies on two mathematically related keys: one private and one public. Such ciphers invariably rely on "hard" mathematical problems as the basis of their security, so an obvious point of attack is to develop methods for solving the problem. The security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely links cryptanalysis to wider mathematical research in a new way.[citation needed]

Asymmetric schemes are designed around the (conjectured) difficulty of solving various mathematical problems. If an improved algorithm can be found to solve the problem, then the system is weakened. For example, the security of the Diffie-Hellman key exchange scheme depends on the difficulty of calculating the discrete logarithm. In 1983, Don Coppersmith found a faster way to find discrete logarithms (in certain groups), thereby requiring cryptographers to use larger groups (or different types of groups). RSA's security depends (in part) upon the difficulty of integer factorization; a breakthrough in factoring would impact the security of RSA.[citation needed]
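
The baby-step giant-step method illustrates how an algorithm better than brute force forces larger groups: it finds a discrete logarithm modulo p in roughly sqrt(p) operations instead of p. This Python sketch uses made-up toy parameters far too small for real security:

```python
from math import isqrt

def bsgs(g, h, p):
    """Baby-step giant-step: solve g**x = h (mod p) in about sqrt(p)
    multiplications, rather than checking all p - 1 exponents."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # g**j for all small j
    factor = pow(g, -m, p)                       # modular inverse of g**m
    gamma = h
    for i in range(m):                           # strip off m exponents at a time
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * factor) % p
    return None

# Toy parameters: a group of about 1000 elements falls in ~32 steps.
p, g, secret = 1019, 2, 345
h = pow(g, secret, p)
x = bsgs(g, h, p)
assert pow(g, x, p) == h
print(x)
```

The square-root speedup is why real Diffie-Hellman groups must be enormously larger than the keyspace of a comparable symmetric cipher; algorithmic advances like Coppersmith's push the required sizes up further still.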

In 1980, one could factor a difficult 50-digit number at an expense of 10^12 elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to a point where a 75-digit number could be factored in 10^12 operations. Advances in computing technology also meant that the operations could be performed much faster. Moore's law predicts that computer speeds will continue to increase. Factoring techniques may continue to improve as well, but will most likely depend on mathematical insight and creativity, neither of which has ever been successfully predictable. 150-digit numbers of the kind once used in RSA have been factored. The effort was greater than above, but was not unreasonable on fast modern computers. By the start of the 21st century, 150-digit numbers were no longer considered a large enough key size for RSA. Numbers with several hundred digits were still considered too hard to factor in 2005, though methods will probably continue to improve over time, requiring key sizes to keep pace or other methods such as elliptic curve cryptography to be used.[citation needed]

Another distinguishing feature of asymmetric schemes is that, unlike attacks on symmetric cryptosystems, any cryptanalysis has the opportunity to make use of knowledge gained from the public key.[25]

Attacking cryptographic hash systems


Birthday attack

Rainbow table

Side-channel attacks

Main article: Side channel attack


Black-bag cryptanalysis

Man-in-the-middle attack

Power analysis

Replay attack

Rubber-hose cryptanalysis

Timing analysis

Quantum computing applications for cryptanalysis

Quantum computers, which are still in the early phases of research, have potential use in cryptanalysis. For example, Shor's algorithm could factor large numbers in polynomial time, in effect breaking some commonly used forms of public-key encryption.[citation needed]

By using Grover's algorithm on a quantum computer, brute-force key search can be made quadratically faster. However, this could be countered by doubling the key length.


Obsoleted by: 4949 INFORMATIONAL

Network Working Group R. Shirey

Request for Comments: 2828 GTE / BBN Technologies

FYI: 36 May 2000

Category: Informational

Internet Security Glossary

Status of this Memo

This memo provides information for the Internet community. It does

not specify an Internet standard of any kind. Distribution of this


memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (2000). All Rights Reserved.

Abstract

This Glossary (191 pages of definitions and 13 pages of references)

provides abbreviations, explanations, and recommendations for use of

information system security terminology. The intent is to improve the

comprehensibility of writing that deals with Internet security,

particularly Internet Standards documents (ISDs). To avoid confusion,

ISDs should use the same term or definition whenever the same concept

is mentioned. To improve international understanding, ISDs should use

terms in their plainest, dictionary sense. ISDs should use terms

established in standards documents and other well-founded

publications and should avoid substituting private or newly made-up

terms. ISDs should avoid terms that are proprietary or otherwise

favor a particular vendor, or that create a bias toward a particular


security technology or mechanism versus other, competing techniques

that already exist or might be developed in the future.

Shirey Informational [Page 1]

RFC 2828 Internet Security Glossary May 2000


Table of Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . .2

2. Explanation of Paragraph Markings . . . . . . . . . . . . . .4

2.1 Recommended Terms with an Internet Basis ("I") . . . . . .4

2.2 Recommended Terms with a Non-Internet Basis ("N") . . . .5

2.3 Other Definitions ("O") . . . . . . . . . . . . . . . . .5

2.4 Deprecated Terms, Definitions, and Uses ("D") . . . . . .6

2.5 Commentary and Additional Guidance ("C") . . . . . . . . .6

3. Definitions . . . . . . . . . . . . . . . . . . . . . . . . .6

4. References . . . . . . . . . . . . . . . . . . . . . . . . . .197

5. Security Considerations . . . . . . . . . . . . . . . . . . .211

6. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . .211

7. Author's Address . . . . . . . . . . . . . . . . . . . . . . .211

8. Full Copyright Statement . . . . . . . . . . . . . . . . . . .212


1. Introduction

This Glossary provides an internally consistent, complementary set of

abbreviations, definitions, explanations, and recommendations for use

of terminology related to information system security. The intent of

this Glossary is to improve the comprehensibility of Internet

Standards documents (ISDs)--i.e., RFCs, Internet-Drafts, and other

material produced as part of the Internet Standards Process [R2026]--

and of all other Internet material, too. Some non-security terms are

included to make the Glossary self-contained, but more complete lists

of networking terms are available elsewhere [R1208, R1983].

Some glossaries (e.g., [Raym]) list terms that are not listed here

but could be applied to Internet security. However, those terms have

not been included in this Glossary because they are not appropriate

for ISDs.

This Glossary marks terms and definitions as being either endorsed or


deprecated for use in ISDs, but this Glossary is not an Internet

standard. The key words "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY",

and "OPTIONAL" are intended to be interpreted the same way as in an

Internet Standard [R2119], but this guidance represents only the

recommendations of this author. However, this Glossary includes

reasons for the recommendations--particularly for the SHOULD NOTs--so

that readers can judge for themselves whether to follow the

recommendations.


This Glossary supports the goals of the Internet Standards Process:

o Clear, Concise, and Easily Understood Documentation

This Glossary seeks to improve comprehensibility of security-

related content of ISDs. That requires wording to be clear and

understandable, and requires the set of security-related terms and

definitions to be consistent and self-supporting. Also, the

terminology needs to be uniform across all ISDs; i.e., the same

term or definition needs to be used whenever and wherever the same

concept is mentioned. Harmonization of existing ISDs need not be

done immediately, but it is desirable to correct and standardize

the terminology when new versions are issued in the normal course

of standards development and evolution.

o Technical Excellence

Just as Internet Standard (STD) protocols should operate

effectively, ISDs should use terminology accurately, precisely,


and unambiguously to enable Internet Standards to be implemented

correctly.

o Prior Implementation and Testing

Just as STD protocols require demonstrated experience and

stability before adoption, ISDs need to use well-established

language. Using terms in their plainest, dictionary sense (when

appropriate) helps to ensure international understanding. ISDs

need to avoid using private, made-up terms in place of generally-

accepted terms from standards and other publications. ISDs need to

avoid substituting new definitions that conflict with established

ones. ISDs need to avoid using "cute" synonyms (e.g., see: Green

Book); no matter how popular a nickname may be in one community,

it is likely to cause confusion in another.

o Openness, Fairness, and Timeliness

ISDs need to avoid terms that are proprietary or otherwise favor a

particular vendor, or that create a bias toward a particular


security technology or mechanism over other, competing techniques

that already exist or might be developed in the future. The set of

terminology used across the set of ISDs needs to be flexible and

adaptable as the state of Internet security art evolves.


2. Explanation of Paragraph Markings

Section 3 marks terms and definitions as follows:


o Capitalization: Only terms that are proper nouns are capitalized.

o Paragraph Marking: Definitions and explanations are stated in

paragraphs that are marked as follows:

- "I" identifies a RECOMMENDED Internet definition.

- "N" identifies a RECOMMENDED non-Internet definition.

- "O" identifies a definition that is not recommended as the first

choice for Internet documents but is something that authors of

Internet documents need to know.

- "D" identifies a term or definition that SHOULD NOT be used in

Internet documents.

- "C" identifies commentary or additional usage guidance.

The rest of Section 2 further explains these five markings.

2.1 Recommended Terms with an Internet Basis ("I")

The paragraph marking "I" (as opposed to "O") indicates a definition

that SHOULD be the first choice for use in ISDs. Most terms and


definitions of this type MAY be used in ISDs; however, some "I"

definitions are accompanied by a "D" paragraph that recommends

against using the term. Also, some "I" definitions are preceded by an

indication of a contextual usage limitation (e.g., see:

certification), and ISDs SHOULD NOT use the term and definition outside

that context.

An "I" (as opposed to an "N") also indicates that the definition has

an Internet basis. That is, either the Internet Standards Process is

authoritative for the term, or the term is sufficiently generic that

this Glossary can freely state a definition without contradicting a

non-Internet authority (e.g., see: attack).

Many terms with "I" definitions are proper nouns (e.g., see:

Internet Protocol). For such terms, the "I" definition is intended

only to provide basic information; the authoritative definition is

found elsewhere.

For a proper noun identified as an "Internet protocol", please refer


to the current edition of "Internet Official Protocol Standards" (STD

1) for the standardization state and status of the protocol.


2.2 Recommended Terms with a Non-Internet Basis ("N")

The paragraph marking "N" (as opposed to "O") indicates a definition

that SHOULD be the first choice for the term, if the term is used at

all in Internet documents. Terms and definitions of this type MAY be

used in Internet documents (e.g., see: X.509 public-key certificate).


However, an "N" (as opposed to an "I") also indicates a definition

that has a non-Internet basis or origin. Many such definitions are

preceded by an indication of a contextual usage limitation, and this

Glossary's endorsement does not apply outside that context. Also,

some contexts are rarely if ever expected to occur in an Internet

document (e.g., see: baggage). In those cases, the listing exists to

make Internet authors aware of the non-Internet usage so that they

can avoid conflicts with non-Internet documents.

Many terms with "N" definitions are proper nouns (e.g., see:

Computer Security Objects Register). For such terms, the "N"

definition is intended only to provide basic information; the

authoritative definition is found elsewhere.

2.3 Other Definitions ("O")

The paragraph marking "O" indicates a definition that has a non-

Internet basis, but indicates that the definition SHOULD NOT be used


in ISDs *except* in cases where the term is specifically identified

as non-Internet.

For example, an ISD might mention "BCA" (see: brand certification

authority) or "baggage" as an example to illustrate some concept; in

that case, the document should specifically say "SET(trademark) BCA"

or "SET(trademark) baggage" and include the definition of the term.

For some terms that have a definition published by a non-Internet

authority--government (see: object reuse), industry (see: Secure Data

Exchange), national (see: Data Encryption Standard), or international

(see: data confidentiality)--this Glossary marks the definition "N",

recommending its use in Internet documents. In other cases, the non-

Internet definition of a term is inadequate or inappropriate for

ISDs. For example, it may be narrow or outdated, or it may need

clarification by substituting more careful or more explanatory

wording using other terms that are defined in this Glossary. In those

cases, this Glossary marks the term "O" and provides an "I"

definition (or sometimes a different "N" definition), which precedes


and supersedes the definition marked "O".


In most of the cases where this Glossary provides a definition to

supersede one from a non-Internet standard, the substitute is

intended to subsume the meaning of the superseded "O" definition and

not conflict with it. For the term "security service", for example,

the "O" definition deals narrowly with only communication services

provided by layers in the OSI model and is inadequate for the full

range of ISD usage; the "I" definition can be used in more situations


and for more kinds of service. However, the "O" definition is also

provided here so that ISD authors will be aware of the context in

which the term is used more narrowly.

When making substitutions, this Glossary attempts to use

understandable English that does not contradict any non-Internet

authority. Still, terminology differs between the standards of the

American Bar Association, OSI, SET, the U.S. Department of Defense,

and other authorities, and this Glossary probably is not exactly

aligned with all of them.

2.4 Deprecated Terms, Definitions, and Uses ("D")

If this Glossary recommends that a term or definition SHOULD NOT be

used in ISDs, then either the definition has the paragraph marking

"D", or the restriction is stated in a "D" paragraph that immediately

follows the term or definition.

2.5 Commentary and Additional Guidance ("C")


The paragraph marking "C" identifies text that is advisory or

tutorial. This text MAY be reused in other Internet documents. This

text is not intended to be authoritative, but is provided to clarify

the definitions and to enhance this Glossary so that Internet

security novices can use it as a tutorial.

3. Definitions

Note: Each acronym or other abbreviation (except items of common

English usage, such as "e.g.", "etc.", "i.e.", "vol.", "pp.", "U.S.")

that is used in this Glossary, either in a definition or as a subpart

of a defined term, is also defined in this Glossary.

$ 3DES

See: triple DES.

$ *-property

(N) (Pronounced "star property".) See: "confinement property"

under Bell-LaPadula Model.


$ ABA Guidelines

(N) "American Bar Association (ABA) Digital Signature Guidelines"

[ABA], a framework of legal principles for using digital

signatures and digital certificates in electronic commerce.

$ Abstract Syntax Notation One (ASN.1)

(N) A standard for describing data objects. [X680]

(C) OSI standards use ASN.1 to specify data formats for protocols.

OSI defines functionality in layers. Information objects at higher

layers are abstractly defined to be implemented with objects at

lower layers. A higher layer may define transfers of abstract

objects between computers, and a lower layer may define transfers


concretely as strings of bits. Syntax is needed to define abstract

objects, and encoding rules are needed to transform between

abstract objects and bit strings. (See: Basic Encoding Rules.)

(C) In ASN.1, formal names are written without spaces, and

separate words in a name are indicated by capitalizing the first

letter of each word except the first word. For example, the name

of a CRL is "certificateRevocationList".

$ ACC

See: access control center.

$ access

(I) The ability and means to communicate with or otherwise

interact with a system in order to use system resources to either

handle information or gain knowledge of the information the system

contains.

(O) "A specific type of interaction between a subject and an

object that results in the flow of information from one to the

other." [NCS04]


(C) In this Glossary, "access" is intended to cover any ability to

communicate with a system, including one-way communication in

either direction. In actual practice, however, entities outside a

security perimeter that can receive output from the system but

cannot provide input or otherwise directly interact with the

system, might be treated as not having "access" and, therefore, be

exempt from security policy requirements, such as the need for a

security clearance.

$ access control

(I) Protection of system resources against unauthorized access; a

process by which use of system resources is regulated according to

a security policy and is permitted by only authorized entities


(users, programs, processes, or other systems) according to that

policy. (See: access, access control service.)

(O) "The prevention of unauthorized use of a resource, including

the prevention of use of a resource in an unauthorized manner."

[I7498 Part 2]

$ access control center (ACC)

(I) A computer containing a database with entries that define a

security policy for an access control service.

(C) An ACC is sometimes used in conjunction with a key center to

implement access control in a key distribution system for

symmetric cryptography.

$ access control list (ACL)

(I) A mechanism that implements access control for a system

resource by enumerating the identities of the system entities that

are permitted to access the resource. (See: capability.)


$ access control service

(I) A security service that protects against a system entity using

a system resource in a way not authorized by the system's security

policy; in short, protection of system resources against

unauthorized access. (See: access control, discretionary access

control, identity-based security policy, mandatory access control,

rule-based security policy.)

(C) This service includes protecting against use of a resource in

an unauthorized manner by an entity that is authorized to use the

resource in some other manner. The two basic mechanisms for

implementing this service are ACLs and tickets.

$ access mode

(I) A distinct type of data processing operation--e.g., read,

write, append, or execute--that a subject can potentially perform

on an object in a computer system.

$ accountability

(I) The property of a system (including all of its system


resources) that ensures that the actions of a system entity may be

traced uniquely to that entity, which can be held responsible for

its actions. (See: audit service.)

(C) Accountability permits detection and subsequent investigation

of security breaches.


$ accredit

$ accreditation

(I) An administrative declaration by a designated authority that

an information system is approved to operate in a particular

security configuration with a prescribed set of safeguards.


[FP102] (See: certification.)

(C) An accreditation is usually based on a technical certification

of the system's security mechanisms. The terms "certification" and

"accreditation" are used more in the U.S. Department of Defense

and other government agencies than in commercial organizations.

However, the concepts apply any place where managers are required

to deal with and accept responsibility for security risks. The

American Bar Association is developing accreditation criteria for

CAs.

$ ACL

See: access control list.

$ acquirer

(N) SET usage: "The financial institution that establishes an

account with a merchant and processes payment card authorizations

and payments." [SET1]

(O) "The institution (or its agent) that acquires from the card


acceptor the financial data relating to the transaction and

initiates that data into an interchange system." [SET2]

$ active attack

See: (secondary definition under) attack.

$ active wiretapping

See: (secondary definition under) wiretapping.

$ add-on security

(I) "The retrofitting of protection mechanisms, implemented by

hardware or software, after the [automatic data processing] system

has become operational." [FP039]

$ administrative security

(I) Management procedures and constraints to prevent unauthorized

access to a system. (See: security architecture.)

(O) "The management constraints, operational procedures,

accountability procedures, and supplemental controls established

to provide an acceptable level of protection for sensitive data."

[FP039]


(C) Examples include clear delineation and separation of duties,

and configuration control.

$ Advanced Encryption Standard (AES)

(N) A future FIPS publication being developed by NIST to succeed

DES. Intended to specify an unclassified, publicly-disclosed,

symmetric encryption algorithm, available royalty-free worldwide.

$ adversary

(I) An entity that attacks, or is a threat to, a system.

$ aggregation

(I) A circumstance in which a collection of information items is


required to be classified at a higher security level than any of

the individual items that comprise it.

$ AH

See: Authentication Header

$ algorithm

(I) A finite set of step-by-step instructions for a problem-

solving or computation procedure, especially one that can be

implemented by a computer. (See: cryptographic algorithm.)

$ alias

(I) A name that an entity uses in place of its real name, usually

for the purpose of either anonymity or deception.

$ American National Standards Institute (ANSI)

(N) A private, not-for-profit association of users, manufacturers,

and other organizations, that administers U.S. private sector

voluntary standards.

(C) ANSI is the sole U.S. representative to the two major non-

treaty international standards organizations, ISO and, via the

U.S. National Committee (USNC), the International Electrotechnical


Commission (IEC).

$ anonymous

(I) The condition of having a name that is unknown or concealed.

(See: anonymous login.)

(C) An application may require security services that maintain

anonymity of users or other system entities, perhaps to preserve

their privacy or hide them from attack. To hide an entity's real

name, an alias may be used. For example, a financial institution

may assign an account number. Parties to a transaction can thus

remain relatively anonymous, but can also accept the transaction


as legitimate. Real names of the parties cannot be easily

determined by observers of the transaction, but an authorized

third party may be able to map an alias to a real name, such as by

presenting the institution with a court order. In other

applications, anonymous entities may be completely untraceable.

$ anonymous login

(I) An access control feature (or, rather, an access control

weakness) in many Internet hosts that enables users to gain access

to general-purpose or public services and resources on a host

(such as allowing any user to transfer data using File Transfer

Protocol) without having a pre-established, user-specific account

(i.e., user name and secret password).

(C) This feature exposes a system to more threats than when all

the users are known, pre-registered entities that are individually

accountable for their actions. A user logs in using a special,

publicly known user name (e.g., "anonymous", "guest", or "ftp").

To use the public login name, the user is not required to know a


secret password and may not be required to input anything at all

except the name. In other cases, to complete the normal sequence

of steps in a login protocol, the system may require the user to

input a matching, publicly known password (such as "anonymous") or

may ask the user for an e-mail address or some other arbitrary

character string.

$ APOP

See: POP3 APOP.

$ archive

(I) (1.) Noun: A collection of data that is stored for a

relatively long period of time for historical and other purposes,

such as to support audit service, availability service, or system

integrity service. (See: backup.) (2.) Verb: To store data in such

a way. (See: back up.)

(C) A digital signature may need to be verified many years after

the signing occurs. The CA--the one that issued the certificate


containing the public key needed to verify that signature--may not

stay in operation that long. So every CA needs to provide for

long-term storage of the information needed to verify the

signatures of those to whom it issues certificates.

$ ARPANET

(N) Advanced Research Projects Agency Network, a pioneer packet-

switched network that was built in the early 1970s under contract

to the U.S. Government, led to the development of today's

Internet, and was decommissioned in June 1990.


$ ASN.1

See: Abstract Syntax Notation One.


$ association

(I) A cooperative relationship between system entities, usually

for the purpose of transferring information between them. (See:

security association.)

$ assurance

(I) (1.) An attribute of an information system that provides

grounds for having confidence that the system operates such that

the system security policy is enforced. (2.) A procedure that

ensures a system is developed and operated as intended by the

system's security policy.

$ assurance level

(I) Evaluation usage: A specific level on a hierarchical scale

representing successively increased confidence that a target of

evaluation adequately fulfills the requirements. (E.g., see:

TCSEC.)

$ asymmetric cryptography

(I) A modern branch of cryptography (popularly known as "public-

key cryptography") in which the algorithms employ a pair of keys


(a public key and a private key) and use a different component of

the pair for different steps of the algorithm. (See: key pair.)

(C) Asymmetric algorithms have key management advantages over

equivalently strong symmetric ones. First, one key of the pair

does not need to be known by anyone but its owner; so it can more

easily be kept secret. Second, although the other key of the pair

is shared by all entities that use the algorithm, that key does

not need to be kept secret from other, non-using entities; so the

key distribution part of key management can be done more easily.

(C) For encryption: In an asymmetric encryption algorithm (e.g.,

see: RSA), when Alice wants to ensure confidentiality for data she

sends to Bob, she encrypts the data with a public key provided by

Bob. Only Bob has the matching private key that is needed to

decrypt the data.
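
(C) The confidentiality flow can be illustrated with textbook RSA on
deliberately tiny numbers. This is only a sketch of the key asymmetry;
real deployments use moduli of 2048 bits or more and a padding scheme
such as OAEP.

```python
# Textbook RSA with toy parameters (p = 61, q = 53) -- illustration only.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # Bob's public exponent
d = pow(e, -1, phi)        # Bob's private exponent: 2753 (Python 3.8+)

# Alice encrypts with Bob's PUBLIC key (e, n) ...
plaintext = 65
ciphertext = pow(plaintext, e, n)

# ... and only Bob's matching PRIVATE key (d, n) recovers the data.
recovered = pow(ciphertext, d, n)
print(recovered == plaintext)  # True
```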

(C) For signature: In an asymmetric digital signature algorithm

(e.g., see: DSA), when Alice wants to ensure data integrity or


provide authentication for data she sends to Bob, she uses her

private key to sign the data (i.e., create a digital signature

based on the data). To verify the signature, Bob uses the matching

public key that Alice has provided.
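
(C) The signature flow is the mirror image: Alice applies her private
exponent, and anyone holding her public exponent can verify. The toy
parameters below are illustrative, and real schemes first hash and pad
the data rather than signing it directly.

```python
# Toy RSA signature (same small parameters as the encryption sketch).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                # Alice's public exponent (published)
d = pow(e, -1, phi)   # Alice's private exponent (kept secret)

message = 42
signature = pow(message, d, n)  # Alice signs with her PRIVATE key

# Bob verifies with Alice's PUBLIC key: the signature must invert to the
# message under the public exponent.
print(pow(signature, e, n) == message)  # True
```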


(C) For key agreement: In an asymmetric key agreement algorithm

(e.g., see: Diffie-Hellman), Alice and Bob each send their own

public key to the other person. Then each uses their own private

key and the other's public key to compute the new key value.
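
(C) The key agreement exchange can be sketched with a tiny
Diffie-Hellman group; the modulus and private values here are
illustrative only, whereas real use requires large, carefully chosen
groups.

```python
# Toy Diffie-Hellman key agreement over a tiny prime field.
p, g = 23, 5                 # public parameters: modulus and generator

a = 6                        # Alice's private value
b = 15                       # Bob's private value
A = pow(g, a, p)             # Alice sends A to Bob
B = pow(g, b, p)             # Bob sends B to Alice

# Each side combines its own private value with the other's public value;
# both arrive at g^(a*b) mod p without ever transmitting it.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
print(alice_key == bob_key)  # True
```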

$ attack

(I) An assault on system security that derives from an intelligent

threat, i.e., an intelligent act that is a deliberate attempt


(especially in the sense of a method or technique) to evade

security services and violate the security policy of a system.

(See: penetration, violation, vulnerability.)

- Active vs. passive: An "active attack" attempts to alter system

resources or affect their operation. A "passive attack"

attempts to learn or make use of information from the system

but does not affect system resources. (E.g., see: wiretapping.)

- Insider vs. outsider: An "inside attack" is an attack initiated

by an entity inside the security perimeter (an "insider"),

i.e., an entity that is authorized to access system resources

but uses them in a way not approved by those who granted the

authorization. An "outside attack" is initiated from outside

the perimeter, by an unauthorized or illegitimate user of the

system (an "outsider"). In the Internet, potential outside

attackers range from amateur pranksters to organized criminals,

international terrorists, and hostile governments.


(C) The term "attack" relates to some other basic security terms

as shown in the following diagram:

+ - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+
| An Attack:              |  |Counter- |  | A System Resource:   |
| i.e., A Threat Action   |  | measure |  | Target of the Attack |
| +----------+            |  |         |  | +-----------------+  |
| | Attacker |<==================||<=========                 |  |
| |   i.e.,  |   Passive  |  |         |  | |  Vulnerability  |  |
| | A Threat |<=================>||<========>                 |  |
| |  Agent   |  or Active |  |         |  | +-------|||-------+  |
| +----------+   Attack   |  |         |  |         VVV          |
|                         |  |         |  | Threat Consequences  |
+ - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+

$ attribute authority

(I) A CA that issues attribute certificates.


(O) "An authority, trusted by the verifier to delegate privilege,

which issues attribute certificates." [FPDAM]


$ attribute certificate

(I) A digital certificate that binds a set of descriptive data

items, other than a public key, either directly to a subject name

or to the identifier of another certificate that is a public-key

certificate. [X509]

(O) "A set of attributes of a user together with some other

information, rendered unforgeable by the digital signature created

using the private key of the CA which issued it." [X509]


(O) "A data structure that includes some attribute values and

identification information about the owner of the attribute

certificate, all digitally signed by an Attribute Authority. This

authority's signature serves as the guarantee of the binding

between the attributes and their owner." [FPDAM]

(C) A public-key certificate binds a subject name to a public key

value, along with information needed to perform certain

cryptographic functions. Other attributes of a subject, such as a

security clearance, may be certified in a separate kind of digital

certificate, called an attribute certificate. A subject may have

multiple attribute certificates associated with its name or with

each of its public-key certificates.

(C) An attribute certificate might be issued to a subject in the

following situations:

- Different lifetimes: When the lifetime of an attribute binding

is shorter than that of the related public-key certificate, or


when it is desirable not to need to revoke a subject's public

key just to revoke an attribute.

- Different authorities: When the authority responsible for the

attributes is different than the one that issues the public-key

certificate for the subject. (There is no requirement that an

attribute certificate be issued by the same CA that issued the

associated public-key certificate.)

$ audit service

(I) A security service that records information needed to

establish accountability for system events and for the actions of

system entities that cause them. (See: security audit.)

$ audit trail

See: security audit trail.


$ AUTH

See: POP3 AUTH.

$ authentic signature

(I) A signature (particularly a digital signature) that can be

trusted because it can be verified. (See: validate vs. verify.)

$ authenticate

(I) Verify (i.e., establish the truth of) an identity claimed by

or for a system entity. (See: authentication.)

(D) In general English usage, this term usually means "to prove

genuine" (e.g., an art expert authenticates a Michelangelo

painting). But the recommended definition carries a much narrower

meaning. For example, to be precise, an ISD SHOULD NOT say "the


host authenticates each received datagram". Instead, the ISD

SHOULD say "the host authenticates the origin of each received

datagram". In most cases, we also can say "and verifies the

datagram's integrity", because that is usually implied. (See:

("relationship between data integrity service and authentication

services" under) data integrity service.)

(D) ISDs SHOULD NOT talk about authenticating a digital signature

or digital certificate. Instead, we "sign" and then "verify"

digital signatures, and we "issue" and then "validate" digital

certificates. (See: validate vs. verify.)

$ authentication

(I) The process of verifying an identity claimed by or for a

system entity. (See: authenticate, authentication exchange,

authentication information, credential, data origin

authentication, peer entity authentication.)

(C) An authentication process consists of two steps:

1. Identification step: Presenting an identifier to the security

system. (Identifiers should be assigned carefully, because

authenticated identities are the basis for other security

services, such as access control service.)


2. Verification step: Presenting or generating authentication

information that corroborates the binding between the entity

and the identifier. (See: verification.)
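
(C) The two steps above can be sketched with a salted password hash
standing in for the verification step. Account names and passwords are
illustrative, and real systems use a dedicated password-hashing
function (e.g., PBKDF2, scrypt, or Argon2) rather than a single SHA-256
pass.

```python
import hashlib
import hmac
import os

# Enrollment: store a per-account salt and salted hash, never the password.
salt = os.urandom(16)
accounts = {"alice": (salt, hashlib.sha256(salt + b"correct horse").digest())}

def authenticate(identifier: str, password: bytes) -> bool:
    # 1. Identification step: look up the presented identifier.
    record = accounts.get(identifier)
    if record is None:
        return False
    # 2. Verification step: corroborate the binding between the entity and
    #    the identifier via the stored authentication information.
    stored_salt, stored_hash = record
    candidate = hashlib.sha256(stored_salt + password).digest()
    return hmac.compare_digest(candidate, stored_hash)

print(authenticate("alice", b"correct horse"))  # True
print(authenticate("alice", b"wrong guess"))    # False
```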

(C) See: ("relationship between data integrity service and

authentication services" under) data integrity service.


$ authentication code

(D) ISDs SHOULD NOT use this term as a synonym for any form of

checksum, whether cryptographic or not. The word "authentication"

is misleading because the mechanism involved usually serves a data


integrity function rather than an authentication function, and the

word "code" is misleading because it implies that either encoding

or encryption is involved or that the term refers to computer

software. (See: message authentication code.)

$ authentication exchange

(I) A mechanism to verify the identity of an entity by means of

information exchange.

(O) "A mechanism intended to ensure the identity of an entity by

means of information exchange." [I7498 Part 2]

$ Authentication Header (AH)

(I) An Internet IPsec protocol [R2402] designed to provide

connectionless data integrity service and data origin

authentication service for IP datagrams, and (optionally) to

provide protection against replay attacks.

(C) Replay protection may be selected by the receiver when a

security association is established. AH authenticates upper-layer

protocol data units and as much of the IP header as possible.

However, some IP header fields may change in transit, and the


value of these fields, when the packet arrives at the receiver,

may not be predictable by the sender. Thus, the values of such

fields cannot be protected end-to-end by AH; protection of the IP

header by AH is only partial when such fields are present.

(C) AH may be used alone, or in combination with the IPsec ESP

protocol, or in a nested fashion with tunneling. Security services

can be provided between a pair of communicating hosts, between a

pair of communicating security gateways, or between a host and a

gateway. ESP can provide the same security services as AH, and ESP

can also provide data confidentiality service. The main difference

between authentication services provided by ESP and AH is the

extent of the coverage; ESP does not protect IP header fields

unless they are encapsulated by AH.

$ authentication information

(I) Information used to verify an identity claimed by or for an

entity. (See: authentication, credential.)

(C) Authentication information may exist as, or be derived from,


one of the following:


- Something the entity knows. (See: password).

- Something the entity possesses. (See: token.)

- Something the entity is. (See: biometric authentication.)

$ authentication service

(I) A security service that verifies an identity claimed by or for

an entity. (See: authentication.)

(C) In a network, there are two general forms of authentication

service: data origin authentication service and peer entity

authentication service.


$ authenticity

(I) The property of being genuine and able to be verified and be

trusted. (See: authenticate, authentication, validate vs. verify)

$ authority

(D) "An entity, responsible for the issuance of certificates."

[FPDAM]

(C) ISDs SHOULD NOT use this term as a synonym for AA, CA, RA,

ORA, or similar terms, because it may cause confusion. Instead,

use the full term at the first instance of usage and then, if it

is necessary to shorten text, use the style of abbreviation

defined in this Glossary.

(C) ISDs SHOULD NOT use this definition for any PKI entity,

because the definition is ambiguous with regard to whether the

entity actually issues certificates (e.g., attribute authority or

certification authority) or just has accountability for processes

that precede or follow signing (e.g., registration authority).

(See: issue.)

$ authority certificate


(D) "A certificate issued to an authority (e.g. either to a

certification authority or to an attribute authority)." [FPDAM]

(See: authority.)

(C) ISDs SHOULD NOT use this term or definition because they are

ambiguous with regard to which specific types of PKI entities they

address.

$ authority revocation list (ARL)

(I) A data structure that enumerates digital certificates that

were issued to CAs but have been invalidated by their issuer prior

to when they were scheduled to expire. (See: certificate

expiration, X.509 authority revocation list.)


(O) "A revocation list containing a list of public-key

certificates issued to authorities, which are no longer considered

valid by the certificate issuer." [FPDAM]

$ authorization

$ authorize

(I) (1.) An "authorization" is a right or a permission that is

granted to a system entity to access a system resource. (2.) An

"authorization process" is a procedure for granting such rights.

(3.) To "authorize" means to grant such a right or permission.

(See: privilege.)

(O) SET usage: "The process by which a properly appointed person

or persons grants permission to perform some action on behalf of

an organization. This process assesses transaction risk, confirms

that a given transaction does not raise the account holder's debt

above the account's credit limit, and reserves the specified

amount of credit. (When a merchant obtains authorization, payment

for the authorized amount is guaranteed--provided, of course, that


the merchant followed the rules associated with the authorization

process.)" [SET2]

$ automated information system

(I) An organized assembly of resources and procedures--i.e.,

computing and communications equipment and services, with their

supporting facilities and personnel--that collect, record,

process, store, transport, retrieve, or display information to

accomplish a specified set of functions.

$ availability

(I) The property of a system or a system resource being accessible

and usable upon demand by an authorized system entity, according

to performance specifications for the system; i.e., a system is

available if it provides services according to the system design

whenever users request them. (See: critical, denial of service,

reliability, survivability.)

(O) "The property of being accessible and usable upon demand by an

authorized entity." [I7498 Part 2]


$ availability service

(I) A security service that protects a system to ensure its

availability.

(C) This service addresses the security concerns raised by denial-

of-service attacks. It depends on proper management and control of

system resources, and thus depends on access control service and

other security services.


$ back door

(I) A hardware or software mechanism that (a) provides access to a

system and its resources by other than the usual procedure, (b)


was deliberately left in place by the system's designers or

maintainers, and (c) usually is not publicly known. (See: trap

door.)

(C) For example, a way to access a computer other than through a

normal login. Such access paths do not necessarily have malicious

intent; e.g., operating systems sometimes are shipped by the

manufacturer with privileged accounts intended for use by field

service technicians or the vendor's maintenance programmers. (See:

trap door.)

$ back up vs. backup

(I) Verb "back up": To store data for the purpose of creating a

backup copy. (See: archive.)

(I) Noun/adjective "backup": (1.) A reserve copy of data that is

stored separately from the original, for use if the original

becomes lost or damaged. (See: archive.) (2.) Alternate means to

permit performance of system functions despite a disaster to

system resources. (See: contingency plan.)


$ baggage

(D) ISDs SHOULD NOT use this term to describe a data element

except when stated as "SET(trademark) baggage" with the following

meaning:

(O) SET usage: An "opaque encrypted tuple, which is included in a

SET message but appended as external data to the PKCS encapsulated

data. This avoids superencryption of the previously encrypted

tuple, but guarantees linkage with the PKCS portion of the

message." [SET2]

$ bandwidth

(I) Commonly used to mean the capacity of a communication channel

to pass data through the channel in a given amount of time.

Usually expressed in bits per second.

$ bank identification number (BIN)

(N) The digits of a credit card number that identify the issuing

bank. (See: primary account number.)

(O) SET usage: The first six digits of a primary account number.


$ Basic Encoding Rules (BER)

(I) A standard for representing ASN.1 data types as strings of

octets. [X690] (See: Distinguished Encoding Rules.)

$ bastion host

(I) A strongly protected computer that is in a network protected

by a firewall (or is part of a firewall) and is the only host (or

one of only a few hosts) in the network that can be directly

accessed from networks on the other side of the firewall.

(C) Filtering routers in a firewall typically restrict traffic


from the outside network to reaching just one host, the bastion

host, which usually is part of the firewall. Since only this one

host can be directly attacked, only this one host needs to be very

strongly protected, so security can be maintained more easily and

less expensively. However, to allow legitimate internal and

external users to access application resources through the

firewall, higher layer protocols and services need to be relayed

and forwarded by the bastion host. Some services (e.g., DNS and

SMTP) have forwarding built in; other services (e.g., TELNET and

FTP) require a proxy server on the bastion host.

$ BCA

See: brand certification authority.

$ BCI

See: brand CRL identifier.

$ Bell-LaPadula Model

(N) A formal, mathematical, state-transition model of security

policy for multilevel-secure computer systems. [Bell]


(C) The model separates computer system elements into a set of

subjects and a set of objects. To determine whether or not a

subject is authorized for a particular access mode on an object,

the clearance of the subject is compared to the classification of

the object. The model defines the notion of a "secure state", in

which the only permitted access modes of subjects to objects are

in accordance with a specified security policy. It is proven that

each state transition preserves security by moving from secure

state to secure state, thereby proving that the system is secure.

(C) In this model, a multilevel-secure system satisfies several

rules, including the following:


- "Confinement property" (also called "*-property", pronounced

"star property"): A subject has write access to an object only

if classification of the object dominates the clearance of the

subject.

- "Simple security property": A subject has read access to an

object only if the clearance of the subject dominates the

classification of the object.

- "Tranquillity property": The classification of an object does

not change while the object is being processed by the system.
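
(C) The simple security property and the confinement property can be
sketched as two comparisons over a linear ordering of levels (the level
names below are illustrative):

```python
# Bell-LaPadula access checks over a linear classification lattice.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_clearance: str, object_classification: str) -> bool:
    # Simple security property ("no read up"): read access only if the
    # subject's clearance dominates the object's classification.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance: str, object_classification: str) -> bool:
    # Confinement (*-)property ("no write down"): write access only if the
    # object's classification dominates the subject's clearance.
    return LEVELS[object_classification] >= LEVELS[subject_clearance]

print(can_read("secret", "confidential"))   # True: reading down is allowed
print(can_write("secret", "confidential"))  # False: writing down is forbidden
```

Together the two rules prevent information from flowing from a higher
classification to a lower one through any subject.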

$ BER

See: Basic Encoding Rules.

$ beyond A1


(O) (1.) Formally, a level of security assurance that is beyond

the highest level of criteria specified by the TCSEC. (2.)

Informally, a level of trust so high that it cannot be provided or

verified by currently available assurance methods, and

particularly not by currently available formal methods.

$ BIN

See: bank identification number.

$ bind

(I) To inseparably associate by applying some mechanism, such as

when a CA uses a digital signature to bind together a subject and

a public key in a public-key certificate.

$ biometric authentication

(I) A method of generating authentication information for a person

by digitizing measurements of a physical characteristic, such as a

fingerprint, a hand shape, a retina pattern, a speech pattern

(voiceprint), or handwriting.

$ bit


(I) The smallest unit of information storage; a contraction of the

term "binary digit"; one of two symbols--"0" (zero) and "1" (one)

--that are used to represent binary numbers.

$ BLACK

(I) Designation for information system equipment or facilities

that handle (and for data that contains) only ciphertext (or,

depending on the context, only unclassified information), and for

such data itself. This term derives from U.S. Government COMSEC

terminology. (See: RED, RED/BLACK separation.)


$ block cipher


(I) An encryption algorithm that breaks plaintext into fixed-size

segments and uses the same key to transform each plaintext segment

into a fixed-size segment of ciphertext. (See: mode, stream

cipher.)

(C) For example, Blowfish, DEA, IDEA, RC2, and SKIPJACK. However,

a block cipher can be adapted to have a different external

interface, such as that of a stream cipher, by using a mode of

operation to "package" the basic algorithm.
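
(C) Counter (CTR) mode shows how a mode of operation repackages a
fixed-size primitive as a stream interface: the primitive encrypts a
counter, and the result is XORed with the data. The sketch below uses
SHA-256 as a stand-in for the block cipher's keyed transform, so it
demonstrates the structure of the mode, not a vetted cipher.

```python
import hashlib

def ctr_keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """CTR-style processing: derive a keystream block per counter value and
    XOR it with the corresponding segment of the data. Applying the same
    operation twice with the same key and nonce decrypts."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        counter = (offset // 32).to_bytes(8, "big")
        # SHA-256(key || nonce || counter) stands in for E_k(nonce || counter).
        keystream = hashlib.sha256(key + nonce + counter).digest()
        segment = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(segment, keystream))
    return bytes(out)

key, nonce = b"toy key", b"nonce-01"
ct = ctr_keystream_xor(key, nonce, b"attack at dawn")
print(ctr_keystream_xor(key, nonce, ct))  # round-trips to the plaintext
```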

$ Blowfish

(N) A symmetric block cipher with variable-length key (32 to 448

bits) designed in 1993 by Bruce Schneier as an unpatented,

license-free, royalty-free replacement for DES or IDEA. [Schn]

$ brand

(I) A distinctive mark or name that identifies a product or

business entity.

(O) SET usage: The name of a payment card. Financial institutions

and other companies have founded payment card brands, protect and

advertise the brands, establish and enforce rules for use and


acceptance of their payment cards, and provide networks to

interconnect the financial institutions. These brands combine the

roles of issuer and acquirer in interactions with cardholders and

merchants. [SET1]

$ brand certification authority (BCA)

(O) SET usage: A CA owned by a payment card brand, such as

MasterCard, Visa, or American Express. [SET2] (See: certification

hierarchy, SET.)

$ brand CRL identifier (BCI)

(O) SET usage: A digitally signed list, issued by a BCA, of the

names of CAs for which CRLs need to be processed when verifying

signatures in SET messages. [SET2]

$ break

(I) Cryptographic usage: To successfully perform cryptanalysis and

thus succeed in decrypting data or performing some other

cryptographic function, without initially having knowledge of the

key that the function requires. (This term applies to encrypted


data or, more generally, to a cryptographic algorithm or

cryptographic system.)


$ bridge

(I) A computer that is a gateway between two networks (usually two

LANs) at OSI layer 2. (See: router.)

$ British Standard 7799

(N) Part 1 is a standard code of practice and provides guidance on

how to secure an information system. Part 2 specifies the

management framework, objectives, and control requirements for

information security management systems [B7799]. The certification

scheme works like ISO 9000. It is in use in the UK, the

Netherlands, Australia, and New Zealand and might be proposed as

an ISO standard or adapted to be part of the Common Criteria.

$ browser

(I) A client computer program that can retrieve and display

information from servers on the World Wide Web.

(C) For example, Netscape's Navigator and Communicator, and

Microsoft's Explorer.

$ brute force

(I) A cryptanalysis technique or other kind of attack method

involving an exhaustive procedure that tries all possibilities,

one-by-one.

(C) For example, for ciphertext where the analyst already knows

the decryption algorithm, a brute force technique to finding the

original plaintext is to decrypt the message with every possible

key.
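The exhaustive procedure described above can be sketched in a few lines. This is a toy illustration only: a single-byte XOR "cipher" stands in for a real algorithm, and the 256-key space makes the search instant (real ciphers have key spaces far too large for this to be practical). All names are invented for the sketch.

```python
def decrypt(ciphertext: bytes, key: int) -> bytes:
    # Toy single-byte XOR cipher standing in for a real decryption algorithm.
    return bytes(b ^ key for b in ciphertext)

def brute_force(ciphertext: bytes, is_plausible):
    # Try every possible key, one by one, until the output looks right.
    for key in range(256):
        if is_plausible(decrypt(ciphertext, key)):
            return key
    return None

# The attacker knows the algorithm but not the key.
secret_key = 0x5A
ciphertext = bytes(b ^ secret_key for b in b"attack at dawn")
found = brute_force(ciphertext, lambda p: p == b"attack at dawn")  # recovers 0x5A
```

The plausibility test here is exact-match for simplicity; in a real known-plaintext or ciphertext-only attack it would be a statistical check (e.g., does the output look like English).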

$ BS7799

See: British Standard 7799.

$ byte

(I) A fundamental unit of computer storage; the smallest

addressable unit in a computer's architecture. Usually holds one

character of information and, today, usually means eight bits.

(See: octet.)

(C) Larger than a "bit", but smaller than a "word". Although

"byte" almost always means "octet" today, bytes had other sizes

(e.g., six bits, nine bits) in earlier computer architectures.

$ CA

See: certification authority.


$ CA certificate

(I) "A [digital] certificate for one CA issued by another CA."

[X509]

(C) That is, a digital certificate whose holder is able to issue

digital certificates. A v3 X.509 public-key certificate may have a

"basicConstraints" extension containing a "cA" value that

specifically "indicates whether or not the public key may be used

to verify certificate signatures."

$ call back

(I) An authentication technique for terminals that remotely access

a computer via telephone lines. The host system disconnects the

caller and then calls back on a telephone number that was

previously authorized for that terminal.

$ capability

(I) A token, usually an unforgeable data value (sometimes called a

"ticket") that gives the bearer or holder the right to access a

system resource. Possession of the token is accepted by a system

as proof that the holder has been authorized to access the

resource named or indicated by the token. (See: access control

list, credential, digital certificate.)

(C) This concept can be implemented as a digital certificate.

(See: attribute certificate.)

$ CAPI

See: cryptographic application programming interface.

$ CAPSTONE chip

(N) An integrated circuit (the Mykotronx, Inc. MYK-82) with a Type

II cryptographic processor that implements SKIPJACK, KEA, DSA,

SHA, and basic mathematical functions to support asymmetric

cryptography, and includes the key escrow feature of the CLIPPER

chip. (See: FORTEZZA card.)

$ card

See: cryptographic card, FORTEZZA card, payment card, PC card,

smart card, token.

$ card backup

See: token backup.

$ card copy

See: token copy.


$ card restore

See: token restore.

$ cardholder

(I) An entity that has been issued a card.

(O) SET usage: "The holder of a valid payment card account and

user of software supporting electronic commerce." [SET2] A

cardholder is issued a payment card by an issuer. SET ensures that

in the cardholder's interactions with merchants, the payment card

account information remains confidential. [SET1]

$ cardholder certificate

(O) SET usage: A digital certificate that is issued to a

cardholder upon approval of the cardholder's issuing financial

institution and that is transmitted to merchants with purchase

requests and encrypted payment instructions, carrying assurance

that the account number has been validated by the issuing

financial institution and cannot be altered by a third party.

[SET1]

$ cardholder certification authority (CCA)

(O) SET usage: A CA responsible for issuing digital certificates

to cardholders and operated on behalf of a payment card brand, an

issuer, or another party according to brand rules. A CCA maintains

relationships with card issuers to allow for the verification of

cardholder accounts. A CCA does not issue a CRL but does

distribute CRLs issued by root CAs, brand CAs, geopolitical CAs,

and payment gateway CAs. [SET2]

$ CAST

(N) A design procedure for symmetric encryption algorithms, and a

resulting family of algorithms, invented by C.A. (Carlisle Adams)

and S.T. (Stafford Tavares). [R2144, R2612]

$ category

(I) A grouping of sensitive information items to which a non-

hierarchical restrictive security label is applied to increase

protection of the data. (See: compartment.)

$ CAW

See: certification authority workstation.

$ CBC

See: cipher block chaining.

$ CCA

See: cardholder certification authority.


$ CCITT

(N) Acronym for French translation of International Telephone and

Telegraph Consultative Committee. Now renamed ITU-T.

$ CERT

See: computer emergency response team.

$ certificate

(I) General English usage: A document that attests to the truth of

something or the ownership of something.

(C) Security usage: See: capability, digital certificate.

(C) PKI usage: See: attribute certificate, public-key certificate.

$ certificate authority

(D) ISDs SHOULD NOT use this term because it looks like sloppy use

of "certification authority", which is the term standardized by

X.509.

$ certificate chain

(D) ISDs SHOULD NOT use this term because it duplicates the

meaning of a standardized term. Instead, use "certification path".

$ certificate chain validation

(D) ISDs SHOULD NOT use this term because it duplicates the

meaning of standardized terms and mixes concepts in a potentially

misleading way. Instead, use "certificate validation" or "path

validation", depending on what is meant. (See: validate vs.

verify.)

$ certificate creation

(I) The act or process by which a CA sets the values of a digital

certificate's data fields and signs it. (See: issue.)

$ certificate expiration

(I) The event that occurs when a certificate ceases to be valid

because its assigned lifetime has been exceeded. (See: certificate

revocation, validity period.)

$ certificate extension

See: extension.


$ certificate holder

(D) ISDs SHOULD NOT use this term as a synonym for the subject of

a digital certificate because the term is potentially ambiguous.

For example, the term could also refer to a system entity, such as

a repository, that simply has possession of a copy of the

certificate. (See: certificate owner.)

$ certificate management

(I) The functions that a CA may perform during the life cycle of a

digital certificate, including the following:

- Acquire and verify data items to bind into the certificate.

- Encode and sign the certificate.

- Store the certificate in a directory or repository.

- Renew, rekey, and update the certificate.

- Revoke the certificate and issue a CRL.

(See: archive management, certificate management, key management,

security architecture, token management.)

$ certificate owner

(D) ISDs SHOULD NOT use this term as a synonym for the subject of

a digital certificate because the term is potentially ambiguous.

For example, the term could also refer to a system entity, such as

a corporation, that has acquired a certificate to operate some

other entity, such as a Web server. (See: certificate holder.)

$ certificate policy

(I) "A named set of rules that indicates the applicability of a

certificate to a particular community and/or class of application

with common security requirements." [X509] (See: certification

practice statement.)

(C) A certificate policy can help a certificate user decide

whether a certificate should be trusted in a particular

application. "For example, a particular certificate policy might

indicate applicability of a type of certificate for the

authentication of electronic data interchange transactions for the

trading of goods within a given price range." [R2527]

(C) A v3 X.509 public-key certificate may have a

"certificatePolicies" extension that lists certificate policies,

recognized by the issuing CA, that apply to the certificate and

govern its use. Each policy is denoted by an object identifier and

may optionally have certificate policy qualifiers.


(C) SET usage: Every SET certificate specifies at least one

certificate policy, that of the SET root CA. SET uses certificate

policy qualifiers to point to the actual policy statement and to

add qualifying policies to the root policy. (See: SET qualifier.)

$ certificate policy qualifier

(I) Information that pertains to a certificate policy and is

included in a "certificatePolicies" extension in a v3 X.509

public-key certificate.

$ certificate reactivation

(I) The act or process by which a digital certificate, which a CA

has designated for revocation but not yet listed on a CRL, is

returned to the valid state.

$ certificate rekey

(I) The act or process by which an existing public-key certificate

has its public key value changed by issuing a new certificate with

a different (usually new) public key. (See: certificate renewal,

certificate update, rekey.)

(C) For an X.509 public-key certificate, the essence of rekey is

that the subject stays the same and a new public key is bound to

that subject. Other changes are made, and the old certificate is

revoked, only as required by the PKI and CPS in support of the

rekey. If changes go beyond that, the process is a "certificate

update".

(O) MISSI usage: To rekey a MISSI X.509 public-key certificate

means that the issuing authority creates a new certificate that is

identical to the old one, except the new one has a new, different

KEA key; or a new, different DSS key; or new, different KEA and

DSS keys. The new certificate also has a different serial number

and may have a different validity period. A new key creation date

and maximum key lifetime period are assigned to each newly

generated key. If a new KEA key is generated, that key is assigned

a new KMID. The old certificate remains valid until it expires,

but may not be further renewed, rekeyed, or updated.

$ certificate renewal

(I) The act or process by which the validity of the data binding

asserted by an existing public-key certificate is extended in time

by issuing a new certificate. (See: certificate rekey, certificate

update.)

(C) For an X.509 public-key certificate, this term means that the

validity period is extended (and, of course, a new serial number

is assigned) but the binding of the public key to the subject and


to other data items stays the same. The other data items are

changed, and the old certificate is revoked, only as required by

the PKI and CPS to support the renewal. If changes go beyond that,

the process is a "certificate rekey" or "certificate update".

$ certificate request

(D) ISDs SHOULD NOT use this term because it looks like imprecise

use of a term standardized by PKCS #10 and used in PKIX. Instead,

use the standard term, "certification request".

$ certificate revocation

(I) The event that occurs when a CA declares that a previously

valid digital certificate issued by that CA has become invalid;

usually stated with a revocation date.

(C) In X.509, a revocation is announced to potential certificate

users by issuing a CRL that mentions the certificate. Revocation

and listing on a CRL is only necessary before certificate

expiration.

$ certificate revocation list (CRL)

(I) A data structure that enumerates digital certificates that

have been invalidated by their issuer prior to when they were

scheduled to expire. (See: certificate expiration, X.509

certificate revocation list.)

(O) "A signed list indicating a set of certificates that are no

longer considered valid by the certificate issuer. After a

certificate appears on a CRL, it is deleted from a subsequent CRL

after the certificate's expiry. CRLs may be used to identify

revoked public-key certificates or attribute certificates and may

represent revocation of certificates issued to authorities or to

users. The term CRL is also commonly used as a generic term

applying to all the different types of revocation lists, including

CRLs, ARLs, ACRLs, etc." [FPDAM]

$ certificate revocation tree

(I) A mechanism for distributing notice of certificate

revocations; uses a tree of hash results that is signed by the

tree's issuer. Offers an alternative to issuing a CRL, but is not

supported in X.509. (See: certificate status responder.)

$ certificate serial number

(I) An integer value that (a) is associated with, and may be

carried in, a digital certificate; (b) is assigned to the

certificate by the certificate's issuer; and (c) is unique among

all the certificates produced by that issuer.


(O) "An integer value, unique within the issuing CA, which is

unambiguously associated with a certificate issued by that CA."

[X509]

$ certificate status responder

(N) FPKI usage: A trusted on-line server that acts for a CA to

provide authenticated certificate status information to

certificate users. [FPKI] Offers an alternative to issuing a CRL,

but is not supported in X.509. (See: certificate revocation tree.)

$ certificate update

(I) The act or process by which non-key data items bound in an

existing public-key certificate, especially authorizations granted

to the subject, are changed by issuing a new certificate. (See:

certificate rekey, certificate renewal.)

(C) For an X.509 public-key certificate, the essence of this

process is that fundamental changes are made in the data that is

bound to the public key, such that it is necessary to revoke the

old certificate. (Otherwise, the process is only a "certificate

rekey" or "certificate renewal".)

$ certificate user

(I) A system entity that depends on the validity of information

(such as another entity's public key value) provided by a digital

certificate. (See: relying party.)

(O) "An entity that needs to know, with certainty, the public key

of another entity." [X509]

(C) The system entity may be a human being or an organization, or

a device or process under the control of a human or an

organization.

(D) ISDs SHOULD NOT use this term as a synonym for the "subject"

of a certificate.

$ certificate validation

(I) An act or process by which a certificate user establishes that

the assertions made by a digital certificate can be trusted. (See:

valid certificate, validate vs. verify.)

(O) "The process of ensuring that a certificate is valid including

possibly the construction and processing of a certification path,

and ensuring that all certificates in that path have not expired

or been revoked." [FPDAM]


(C) To validate a certificate, a certificate user checks that the

certificate is properly formed and signed and currently in force:

- Checks the signature: Employs the issuer's public key to verify

the digital signature of the CA who issued the certificate in

question. If the verifier obtains the issuer's public key from

the issuer's own public-key certificate, that certificate

should be validated, too. That validation may lead to yet

another certificate to be validated, and so on. Thus, in

general, certificate validation involves discovering and

validating a certification path.

- Checks the syntax and semantics: Parses the certificate's

syntax and interprets its semantics, applying rules specified

for and by its data fields, such as for critical extensions in

an X.509 certificate.

- Checks currency and revocation: Verifies that the certificate

is currently in force by checking that the current date and

time are within the validity period (if that is specified in

the certificate) and that the certificate is not listed on a

CRL or otherwise announced as invalid. (CRLs themselves require

a similar validation process.)
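The three checks above can be sketched as code. This is a deliberately simplified model, not real PKI: an HMAC over the certificate fields stands in for the CA's public-key signature (so there is no certification-path discovery), and all names, the key, and the field layout are invented for the illustration.

```python
import hashlib
import hmac
from dataclasses import dataclass
from datetime import datetime, timezone

CA_KEY = b"toy-ca-secret"  # stand-in for the issuer's key material

@dataclass
class ToyCert:
    subject: str
    not_before: datetime
    not_after: datetime
    serial: int
    signature: bytes  # MAC over the other fields, a stand-in for a CA signature

def sign(subject, not_before, not_after, serial) -> bytes:
    msg = f"{subject}|{not_before.isoformat()}|{not_after.isoformat()}|{serial}".encode()
    return hmac.new(CA_KEY, msg, hashlib.sha256).digest()

def validate(cert: ToyCert, crl_serials: set, now: datetime) -> bool:
    # 1. Check the signature over the certificate's data fields.
    expected = sign(cert.subject, cert.not_before, cert.not_after, cert.serial)
    if not hmac.compare_digest(cert.signature, expected):
        return False
    # 2. Check currency: "now" must fall within the validity period.
    if not (cert.not_before <= now <= cert.not_after):
        return False
    # 3. Check revocation: the serial number must not appear on the CRL.
    return cert.serial not in crl_serials
```

In real X.509 validation, step 1 uses the issuer's public key and may recurse up a certification path, and the CRL itself must be validated the same way; the sketch only shows the shape of the three checks.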

$ certification

(I) Information system usage: Technical evaluation (usually made

in support of an accreditation action) of an information system's

security features and other safeguards to establish the extent to

which the system's design and implementation meet specified

security requirements. [FP102] (See: accreditation.)

(I) Digital certificate usage: The act or process of vouching for

the truth and accuracy of the binding between data items in a

certificate. (See: certify.)

(I) Public key usage: The act or process of vouching for the

ownership of a public key by issuing a public-key certificate that

binds the key to the name of the entity that possesses the

matching private key. In addition to binding a key to a name, a

public-key certificate may bind those items to other restrictive

or explanatory data items. (See: X.509 public-key certificate.)

(O) SET usage: "The process of ascertaining that a set of

requirements or criteria has been fulfilled and attesting to that

fact to others, usually with some written instrument. A system

that has been inspected and evaluated as fully compliant with the

SET protocol by duly authorized parties and process would be said

to have been certified compliant." [SET2]


$ certification authority (CA)

(I) An entity that issues digital certificates (especially X.509

certificates) and vouches for the binding between the data items

in a certificate.

(O) "An authority trusted by one or more users to create and

assign certificates. Optionally, the certification authority may

create the user's keys." [X509]

(C) Certificate users depend on the validity of the information provided by a certificate.

Steganography

From Wikipedia, the free encyclopedia

Not to be confused with Stenography.

Steganography (US /ˌstɛ.ɡʌnˈɔː.ɡrʌ.fi/, UK /ˌstɛɡ.ənˈɒɡ.rə.fi/) is the practice of concealing a file, message, image, or video within another file, message, image, or video. The word steganography combines the Greek words steganos (στεγανός), meaning "covered, concealed, or protected", and graphein (γράφειν), meaning "writing".

The first recorded use of the term was in 1499 by Johannes Trithemius in his Steganographia, a treatise on cryptography and steganography, disguised as a book on magic. Generally, the hidden messages appear to be (or be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a shared secret are forms of security through obscurity, whereas key-dependent steganographic schemes adhere to Kerckhoffs's principle.[1]

The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages—no matter how unbreakable—arouse interest, and may in themselves be incriminating in countries where encryption is illegal.[2] Thus, whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent, as well as concealing the contents of the message.

Steganography includes the concealment of information within computer files. In digital steganography, electronic communications may include steganographic coding inside of a transport layer, such as a document file, image file, program or protocol. Media files are ideal for steganographic transmission because of their large size. For example, a sender might start with an innocuous image file and adjust the color of every 100th pixel to correspond to a letter in the alphabet, a change so subtle that someone not specifically looking for it is unlikely to notice it.
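The pixel-adjustment idea above can be sketched in code. This is a hedged toy example, not any standard tool: it hides message bits in the least significant bit of consecutive pixel byte values (a simplification of the "every 100th pixel" scheme), so each pixel changes by at most 1. All names here are invented for the illustration.

```python
def embed(pixels, message: bytes):
    # Expand the message into an MSB-first list of bits.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover is too small for the payload"
    out = list(pixels)
    for i, bit in enumerate(bits):
        # Overwrite only the least significant bit of each pixel byte.
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract(pixels, n_bytes: int) -> bytes:
    # Read the low bit of each pixel byte and reassemble the message.
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = list(range(100, 180))   # stand-in for raw pixel bytes
stego = embed(cover, b"hi")     # visually near-identical to the cover
```

Because only the low bit of each byte is touched, no pixel value moves by more than 1, which is why such changes are hard to notice without specifically looking for them.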

Contents

1 History

2 Techniques

2.1 Physical

2.2 Digital messages

2.2.1 Digital text

2.2.2 Social steganography

2.3 Network

2.4 Printed

2.5 Using puzzles

3 Additional terminology

4 Countermeasures and detection

5 Applications

5.1 Use in modern printers

5.2 Example from modern practice

5.3 Alleged use by intelligence services

5.4 Distributed steganography

5.5 Online challenge

6 See also

7 Citations

8 References

9 External links

History

The first recorded uses of steganography can be traced back to 440 BC when Herodotus mentions two examples in his Histories.[3] Demaratus sent a warning about a forthcoming attack to Greece by writing it directly on the wooden backing of a wax tablet before applying its beeswax surface. Wax tablets were in common use then as reusable writing surfaces, sometimes used for shorthand.

In his work Polygraphiae Johannes Trithemius developed his so-called "Ave-Maria-Cipher" that can hide information in a Latin praise of God. "Auctor Sapientissimus Conseruans Angelica Deferat Nobis Charitas Potentissimi Creatoris" for example contains the concealed word VICIPEDIA.[4]

Techniques

Physical

Steganography has been widely used, including in recent historical times and the present day. Known examples include:

Hidden messages within wax tablets—in ancient Greece, people wrote messages on wood and covered them with wax that bore an innocent covering message.

Hidden messages on messenger's body—also used in ancient Greece. Herodotus tells the story of a message tattooed on the shaved head of a slave of Histiaeus, hidden by the hair that afterwards grew over it, and exposed by shaving the head. The message allegedly carried a warning to Greece about Persian invasion plans. This method has obvious drawbacks, such as delayed transmission while waiting for the slave's hair to grow, and restrictions on the number and size of messages that can be encoded on one person's scalp.

During World War II, the French Resistance sent some messages written on the backs of couriers in invisible ink.

Hidden messages on paper written in secret inks, under other messages or on the blank parts of other messages

Messages written in Morse code on yarn and then knitted into a piece of clothing worn by a courier.

Messages written on envelopes in the area covered by postage stamps.

In the early days of the printing press, it was common to mix different typefaces on a printed page due to the printer not having enough copies of some letters in one typeface. Because of this, a message could be hidden using two (or more) different typefaces, such as normal or italic.

During and after World War II, espionage agents used photographically produced microdots to send information back and forth. Microdots were typically minute (less than the size of the period produced by a typewriter). World War II microdots were embedded in the paper and covered with an adhesive, such as collodion. This was reflective, and thus detectable by viewing against glancing light. Alternative techniques included inserting microdots into slits cut into the edge of post cards.

During WWII, Velvalee Dickinson, a spy for Japan in New York City, sent information to accommodation addresses in neutral South America. She was a dealer in dolls, and her letters discussed the quantity and type of doll to ship. The stegotext was the doll orders, while the concealed "plaintext" was itself encoded and gave information about ship movements, etc. Her case became somewhat famous and she became known as the Doll Woman.

Jeremiah Denton repeatedly blinked his eyes in Morse code during the 1966 televised press conference that he was forced into as an American POW by his North Vietnamese captors, spelling out "T-O-R-T-U-R-E". This confirmed for the first time to the U.S. Military (naval intelligence) and Americans that the North Vietnamese were torturing American POWs.

Cold War counter-propaganda. In 1968, crew members of the USS Pueblo intelligence ship held as prisoners by North Korea, communicated in sign language during staged photo opportunities, informing the United States they were not defectors, but captives of the North Koreans. In other photos presented to the US, crew members gave "the finger" to the unsuspecting North Koreans, in an attempt to discredit photos that showed them smiling and comfortable.

Digital messages

Image of a tree with a steganographically hidden image. The hidden image is revealed by removing all but the two least significant bits of each color component and a subsequent normalization. The hidden image is shown below.

Image of a cat extracted from the tree image above.

Modern steganography entered the world in 1985 with the advent of personal computers being applied to classical steganography problems.[5] Development following that was very slow, but has since taken off, judging by the large amount of steganography software available:

Concealing messages within the lowest bits of noisy images or sound files.

Concealing data within encrypted data or within random data. The message to conceal is encrypted, then used to overwrite part of a much larger block of encrypted data or a block of random data (an unbreakable cipher like the one-time pad generates ciphertexts that look perfectly random without the private key).

Chaffing and winnowing.

Mimic functions convert one file to have the statistical profile of another. This can thwart statistical methods that help brute-force attacks identify the right solution in a ciphertext-only attack.

Concealed messages in tampered executable files, exploiting redundancy in the targeted instruction set.

Pictures embedded in video material (optionally played at slower or faster speed).

Injecting imperceptible delays to packets sent over the network from the keyboard. Delays in keypresses in some applications (telnet or remote desktop software) can mean a delay in packets, and the delays in the packets can be used to encode data.

Changing the order of elements in a set.

Content-Aware Steganography hides information in the semantics a human user assigns to a datagram. These systems offer security against a nonhuman adversary/warden.

Blog-Steganography. Messages are fractionalized and the (encrypted) pieces are added as comments of orphaned web-logs (or pin boards on social network platforms). In this case the selection of blogs is the symmetric key that sender and recipient are using; the carrier of the hidden message is the whole blogosphere.

Modifying the echo of a sound file (Echo Steganography).[6]

Steganography for audio signals.[7]

Image bit-plane complexity segmentation steganography

Including data in ignored sections of a file, such as after the logical end of the carrier file.

Digital text

Making text the same color as the background in word processor documents, e-mails, and forum posts.

Using Unicode characters that look like the standard ASCII character set. On most systems, there is no visual difference from ordinary text. Some systems may display the fonts differently, and the extra information would then be easily spotted, of course.

Using hidden (control) characters, and redundant use of markup (e.g., empty bold, underline or italics) to embed information within HTML, which is visible by examining the document source. HTML pages can contain code for extra blank spaces and tabs at the end of lines, and colours, fonts and sizes, which are not visible when displayed.

Using non-printing Unicode characters Zero-Width Joiner (ZWJ) and Zero-Width Non-Joiner (ZWNJ).[8] These characters are used for joining and disjoining letters in Arabic and Persian, but can be used in Roman alphabets for hiding information because they have no meaning in Roman alphabets: because they are "zero-width" they are not displayed. ZWJ and ZWNJ can represent "1" and "0".
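The ZWJ/ZWNJ scheme above maps directly to code. A minimal sketch, assuming the simplest possible framing (the invisible bit string is just appended after the visible cover text; function names are invented for the illustration):

```python
ZWNJ, ZWJ = "\u200c", "\u200d"  # zero-width non-joiner -> 0, zero-width joiner -> 1

def hide(cover: str, payload: bytes) -> str:
    # Turn the payload into bits, then into invisible characters.
    bits = "".join(f"{b:08b}" for b in payload)
    return cover + "".join(ZWJ if bit == "1" else ZWNJ for bit in bits)

def reveal(stego: str) -> bytes:
    # Collect only the zero-width characters and reassemble bytes from them.
    bits = "".join("1" if ch == ZWJ else "0" for ch in stego if ch in (ZWJ, ZWNJ))
    return bytes(int(bits[i : i + 8], 2) for i in range(0, len(bits), 8))
```

Because the added characters have zero width, the stego text renders identically to the cover text in most environments, even though its underlying character count has grown.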

Social steganography

In communities with social or government taboos or censorship, people use cultural steganography—hiding messages in idiom, pop culture references, and other messages they share publicly and assume are monitored. This relies on social context to make the underlying messages visible only to certain readers.[9][10] Examples include:

Hiding a message in the title and context of a shared video or image

Misspelling names or words that are popular in the media in a given week, to suggest an alternate meaning

Network

All information hiding techniques that may be used to exchange steganograms in telecommunication networks can be classified under the general term of network steganography. This nomenclature was originally introduced by Krzysztof Szczypiorski in 2003.[11] Contrary to typical steganographic methods that use digital media (images, audio and video files) to hide data, network steganography uses communication protocols' control elements and their intrinsic functionality. As a result, such methods are harder to detect and eliminate.[12]

Typical network steganography methods involve modification of the properties of a single network protocol. Such modification can be applied to the PDU (Protocol Data Unit),[13][14][15] to the time relations between the exchanged PDUs,[16] or both (hybrid methods).[17]

Moreover, it is feasible to utilize the relation between two or more different network protocols to enable secret communication. These applications fall under the term inter-protocol steganography.[18]

Network steganography covers a broad spectrum of techniques, which include, among others:

Steganophony — the concealment of messages in Voice-over-IP conversations, e.g. the employment of delayed or corrupted packets that would normally be ignored by the receiver (this method is called LACK — Lost Audio Packets Steganography), or, alternatively, hiding information in unused header fields.[19]

WLAN Steganography – transmission of steganograms in Wireless Local Area Networks. A practical example of WLAN Steganography is the HICCUPS system (Hidden Communication System for Corrupted Networks)[20]

Printed

Digital steganography output may be in the form of printed documents. A message, the plaintext, may be first encrypted by traditional means, producing a ciphertext. Then, an innocuous covertext is modified in some way so as to contain the ciphertext, resulting in the stegotext. For example, the letter size, spacing, typeface, or other characteristics of a covertext can be manipulated to carry the hidden message. Only a recipient who knows the technique used can recover the message and then decrypt it. Francis Bacon developed Bacon's cipher as such a technique.

The ciphertext produced by most digital steganography methods, however, is not printable. Traditional digital methods rely on perturbing noise in the channel file to hide the message; as such, the channel file must be transmitted to the recipient with no additional noise from the transmission. Printing introduces much noise into the ciphertext, generally rendering the message unrecoverable. There are techniques that address this limitation; one notable example is ASCII Art Steganography.[21]

Using puzzles

The art of concealing data in a puzzle can take advantage of the degrees of freedom in stating the puzzle, using the starting information to encode a key within the puzzle / puzzle image.

For instance, steganography using sudoku puzzles has as many keys as there are possible solutions of a sudoku puzzle, which is 6.71×10^21. This is equivalent to around 70 bits, making it much stronger than the DES method, which uses a 56-bit key.[22]
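As a rough check of that arithmetic, the effective key length in bits is the base-2 logarithm of the number of possible solutions (an illustrative sketch; the grid count is simply the figure quoted above):

```python
import math

# Number of completed sudoku grids, as quoted above (≈ 6.71 × 10^21).
SUDOKU_SOLUTIONS = 6.71e21

# Effective key length in bits: log2 of the key-space size.
bits = math.log2(SUDOKU_SOLUTIONS)
print(round(bits, 1))
```

The result is a little over 72 bits, consistent with the "around 70 bits" figure and comfortably above DES's 56-bit key.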

Additional terminology

Discussions of steganography generally use terminology analogous to (and consistent with) conventional radio and communications technology. However, some terms show up in software specifically, and are easily confused. These are most relevant to digital steganographic systems.

The payload is the data covertly communicated. The carrier is the signal, stream, or data file that hides the payload—which differs from the channel (which typically means the type of input, such as a JPEG image). The resulting signal, stream, or data file with the encoded payload is sometimes called the package, stego file, or covert message. The percentage of bytes, samples, or other signal elements modified to encode the payload is called the encoding density, and is typically expressed as a number between 0 and 1.

In a set of files, those files considered likely to contain a payload are suspects. A suspect identified through some type of statistical analysis might be referred to as a candidate.

Countermeasures and detection

Detecting physical steganography requires careful physical examination—including the use of magnification, developer chemicals and ultraviolet light. It is a time-consuming process with obvious resource implications, even in countries that employ large numbers of people to spy on their fellow nationals. However, it is feasible to screen mail of certain suspected individuals or institutions, such as prisons or prisoner-of-war (POW) camps.

During World War II, prisoner of war camps gave prisoners specially treated paper that would reveal invisible ink. An article in the 24 June 1948 issue of Paper Trade Journal by the Technical Director of the United States Government Printing Office, Morris S. Kantrowitz, describes, in general terms, the development of this paper. They used three prototype papers named Sensicoat, Anilith, and Coatalith. These were for the manufacture of post cards and stationery provided to German prisoners of war in the US and Canada. If POWs tried to write a hidden message, the special paper rendered it visible. The U.S. granted at least two patents related to this technology—one to Kantrowitz, U.S. Patent 2,515,232, "Water-Detecting Paper and Water-Detecting Coating Composition Therefor," patented 18 July 1950, and an earlier one, "Moisture-Sensitive Paper and the Manufacture Thereof", U.S. Patent 2,445,586, patented 20 July 1948. A similar strategy is to issue prisoners writing paper ruled with a water-soluble ink that runs in contact with water-based invisible ink.

In computing, steganographically encoded package detection is called steganalysis. The simplest method to detect modified files, however, is to compare them to known originals. For example, to detect information being moved through the graphics on a website, an analyst can maintain known clean copies of the materials and compare them against the current contents of the site. The differences, assuming the carrier is the same, comprise the payload. In general, using extremely high compression rates makes steganography difficult but not impossible. Compression errors provide a hiding place for data, but high compression reduces the amount of data available to hold the payload, raising the encoding density, which makes detection easier (in extreme cases, even by casual observation).
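The known-original comparison can be sketched in a few lines; the function name and sample values here are illustrative, not taken from any particular steganalysis tool:

```python
def diff_payload(clean, suspect):
    """Return (position, clean_byte, suspect_byte) for every byte that differs."""
    return [(i, c, s) for i, (c, s) in enumerate(zip(clean, suspect)) if c != s]

# Four channel values of a hypothetical carrier, before and after embedding.
clean = bytes([200, 13, 255, 254])
suspect = bytes([201, 13, 254, 254])

deltas = diff_payload(clean, suspect)
# The changed positions, and the low bits of the changed values, expose the payload.
assert [i for i, _, _ in deltas] == [0, 2]
assert [s & 1 for _, _, s in deltas] == [1, 0]
```

This is exactly why an analyst keeps pristine copies: with the original in hand, even a one-bit-per-byte embedding stands out immediately.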

Applications

Use in modern printers

Main article: Printer steganography

Some modern computer printers use steganography, including HP and Xerox brand color laser printers. These printers add tiny yellow dots to each page. The barely-visible dots contain encoded printer serial numbers and date and time stamps.[23]

Example from modern practice

The larger the cover message (in binary data, the number of bits) relative to the hidden message, the easier it is to hide the latter. For this reason, digital pictures (which contain large amounts of data) are used to hide messages on the Internet and on other communication media. It is not clear how common this actually is. For example, a 24-bit bitmap uses 8 bits to represent each of the three color values (red, green, and blue) of each pixel. The blue alone has 2^8 = 256 different levels of intensity. The difference between 11111111 and 11111110 in the value for blue intensity is likely to be undetectable by the human eye. Therefore, the least significant bit can be used more or less undetectably for something other than color information. If this is repeated for the green and the red elements of each pixel as well, it is possible to encode one letter of ASCII text for every three pixels.
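The least-significant-bit technique described above can be sketched directly. This is a minimal illustration operating on a flat list of channel values (a real implementation would read pixels from an image file); the function names are hypothetical:

```python
def embed(channels, message):
    """Hide message bytes in the least significant bit of each channel value."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(channels):
        raise ValueError("cover too small for payload")
    stego = list(channels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the LSB, then set it to the payload bit
    return stego

def extract(channels, length):
    """Recover `length` bytes from the LSBs of the first length*8 channel values."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for value in channels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (value & 1)
        out.append(byte)
    return bytes(out)

cover = [200, 13, 255, 254, 128, 129, 0, 1] * 4   # 32 channel values
stego = embed(cover, b"Hi")                        # 2 bytes = 16 payload bits
assert extract(stego, 2) == b"Hi"
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # each value moves by at most 1
```

Each channel value changes by at most one intensity level, which is the imperceptibility argument made in the paragraph above.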

Stated somewhat more formally, the objective for making steganographic encoding difficult to detect is to ensure that the changes to the carrier (the original signal) due to the injection of the payload (the signal to covertly embed) are visually (and ideally, statistically) negligible; that is to say, the changes are indistinguishable from the noise floor of the carrier. Any medium can be a carrier, but media with a large amount of redundant or compressible information are better suited.

From an information theoretical point of view, this means that the channel must have more capacity than the "surface" signal requires; that is, there must be redundancy. For a digital image, this may be noise from the imaging element; for digital audio, it may be noise from recording techniques or amplification equipment. In general, electronics that digitize an analog signal suffer from several noise sources such as thermal noise, flicker noise, and shot noise. This noise provides enough variation in the captured digital information that it can be exploited as a noise cover for hidden data. In addition, lossy compression schemes (such as JPEG) always introduce some error into the decompressed data; it is possible to exploit this for steganographic use as well.

Steganography can be used for digital watermarking, where a message (being simply an identifier) is hidden in an image so that its source can be tracked or verified (for example, Coded Anti-Piracy), or even just to identify an image (as in the EURion constellation).

Alleged use by intelligence services

In 2010, the Federal Bureau of Investigation alleged that the Russian foreign intelligence service uses customized steganography software for embedding encrypted text messages inside image files for certain communications with "illegal agents" (agents under non-diplomatic cover) stationed abroad.[24]

Distributed steganography

There are distributed steganography methods,[25] including methodologies that distribute the payload through multiple carrier files in diverse locations to make detection more difficult. One example is U.S. Patent 8,527,779, granted to cryptographer William Easttom (Chuck Easttom).

Online challenge

The online puzzle challenge Cicada 3301 has incorporated steganography with cryptography and other solving techniques since 2012.[26]

History of cryptography

From Wikipedia, the free encyclopedia

Cryptography, the use of codes and ciphers to protect secrets, began thousands of years ago. Until recent decades, it has been the story of what might be called classic cryptography — that is, of methods of encryption that use pen and paper, or perhaps simple mechanical aids. In the early 20th century, the invention of complex mechanical and electromechanical machines, such as the Enigma rotor machine, provided more sophisticated and efficient means of encryption; and the subsequent introduction of electronics and computing has allowed elaborate schemes of still greater complexity, most of which are entirely unsuited to pen and paper.

The development of cryptography has been paralleled by the development of cryptanalysis — the "breaking" of codes and ciphers. The discovery and application, early on, of frequency analysis to the reading of encrypted communications has, on occasion, altered the course of history. Thus the Zimmermann Telegram triggered the United States' entry into World War I; and Allied reading of Nazi Germany's ciphers shortened World War II, in some evaluations by as much as two years.

Until the 1970s, secure cryptography was largely the preserve of governments. Two events have since brought it squarely into the public domain: the creation of a public encryption standard (DES), and the invention of public-key cryptography.

Contents

1 Classical cryptography

2 Medieval cryptography

3 Cryptography from 1800 to World War II

4 World War II cryptography

5 Modern cryptography

5.1 Claude Shannon

5.2 An encryption standard

5.3 Public key

5.4 Hashing

5.5 Cryptography politics

5.6 Modern cryptanalysis

6 See also

7 References

8 External links

Classical cryptography

See also: Classical cipher

The earliest known use of cryptography is found in non-standard hieroglyphs carved into monuments from the Old Kingdom of Egypt circa 1900 BC.[1] These are not thought to be serious attempts at secret communications, however, but rather to have been attempts at mystery, intrigue, or even amusement for literate onlookers.[1] These are examples of still other uses of cryptography, or of something that looks (impressively if misleadingly) like it. Some clay tablets from Mesopotamia somewhat later are clearly meant to protect information—one dated near 1500 BCE was found to encrypt a craftsman's recipe for pottery glaze, presumably commercially valuable.[2][3] Later still, Hebrew scholars made use of simple monoalphabetic substitution ciphers (such as the Atbash cipher) beginning perhaps around 500 to 600 BC.[4][5]

A Scytale, an early device for encryption.

The ancient Greeks are said to have known of ciphers. The scytale transposition cipher was used by the Spartan military,[5] however it is disputed whether the scytale was for encryption, authentication, or avoiding bad omens in speech.[6][7] Herodotus tells us of secret messages physically concealed beneath wax on wooden tablets or as a tattoo on a slave's head concealed by regrown hair, though these are not properly examples of cryptography per se as the message, once known, is directly readable; this is known as steganography. Another Greek method was developed by Polybius (now called the "Polybius Square").[5] The Romans knew something of cryptography (e.g., the Caesar cipher and its variations).

Medieval cryptography

The first page of al-Kindi's manuscript On Deciphering Cryptographic Messages, containing the first descriptions of cryptanalysis and frequency analysis.

See also: Voynich Manuscript

David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document the methods of cryptanalysis.[8] It was probably religiously motivated textual analysis of the Qur'an which led to the invention of the frequency-analysis technique for breaking monoalphabetic substitution ciphers, by Al-Kindi, an Arab mathematician, sometime around AD 800. It proved the most fundamental cryptanalytic advance until WWII. Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering of Cryptographic Messages), in which he described the first cryptanalysis techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and, most importantly, gave the first descriptions of frequency analysis.[9] He also covered methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic.[10][11]
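Frequency analysis, as al-Kindi described it, rests on the fact that a monoalphabetic substitution preserves letter frequencies. A minimal sketch (the ciphertext below is an assumed Caesar-shifted English sample, chosen for illustration):

```python
from collections import Counter

def letter_frequencies(text):
    """Relative frequency of each letter, ignoring case and non-letters."""
    letters = [c for c in text.lower() if c.isalpha()]
    total = len(letters)
    return {c: n / total for c, n in Counter(letters).most_common()}

# English text shifted by a fixed amount; in English plaintext the most
# common letter is usually 'e', so the top ciphertext letter reveals the shift.
ciphertext = "WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ WKH WKH WKH"
freqs = letter_frequencies(ciphertext)
top = max(freqs, key=freqs.get)
shift = (ord(top) - ord('e')) % 26
```

Here the most frequent ciphertext letter maps back to 'e', recovering a shift of 3 — the Caesar cipher falls to a few lines of counting, which is precisely why polyalphabetic ciphers were such an important later development.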

Ahmad al-Qalqashandi (AD 1355–1418) wrote the Subh al-a 'sha, a 14-volume encyclopedia which included a section on cryptology. This information was attributed to Ibn al-Durayhim, who lived from AD 1312 to 1361, but whose writings on cryptography have been lost. The list of ciphers in this work included both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter. Also traced to Ibn al-Durayhim is an exposition on and worked example of cryptanalysis, including the use of tables of letter frequencies and sets of letters which cannot occur together in one word.

Essentially all ciphers remained vulnerable to the cryptanalytic technique of frequency analysis until the development of the polyalphabetic cipher, and many remained so thereafter. The polyalphabetic cipher was most clearly explained by Leon Battista Alberti around the year AD 1467, for which he was called the "father of Western cryptology".[1] Johannes Trithemius, in his work Poligraphia, invented the tabula recta, a critical component of the Vigenère cipher. The French cryptographer Blaise de Vigenère devised a practical polyalphabetic system which bears his name, the Vigenère cipher.[1]
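The Vigenère cipher shows how a polyalphabetic system defeats simple frequency analysis: each plaintext letter is shifted by a different amount depending on its position, so a single ciphertext letter no longer corresponds to a single plaintext letter. A minimal sketch (assumes uppercase alphabetic input only):

```python
def vigenere(text, key, decrypt=False):
    """Shift each letter of `text` by the corresponding letter of `key`.

    Polyalphabetic: the shift cycles through the key letters, so the same
    plaintext letter encrypts differently at different positions.
    Assumes `text` and `key` contain only letters A-Z.
    """
    sign = -1 if decrypt else 1
    out = []
    for i, c in enumerate(text.upper()):
        k = ord(key.upper()[i % len(key)]) - ord('A')
        out.append(chr((ord(c) - ord('A') + sign * k) % 26 + ord('A')))
    return ''.join(out)

ct = vigenere("ATTACKATDAWN", "LEMON")
assert ct == "LXFOPVEFRNHR"
assert vigenere(ct, "LEMON", decrypt=True) == "ATTACKATDAWN"
```

Note how the three A's of the plaintext become L, O, and E in the ciphertext — exactly the property that blunts a single frequency table, until Kasiski- and Babbage-style analysis (discussed below) learned to exploit the repeating key.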

In Europe, cryptography became (secretly) more important as a consequence of political competition and religious revolution. For instance, in Europe during and after the Renaissance, citizens of the various Italian states—the Papal States and the Roman Catholic Church included—were responsible for rapid proliferation of cryptographic techniques, few of which reflect understanding (or even knowledge) of Alberti's polyalphabetic advance. 'Advanced ciphers', even after Alberti, were not as advanced as their inventors, developers, or users claimed (and probably even themselves believed). They were regularly broken. This over-optimism may be inherent in cryptography, for it was then, and remains today, fundamentally difficult to know accurately how vulnerable one's system actually is. In the absence of knowledge, guesses and hopes are, predictably, common.

Cryptography, cryptanalysis, and secret-agent/courier betrayal featured in the Babington plot during the reign of Queen Elizabeth I, which led to the execution of Mary, Queen of Scots.

The chief cryptographer of King Louis XIV of France was Antoine Rossignol; he and his family created what is known as the Great Cipher because it remained unsolved from its initial use until 1890, when the French military cryptanalyst Étienne Bazeries solved it.[12] An encrypted message from the time of the Man in the Iron Mask (decrypted just prior to 1900 by Étienne Bazeries) has shed some, regrettably non-definitive, light on the identity of that real, if legendary and unfortunate, prisoner.

Outside of Europe, after the Mongols brought about the end of the Muslim Golden Age, cryptography remained comparatively undeveloped. Cryptography in Japan seems not to have been used until about 1510, and advanced techniques were not known until after the opening of the country to the West beginning in the 1860s.

Cryptography from 1800 to World War II

Main article: World War I cryptography

Although cryptography has a long and complex history, it wasn't until the 19th century that it developed anything more than ad hoc approaches to either encryption or cryptanalysis (the science of finding weaknesses in cryptosystems). Examples of the latter include Charles Babbage's Crimean War era work on mathematical cryptanalysis of polyalphabetic ciphers, redeveloped and published somewhat later by the Prussian Friedrich Kasiski. Understanding of cryptography at this time typically consisted of hard-won rules of thumb; see, for example, Auguste Kerckhoffs' cryptographic writings in the latter 19th century. Edgar Allan Poe used systematic methods to solve ciphers in the 1840s. In particular, he placed a notice of his abilities in the Philadelphia paper Alexander's Weekly (Express) Messenger, inviting submissions of ciphers, of which he proceeded to solve almost all. His success created a public stir for some months.[13] He later wrote an essay on methods of cryptography which proved useful as an introduction for novice British cryptanalysts attempting to break German codes and ciphers during World War I, and a famous story, The Gold-Bug, in which cryptanalysis was a prominent element.

Cryptography, and its misuse, were involved in the execution of Mata Hari and in Dreyfus's conviction and imprisonment, both in the early 20th century. Cryptographers were also involved in exposing the machinations which had led to the Dreyfus affair; Mata Hari, in contrast, was shot.

In World War I the Admiralty's Room 40 broke German naval codes and played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea that led to the battles of Dogger Bank and Jutland as the British fleet was sent out to intercept them. However, its most important contribution was probably in decrypting the Zimmermann Telegram, a cable from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico which played a major part in bringing the United States into the war.

In 1917, Gilbert Vernam proposed a teleprinter cipher in which a previously prepared key, kept on paper tape, is combined character by character with the plaintext message to produce the ciphertext. This led to the development of electromechanical devices as cipher machines, and to the only unbreakable cipher, the one-time pad.
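Vernam's character-by-character combination survives today as the XOR operation, and with a truly random, never-reused key of message length it is exactly the one-time pad. A minimal sketch:

```python
import secrets

def vernam(data, key):
    """Combine data and key byte by byte with XOR; XOR is its own inverse."""
    if len(key) != len(data):
        raise ValueError("one-time pad key must match message length")
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(msg))  # one-time: random, secret, never reused
ct = vernam(msg, key)                # encrypt
assert vernam(ct, key) == msg        # applying the same key again decrypts
```

Because every ciphertext is equally consistent with every possible plaintext of the same length, the scheme is unbreakable in Shannon's sense — provided the key really is random, as long as the message, and used only once; reuse (as in the Venona traffic mentioned below) destroys the guarantee.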

During the 1920s, Polish naval officers assisted the Japanese military with code and cipher development.

Mathematical methods proliferated in the period prior to World War II, notably in William F. Friedman's application of statistical techniques to cryptanalysis and cipher development, and in Marian Rejewski's initial break into the German Army's version of the Enigma system in 1932.

World War II cryptography

See also: World War II cryptography, Cryptanalysis and List of cryptographers

The Enigma machine was widely used by Nazi Germany; its cryptanalysis by the Allies provided vital Ultra intelligence.

By World War II, mechanical and electromechanical cipher machines were in wide use, although—where such machines were impractical—manual systems continued in use. Great advances were made in both cipher design and cryptanalysis, all in secrecy. Information about this period has begun to be declassified as the official British 50-year secrecy period has come to an end, as US archives have slowly opened, and as assorted memoirs and articles have appeared.

The Germans made heavy use, in several variants, of an electromechanical rotor machine known as Enigma.[14] Mathematician Marian Rejewski, at Poland's Cipher Bureau, in December 1932 deduced the detailed structure of the German Army Enigma, using mathematics and limited documentation supplied by Captain Gustave Bertrand of French military intelligence. This was the greatest breakthrough in cryptanalysis in a thousand years and more, according to historian David Kahn. Rejewski and his mathematical Cipher Bureau colleagues, Jerzy Różycki and Henryk Zygalski, continued reading Enigma and keeping pace with the evolution of the German Army machine's components and encipherment procedures. As the Poles' resources became strained by the changes being introduced by the Germans, and as war loomed, the Cipher Bureau, on the Polish General Staff's instructions, on 25 July 1939, at Warsaw, initiated French and British intelligence representatives into the secrets of Enigma decryption.

Soon after the Invasion of Poland by Germany on 1 September 1939, key Cipher Bureau personnel were evacuated southeastward; on 17 September, as the Soviet Union attacked Poland from the East, they crossed into Romania. From there they reached Paris, France; at PC Bruno, near Paris, they continued breaking Enigma, collaborating with British cryptologists at Bletchley Park as the British got up to speed on breaking Enigma. In due course, the British cryptographers - whose ranks included many chess masters and mathematics dons such as Gordon Welchman, Max Newman, and Alan Turing (the conceptual founder of modern computing) - substantially advanced the scale and technology of Enigma decryption.

German code breaking in World War II also had some success, most importantly by breaking the Naval Cipher No. 3. This enabled them to track and sink Atlantic convoys. It was only Ultra intelligence that finally persuaded the Admiralty to change their codes in June 1943. This is surprising given the success of the British Room 40 code breakers in the previous world war.

At the end of the War, on 19 April 1945, Britain's top military officers were told that they could never reveal that the German Enigma cipher had been broken because it would give the defeated enemy the chance to say they "were not well and fairly beaten".[15]

US Navy cryptographers (with cooperation from British and Dutch cryptographers after 1940) broke into several Japanese Navy crypto systems. The break into one of them, JN-25, famously led to the US victory in the Battle of Midway; and to the publication of that fact in the Chicago Tribune shortly after the battle, though the Japanese seem not to have noticed, for they kept using the JN-25 system. A US Army group, the SIS, managed to break the highest security Japanese diplomatic cipher system (an electromechanical 'stepping switch' machine called Purple by the Americans) even before WWII began. The Americans referred to the intelligence resulting from cryptanalysis, perhaps especially that from the Purple machine, as 'Magic'. The British eventually settled on 'Ultra' for intelligence resulting from cryptanalysis, particularly that from message traffic protected by the various Enigmas. An earlier British term for Ultra had been 'Boniface' in an attempt to suggest, if betrayed, that it might have an individual agent as a source.

The German military also deployed several mechanical attempts at a one-time pad. Bletchley Park called them the Fish ciphers, and Max Newman and colleagues designed and deployed the Heath Robinson, and then the world's first programmable digital electronic computer, the Colossus, to help with their cryptanalysis. The German Foreign Office began to use the one-time pad in 1919; some of this traffic was read in WWII partly as the result of the recovery of some key material in South America that was discarded without sufficient care by a German courier.

The Japanese Foreign Office used a locally developed electrical stepping switch based system (called Purple by the US), and also had used several similar machines for attachés in some Japanese embassies. One of these stepping switch based systems was called the 'M-machine' by the US; another was referred to as 'Red'. All were broken, to one degree or another, by the Allies.

SIGABA is described in U.S. Patent 6,175,625, filed in 1944 but not issued until 2001.

Allied cipher machines used in WWII included the British TypeX and the American SIGABA; both were electromechanical rotor designs similar in spirit to the Enigma, albeit with major improvements. Neither is known to have been broken by anyone during the War. The Poles used the Lacida machine, but its security was found to be less than intended (by Polish Army cryptographers in the UK), and its use was discontinued. US troops in the field used the M-209 and the still less secure M-94 family machines. British SOE agents initially used 'poem ciphers' (memorized poems were the encryption/decryption keys), but later in the War, they began to switch to one-time pads.

The VIC cipher (used at least until 1957 in connection with Rudolf Abel's NY spy ring) was a very complex hand cipher, and is claimed to be the most complicated known to have been used by the Soviets, according to David Kahn in Kahn on Codes. For the decrypting of Soviet ciphers (particularly when one-time pads were reused), see Venona project.

Modern cryptography

Encryption in modern times is achieved by using algorithms that have a key to encrypt and decrypt information. These keys convert the messages and data into “digital gibberish” through encryption and then return them to the original form through decryption. In general, the longer the key is, the more difficult it is to crack the code. This holds true because deciphering an encrypted message by brute force would require the attacker to try every possible key. To put this in context, each binary unit of information, or bit, has a value of 0 or 1. An 8-bit key would then have 256, or 2^8, possible keys. A 56-bit key would have 2^56, or about 72 quadrillion, possible keys to try to decipher the message. With modern technology, these numbers are becoming easier to crack; however, as technology advances, so does the quality of encryption. Since WWII, one of the most notable advances in the study of cryptography has been the introduction of public-key algorithms, which use a public key to encrypt but a particular, private key to decrypt.[16]
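The key-space arithmetic above can be checked directly; the attacker speed below is a purely hypothetical figure chosen for illustration:

```python
# Key-space sizes grow exponentially with key length.
assert 2 ** 8 == 256
assert 2 ** 56 == 72_057_594_037_927_936  # ≈ 72 quadrillion, as stated above

# A brute-force attacker tries every key in the worst case; half on average.
keys_per_second = 10 ** 9  # hypothetical attacker speed: one billion keys/s
years = (2 ** 56 / 2) / keys_per_second / (3600 * 24 * 365)
```

At that assumed rate, the average search of a 56-bit key space takes on the order of a year, whereas adding even a handful of key bits multiplies the effort — each extra bit doubles it.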

Beginning around the 1990s, the use of the Internet for commercial purposes and the introduction of e-commerce called for a widespread standard for encryption. Before the introduction of the Advanced Encryption Standard (AES), information sent over the Internet, such as financial data, was encrypted using the Data Encryption Standard (DES), a symmetric-key cipher. This was used for its speed, as DES could scramble massive amounts of data at high speeds. The problem with this was that over time, more users knew the key, and the risk of security breaches increased. Around the late 1990s to early 2000s, the use of public-key algorithms became a more common approach for encryption, and soon a hybrid of the two schemes became the way for e-commerce operations to proceed. Additionally, the creation of a new protocol known as the Secure Sockets Layer, or SSL, led the way for online transactions to take place. Transactions ranging from purchasing goods to online bill pay and banking used SSL. Furthermore, as wireless Internet connections became more common among households, the need for encryption grew, as a level of security was needed in these everyday situations.[17]

Claude Shannon

Claude E. Shannon is considered by many to be the father of mathematical cryptography. Shannon worked for several years at Bell Labs, and during his time there, he produced an article entitled “A mathematical theory of cryptography”. This article was written in 1945 and eventually was published, in revised form, in the Bell System Technical Journal in 1949. Shannon also produced another article entitled “A mathematical theory of communication”, published in 1948. Shannon was inspired during the war to address “[t]he problems of cryptography [because] secrecy systems furnish an interesting application of communication theory”. It is commonly accepted that his 1949 paper was the starting point for the development of modern cryptography. Shannon provided the two main goals of cryptography: secrecy and authenticity. His focus was on exploring secrecy; thirty-five years later, G.J. Simmons would address the issue of authenticity. This body of work highlights one of the most significant aspects of Shannon’s contribution: cryptography’s transition from art to science.[18]

In his works, Shannon described the two basic types of systems for secrecy. The first are those designed with the intent to protect against attackers who have infinite resources with which to decode a message (theoretical secrecy, now unconditional security), and the second are those designed to protect against attackers with finite resources (practical secrecy, now computational security). Most of Shannon’s work focused on theoretical secrecy; here, Shannon introduced a definition for the “unbreakability” of a cipher. If a cipher was determined “unbreakable”, it was considered to have “perfect secrecy”. In proving “perfect secrecy”, Shannon determined that this could only be obtained with a secret key whose length, given in binary digits, was greater than or equal to the number of bits contained in the information being encrypted. Furthermore, Shannon developed the “unicity distance”, defined as the “amount of plaintext that… determines the secret key.”[18]

Shannon’s work influenced further cryptography research in the 1970s, as the public-key cryptography developers, M. E. Hellman and W. Diffie cited Shannon’s research as a major influence. His work also impacted modern designs of secret-key ciphers. At the end of Shannon’s work with cryptography, progress slowed until Hellman and Diffie introduced their paper involving “public-key cryptography”.[18]

An encryption standard

The mid-1970s saw two major public (i.e., non-secret) advances. First was the publication of the draft Data Encryption Standard in the U.S. Federal Register on 17 March 1975. The proposed DES cipher was submitted by a research group at IBM, at the invitation of the National Bureau of Standards (now NIST), in an effort to develop secure electronic communication facilities for businesses such as banks and other large financial organizations. After 'advice' and modification by NSA, acting behind the scenes, it was adopted and published as a Federal Information Processing Standard Publication in 1977 (currently at FIPS 46-3). DES was the first publicly accessible cipher to be 'blessed' by a national agency such as NSA. The release of its specification by NBS stimulated an explosion of public and academic interest in cryptography.

The aging DES was officially replaced by the Advanced Encryption Standard (AES) in 2001 when NIST announced FIPS 197. After an open competition, NIST selected Rijndael, submitted by two Belgian cryptographers, to be the AES. DES, and more secure variants of it (such as Triple DES), are still used today, having been incorporated into many national and organizational standards. However, its 56-bit key size has been shown to be insufficient to guard against brute force attacks (one such attack, undertaken by the cyber civil-rights group Electronic Frontier Foundation in 1998, succeeded in 56 hours).[19] As a result, use of straight DES encryption is now without doubt insecure for use in new cryptosystem designs, and messages protected by older cryptosystems using DES, and indeed all messages sent since 1976 using DES, are also at risk. Regardless of DES's inherent quality, the DES key size (56 bits) was thought to be too small by some even in 1976, perhaps most publicly by Whitfield Diffie. There was suspicion that government organizations even then had sufficient computing power to break DES messages; clearly others have achieved this capability.

Public key

The second development, in 1976, was perhaps even more important, for it fundamentally changed the way cryptosystems might work. This was the publication of the paper New Directions in Cryptography by Whitfield Diffie and Martin Hellman. It introduced a radically new method of distributing cryptographic keys, which went far toward solving one of the fundamental problems of cryptography, key distribution, and has become known as Diffie-Hellman key exchange. The article also stimulated the almost immediate public development of a new class of enciphering algorithms, the asymmetric key algorithms.

Prior to that time, all useful modern encryption algorithms had been symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. All of the electromechanical machines used in WWII were of this logical class, as were the Caesar and Atbash ciphers and essentially all cipher systems throughout history. The 'key' for a code is, of course, the codebook, which must likewise be distributed and kept secret, and so shares most of the same problems in practice.

Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system (the term usually used is 'via a secure channel') such as a trustworthy courier with a briefcase handcuffed to a wrist, or face-to-face contact, or a loyal carrier pigeon. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels aren't available for key exchange, or when, as is sensible cryptographic practice, keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users. A system of this kind is known as a secret key, or symmetric key cryptosystem. D-H key exchange (and succeeding improvements and variants) made operation of these systems much easier, and more secure, than had ever been possible before in all of history.
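The key-distribution burden described above grows quadratically: if every pair of users needs its own secret, n users require C(n, 2) = n(n-1)/2 keys. A minimal sketch of the arithmetic:

```python
def pairwise_keys(n: int) -> int:
    """Number of distinct symmetric keys needed so that each of the
    n*(n-1)/2 possible pairs of n users shares its own secret key."""
    return n * (n - 1) // 2

# The count explodes as the user population grows.
for n in (2, 10, 100, 1000):
    print(f"{n} users -> {pairwise_keys(n)} keys")
```

This is why pre-shared symmetric keys become unmanageable at scale, and why Diffie-Hellman-style key exchange was such a practical advance.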

In contrast, asymmetric key encryption uses a pair of mathematically related keys, each of which decrypts the encryption performed using the other. Some, but not all, of these algorithms have the additional property that one of the paired keys cannot be deduced from the other by any known method other than trial and error. An algorithm of this kind is known as a public key or asymmetric key system. Using such an algorithm, only one key pair is needed per user. By designating one key of the pair as private (always secret), and the other as public (often widely available), no secure channel is needed for key exchange. So long as the private key stays secret, the public key can be widely known for a very long time without compromising security, making it safe to reuse the same key pair indefinitely.

For two users of an asymmetric key algorithm to communicate securely over an insecure channel, each user will need to know their own public and private keys as well as the other user's public key. Take this basic scenario: Alice and Bob each have a pair of keys they've been using for years with many other users. At the start of their message, they exchange public keys, unencrypted over an insecure line. Alice then encrypts a message using her private key, and then re-encrypts that result using Bob's public key. The double-encrypted message is then sent as digital data over a wire from Alice to Bob. Bob receives the bit stream and decrypts it using his own private key, and then decrypts that bit stream using Alice's public key. If the final result is recognizable as a message, Bob can be confident that the message actually came from someone who knows Alice's private key (presumably actually her if she's been careful with her private key), and that anyone eavesdropping on the channel will need Bob's private key in order to understand the message.
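The double-encryption scenario above can be sketched with textbook RSA. The key numbers below are tiny, hypothetical values chosen only for illustration; real RSA uses enormous primes plus padding, neither of which this toy shows.

```python
# Toy "sign then encrypt" walk-through of the Alice-and-Bob scenario,
# using textbook RSA with tiny illustrative primes. These numbers are
# hypothetical and far too small to be secure.

# Alice's key pair: n = 61 * 53, with e * d = 1 (mod phi(n)).
ALICE_N, ALICE_E, ALICE_D = 3233, 17, 2753
# Bob's key pair: n = 89 * 97.
BOB_N, BOB_E, BOB_D = 8633, 5, 5069

message = 65  # a message encoded as a number smaller than ALICE_N

# Alice: encrypt with her PRIVATE key (the "signature" step), then
# re-encrypt the result with Bob's PUBLIC key.
signed = pow(message, ALICE_D, ALICE_N)
ciphertext = pow(signed, BOB_E, BOB_N)

# Bob: decrypt with his PRIVATE key, then with Alice's PUBLIC key.
recovered = pow(pow(ciphertext, BOB_D, BOB_N), ALICE_E, ALICE_N)
assert recovered == message  # authentic (from Alice) and confidential (to Bob)
```

Only Bob's private key undoes the outer layer, and only Alice's private key could have produced the inner one, which is exactly the confidence the paragraph describes.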

Asymmetric algorithms rely for their effectiveness on a class of problems in mathematics called one-way functions, which require relatively little computational power to execute, but vast amounts of power to reverse, if reversal is possible at all. A classic example of a one-way function is multiplication of very large prime numbers. It's fairly quick to multiply two large primes, but very difficult to find the factors of the product of two large primes. Because of the mathematics of one-way functions, most possible keys are bad choices as cryptographic keys; only a small fraction of the possible keys of a given length are suitable, and so asymmetric algorithms require very long keys to reach the same level of security provided by relatively shorter symmetric keys. The need to both generate the key pairs and perform the encryption/decryption operations makes asymmetric algorithms computationally expensive compared to most symmetric algorithms. Since symmetric algorithms can often use any sequence of (random, or at least unpredictable) bits as a key, a disposable session key can be quickly generated for short-term use. Consequently, it is common practice to use a long asymmetric key to exchange a disposable, much shorter (but just as strong) symmetric key. The slower asymmetric algorithm securely sends a symmetric session key, and the faster symmetric algorithm takes over for the remainder of the message.
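The multiply-versus-factor asymmetry is easy to demonstrate at small scale. The primes below are deliberately tiny (real keys use primes hundreds of digits long, where trial division is hopeless):

```python
import math

def trial_factor(n: int) -> tuple:
    """Recover p and q from n = p * q by brute-force trial division --
    the 'hard direction' of the one-way function."""
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime")

p, q = 10007, 10009        # two small primes
n = p * q                  # the 'easy direction': a single multiplication

# Reversing even this toy product takes ~10,000 division attempts;
# doubling the digit count squares the work, which is the asymmetry
# asymmetric cryptography relies on.
assert trial_factor(n) == (p, q)
```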

Asymmetric key cryptography, Diffie-Hellman key exchange, and the best known of the public key / private key algorithms (i.e., what is usually called the RSA algorithm) all seem to have been independently developed at a UK intelligence agency before the public announcement by Diffie and Hellman in 1976. GCHQ has released documents claiming they had developed public key cryptography before the publication of Diffie and Hellman's paper.[citation needed] Various classified papers were written at GCHQ during the 1960s and 1970s which eventually led to schemes essentially identical to RSA encryption and to Diffie-Hellman key exchange in 1973 and 1974. Some of these have now been published, and the inventors (James H. Ellis, Clifford Cocks, and Malcolm Williamson) have made public (some of) their work.

Hashing

Hashing is a common technique used in cryptography to encode information quickly using typical algorithms. Generally, an algorithm is applied to a string of text, and the resulting string becomes the "hash value". This creates a "digital fingerprint" of the message, as the specific hash value is used to identify a specific message. The output from the algorithm is also referred to as a "message digest" or a "checksum". Hashing is good for determining whether information has been changed in transmission. If the hash value is different upon reception than upon sending, there is evidence the message has been altered. Once the algorithm has been applied to the data to be hashed, the hash function produces a fixed-length output. Essentially, anything passed through the hash function should resolve to the same length output as anything else passed through the same hash function. It is important to note that hashing is not the same as encrypting. Hashing is a one-way operation that is used to transform data into the compressed message digest. Additionally, the integrity of the message can be measured with hashing. Conversely, encryption is a two-way operation that is used to transform plaintext into ciphertext and then vice versa. In encryption, the confidentiality of a message is guaranteed.[20]
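Both properties described above, the fixed-length output and the tamper evidence, can be seen directly with a standard hash function such as SHA-256 from Python's standard library:

```python
import hashlib

msg = b"Attack at dawn"
digest = hashlib.sha256(msg).hexdigest()

# Fixed-length output: every SHA-256 digest is 256 bits (64 hex
# characters), no matter how large the input is.
assert len(digest) == 64
assert len(hashlib.sha256(b"x" * 1_000_000).hexdigest()) == 64

# Tamper evidence: changing even one word yields a completely
# different digest, so a mismatch on reception reveals alteration.
tampered = hashlib.sha256(b"Attack at dusk").hexdigest()
assert digest != tampered
```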

Hash functions can be used to verify digital signatures, so that when signing documents via the Internet, the signature is applied to one particular individual. Much like a hand-written signature, these signatures are verified by assigning their exact hash code to a person. Furthermore, hashing is applied to passwords for computer systems. Hashing for passwords began with the UNIX operating system. A user on the system would first create a password. That password would be hashed, using an algorithm or key, and then stored in a password file. This is still prominent today, as web applications that require passwords will often hash users' passwords and store them in a database.[21]
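A minimal sketch of the password-hashing pattern just described, using PBKDF2 from Python's standard library (the iteration count is an illustrative choice, not a recommendation):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; tune for your hardware

def hash_password(password, salt=None):
    """Salted, deliberately slow password hash (PBKDF2-HMAC-SHA256),
    a modern descendant of the UNIX password-file scheme."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, stored):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

# The server stores (salt, digest), never the password itself.
salt, stored = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, stored)
assert not verify("wrong guess", salt, stored)
```

Because hashing is one-way, a stolen password file yields digests, not passwords; the salt and slow iteration count further blunt brute-force guessing.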

Cryptography politics

The public developments of the 1970s broke the near monopoly on high-quality cryptography held by government organizations (see S. Levy's Crypto for a journalistic account of some of the policy controversy of the time in the US). For the first time ever, those outside government organizations had access to cryptography not readily breakable by anyone (including governments). Considerable controversy, and conflict, both public and private, began more or less immediately, sometimes called the crypto wars. It has not yet subsided. In many countries, for example, export of cryptography is subject to restrictions. Until 1996 export from the U.S. of cryptography using keys longer than 40 bits (too small to be very secure against a knowledgeable attacker) was sharply limited. As recently as 2004, former FBI Director Louis Freeh, testifying before the 9/11 Commission, called for new laws against public use of encryption.

One of the most significant people favoring strong encryption for public use was Phil Zimmermann. He wrote and then in 1991 released PGP (Pretty Good Privacy), a very high quality crypto system. He distributed a freeware version of PGP when he felt threatened by legislation then under consideration by the US Government that would require backdoors to be included in all cryptographic products developed within the US. His system was released worldwide shortly after he released it in the US, and that began a long criminal investigation of him by the US Government Justice Department for the alleged violation of export restrictions. The Justice Department eventually dropped its case against Zimmermann, and the freeware distribution of PGP has continued around the world. PGP even eventually became an open Internet standard (RFC 2440 or OpenPGP).

Modern cryptanalysis

While modern ciphers like AES and the higher quality asymmetric ciphers are widely considered unbreakable, poor designs and implementations are still sometimes adopted and there have been important cryptanalytic breaks of deployed crypto systems in recent years. Notable examples of broken crypto designs include the first Wi-Fi encryption scheme WEP, the Content Scrambling System used for encrypting and controlling DVD use, the A5/1 and A5/2 ciphers used in GSM cell phones, and the CRYPTO1 cipher used in the widely deployed MIFARE Classic smart cards from NXP Semiconductors, a spun-off division of Philips Electronics. All of these are symmetric ciphers. Thus far, not one of the mathematical ideas underlying public key cryptography has been proven to be 'unbreakable', and so some future advance in mathematical analysis might render systems relying on them insecure. While few informed observers foresee such a breakthrough, the key size recommended for security as best practice keeps increasing as the computing power required for breaking codes becomes cheaper and more available.


Information

From Wikipedia, the free encyclopedia

For other uses, see Information (disambiguation).

The ASCII codes for the word "Wikipedia" represented in binary, the numeral system most commonly used for encoding textual computer information

Information (shortened as info or info.) is that which informs, i.e. an answer to a question, as well as that from which knowledge and data can be derived (as data represents values attributed to parameters, and knowledge signifies understanding of real things or abstract concepts).[1] As it regards data, the information's existence is not necessarily coupled to an observer (it exists beyond an event horizon, for example), while in the case of knowledge, the information requires a cognitive observer.

At its most fundamental, information is any propagation of cause andeffect within a system. Information is conveyed either as the content of a message or through direct or indirect observation of some thing. That which is perceived can be construed as a message inits own right, and in that sense, information is always conveyed as the content of a message.

Information can be encoded into various forms for transmission and interpretation (for example, information may be encoded into a sequence of signs, or transmitted via a sequence of signals). It canalso be encrypted for safe storage and communication.

Information resolves uncertainty. The uncertainty of an event is measured by its probability of occurrence and is inversely proportional to that. The more uncertain an event, the more information is required to resolve uncertainty of that event. The bit is a typical unit of information, but other units such as the nat may be used. Example: the information in one "fair" coin flip is log2(2/1) = 1 bit, and in two fair coin flips is log2(4/1) = 2 bits.
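The coin-flip arithmetic can be checked directly; a minimal sketch of self-information in bits:

```python
import math

def information_content(probability: float) -> float:
    """Self-information of an event, in bits: log2(1/p).
    Rarer events carry more information."""
    return math.log2(1 / probability)

assert information_content(1 / 2) == 1.0   # one fair coin flip
assert information_content(1 / 4) == 2.0   # two fair coin flips
assert information_content(1 / 8) == 3.0   # three fair coin flips
```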

The concept that information is the message has different meanings in different contexts.[2] Thus the concept of information becomes closely related to notions of constraint, communication, control, data, form, education, knowledge, meaning, understanding, mental stimuli, pattern, perception, representation, and entropy.

Contents

1 Etymology

2 Information theory approach

3 As sensory input

4 As representation and complexity

5 As an influence which leads to a transformation

6 As a property in physics

7 The application of information study

8 Technologically mediated information

9 As records

10 Semiotics

11 See also

12 References

13 Further reading

14 External links

Etymology


See also: History of the word and concept "information"

The English word was apparently derived from the Latin stem (information-) of the nominative (informatio): this noun is derived from the verb informare (to inform) in the sense of "to give form to the mind", "to discipline", "instruct", "teach". Inform itself comes (via French informer) from the Latin verb informare, which means to give form, or to form an idea of. Furthermore, Latin itself already contained the word informatio meaning concept or idea, but the extent to which this may have influenced the development of the word information in English is not clear.

The ancient Greek word for form was μορφή (morphe; cf. morph) and also εἶδος (eidos) "kind, idea, shape, set"; the latter word was famously used in a technical philosophical sense by Plato (and later Aristotle) to denote the ideal identity or essence of something (see Theory of Forms). "Eidos" can also be associated with thought, proposition, or even concept.

The ancient Greek word for information is πληροφορία, which transliterates (plērophoria) from πλήρης (plērēs) "fully" and φέρω (phorein), frequentative of (pherein) to carry through. It literally means "fully bears" or "conveys fully". In the modern Greek language the word Πληροφορία is still in daily use and has the same meaning as the word information in English. Unfortunately, biblical scholars have translated (plērophoria) into "full assurance", creating a connotative meaning of the word. In addition to its primary meaning, the word Πληροφορία as a symbol has deep roots in Aristotle's semiotic triangle. In this regard it can be interpreted to communicate information to the one decoding that specific type of sign. This is something that occurs frequently with the etymology of many words in ancient and modern Greek, where there is a very strong denotative relationship between the signifier, e.g. the word symbol that conveys a specific encoded interpretation, and the signified, e.g. a concept whose meaning the interpretant attempts to decode.


Information theory approach

Main article: Information theory

From the stance of information theory, information is taken as an ordered sequence of symbols from an alphabet, say an input alphabet χ, and an output alphabet ϒ. Information processing consists of an input-output function that maps any input sequence from χ into an output sequence from ϒ. The mapping may be probabilistic or deterministic. It may have memory or be memoryless.[3]
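As a minimal sketch of this view (the alphabets and mapping below are invented for illustration), a deterministic, memoryless information-processing function is simply a per-symbol lookup from the input alphabet to the output alphabet:

```python
# Hypothetical alphabets: input X = {'a', 'b', 'c'}, output Y = {0, 1}.
TABLE = {'a': 0, 'b': 1, 'c': 1}

def process(sequence):
    """A deterministic, memoryless input-output function: each output
    symbol depends only on the current input symbol, not on history."""
    return [TABLE[symbol] for symbol in sequence]

assert process("abca") == [0, 1, 1, 0]
```

A probabilistic channel would replace the lookup with a draw from a conditional distribution, and a system with memory would let each output depend on earlier inputs as well.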

As sensory input

Often information can be viewed as a type of input to an organism or system. Inputs are of two kinds; some inputs are important to the function of the organism (for example, food) or system (energy) by themselves. In his book Sensory Ecology,[4] Dusenbery called these causal inputs. Other inputs (information) are important only because they are associated with causal inputs and can be used to predict the occurrence of a causal input at a later time (and perhaps another place). Some information is important because of association with other information, but eventually there must be a connection to a causal input. In practice, information is usually carried by weak stimuli that must be detected by specialized sensory systems and amplified by energy inputs before they can be functional to the organism or system. For example, light is often a causal input to plants but provides information to animals. The colored light reflected from a flower is too weak to do much photosynthetic work, but the visual system of the bee detects it and the bee's nervous system uses the information to guide the bee to the flower, where the bee often finds nectar or pollen, which are causal inputs, serving a nutritional function.

As representation and complexity

The cognitive scientist and applied mathematician Ronaldo Vigo argues that information is a concept that involves at least two related entities in order to make quantitative sense. These are any dimensionally defined category of objects S, and any of its subsets R. R, in essence, is a representation of S, or, in other words, conveys representational (and hence, conceptual) information about S. Vigo then defines the amount of information that R conveys about S as the rate of change in the complexity of S whenever the objects in R are removed from S. Under "Vigo information", pattern, invariance, complexity, representation, and information (five fundamental constructs of universal science) are unified under a novel mathematical framework.[5][6][7] Among other things, the framework aims to overcome the limitations of Shannon-Weaver information when attempting to characterize and measure subjective information.

As an influence which leads to a transformation

Information is any type of pattern that influences the formation or transformation of other patterns.[8][9] In this sense, there is no need for a conscious mind to perceive, much less appreciate, the pattern.[citation needed] Consider, for example, DNA. The sequence of nucleotides is a pattern that influences the formation and development of an organism without any need for a conscious mind.

Systems theory at times seems to refer to information in this sense, assuming information does not necessarily involve any conscious mind, and patterns circulating (due to feedback) in the system can be called information. In other words, it can be said that information in this sense is something potentially perceived as representation, though not created or presented for that purpose. For example, Gregory Bateson defines "information" as a "difference that makes a difference".[10]

If, however, the premise of "influence" implies that information has been perceived by a conscious mind and also interpreted by it, the specific context associated with this interpretation may cause the transformation of the information into knowledge. Complex definitions of both "information" and "knowledge" make such semantic and logical analysis difficult, but the condition of "transformation" is an important point in the study of information as it relates to knowledge, especially in the business discipline of knowledge management. In this practice, tools and processes are used to assist a knowledge worker in performing research and making decisions, including steps such as:

reviewing information in order to effectively derive value and meaning

referencing metadata if any is available

establishing a relevant context, often selecting from many possible contexts

deriving new knowledge from the information

making decisions or recommendations from the resulting knowledge.

Stewart (2001) argues that the transformation of information into knowledge is a critical one, lying at the core of value creation and competitive advantage for the modern enterprise.

The Danish Dictionary of Information Terms[11] argues that information only provides an answer to a posed question. Whether the answer provides knowledge depends on the informed person. So a generalized definition of the concept should be: "information" = "an answer to a specific question".

When Marshall McLuhan speaks of media and their effects on human cultures, he refers to the structure of artifacts that in turn shape our behaviors and mindsets. Also, pheromones are often said to be "information" in this sense.

As a property in physics

Main article: Physical information


Information has a well-defined meaning in physics. In 2003 J. D. Bekenstein claimed that a growing trend in physics was to define the physical world as being made up of information itself (and thus information is defined in this way) (see Digital physics). Examples of this include the phenomenon of quantum entanglement, where particles can interact without reference to their separation or the speed of light. Material information itself cannot travel faster than light even if that information is transmitted indirectly. This could lead to all attempts at physically observing a particle with an "entangled" relationship to another being slowed down, even though the particles are not connected in any other way other than by the information they carry.

The mathematical universe hypothesis suggests a new paradigm, in which virtually everything, from particles and fields, through biological entities and consciousness, to the multiverse itself, could be described by mathematical patterns of information. By the same token, the cosmic void can be conceived of as the absence of material information in space (setting aside the virtual particles that pop in and out of existence due to quantum fluctuations, as well as the gravitational field and the dark energy). Nothingness can be understood then as that within which no space, time, energy, matter, or any other type of information could exist, which would be possible if symmetry and structure break within the manifold of the multiverse (i.e. the manifold would have tears or holes).

Another link is demonstrated by the Maxwell's demon thought experiment. In this experiment, a direct relationship between information and another physical property, entropy, is demonstrated. A consequence is that it is impossible to destroy information without increasing the entropy of a system; in practical terms this often means generating heat. Another, more philosophical outcome is that information could be thought of as interchangeable with energy. Toyabe et al. experimentally showed in nature that information can be converted into work.[12] Thus, in the study of logic gates, the theoretical lower bound of thermal energy released by an AND gate is higher than for the NOT gate (because information is destroyed in an AND gate and simply converted in a NOT gate). Physical information is of particular importance in the theory of quantum computers.

In thermodynamics, information is any kind of event that affects the state of a dynamic system that can interpret the information.

The application of information study

The information cycle (addressed as a whole or in its distinct components) is of great concern to Information Technology, Information Systems, as well as Information Science. These fields deal with those processes and techniques pertaining to information capture (through sensors) and generation (through computation), processing (including encoding, encryption, compression, packaging), transmission (including all telecommunication methods), presentation (including visualization / display methods), storage (including magnetic, optical, holographic methods), etc. Information does not cease to exist; it may only get scrambled beyond any possibility of retrieval (within Information Theory, see lossy compression, for example; while in Physics, the black hole information paradox gets solved with the aid of the holographic principle).

Information Visualization (shortened as InfoVis) depends on the computation and digital representation of data, and assists users in pattern recognition and anomaly detection.

Partial map of the Internet, with nodes representing IP addresses

Galactic (including dark) matter distribution in a cubic section of the Universe


Information embedded in an abstract mathematical object with symmetry breaking nucleus

Visual representation of a strange attractor, with converted data of its fractal structure

Information Security (shortened as InfoSec) is the ongoing process of exercising due diligence to protect information, and information systems, from unauthorized access, use, disclosure, destruction, modification, disruption or distribution, through algorithms and procedures focused on monitoring and detection, as well as incident response and repair.

Information Analysis is the process of inspecting, transforming, and modelling information, by converting raw data into actionable knowledge, in support of the decision-making process.

Information Communication represents the convergence of informatics, telecommunication and audio-visual media & content.

Technologically mediated information

It is estimated that the world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986 – which is the informational equivalent to less than one 730-MB CD-ROM per person (539 MB per person) – to 295 (optimally compressed) exabytes in 2007.[13] This is the informational equivalent of almost 61 CD-ROMs per person in 2007.[14]

The world’s combined technological capacity to receive information through one-way broadcast networks was the informational equivalent of 174 newspapers per person per day in 2007.[13]


The world's combined effective capacity to exchange information through two-way telecommunication networks was the informational equivalent of 6 newspapers per person per day in 2007.[14]

As records

Records are specialized forms of information. Essentially, records are information produced consciously or as by-products of business activities or transactions and retained because of their value. Primarily, their value is as evidence of the activities of the organization but they may also be retained for their informational value. Sound records management ensures that the integrity of records is preserved for as long as they are required.

The international standard on records management, ISO 15489, defines records as "information created, received, and maintained as evidence and information by an organization or person, in pursuance of legal obligations or in the transaction of business". The International Council on Archives (ICA) Committee on Electronic Records defined a record as "a specific piece of recorded information generated, collected or received in the initiation, conduct or completion of an activity and that comprises sufficient content, context and structure to provide proof or evidence of that activity".

Records may be maintained to retain corporate memory of the organization or to meet legal, fiscal or accountability requirements imposed on the organization. Willis (2005) expressed the view that sound management of business records and information delivered "...six key requirements for good corporate governance...transparency; accountability; due process; compliance; meeting statutory and common law requirements; and security of personal and corporate information."

Semiotics


Beynon-Davies[15][16] explains the multi-faceted concept of information in terms of signs and signal-sign systems. Signs themselves can be considered in terms of four inter-dependent levels, layers or branches of semiotics: pragmatics, semantics, syntax, and empirics. These four layers serve to connect the social world on the one hand with the physical or technical world on the other.

Pragmatics is concerned with the purpose of communication. Pragmatics links the issue of signs with the context within which signs are used. The focus of pragmatics is on the intentions of living agents underlying communicative behaviour. In other words, pragmatics link language to action.

Semantics is concerned with the meaning of a message conveyed in a communicative act. Semantics considers the content of communication. Semantics is the study of the meaning of signs - the association between signs and behaviour. Semantics can be considered as the study of the link between symbols and their referents or concepts, particularly the way in which signs relate to human behavior.

Syntax is concerned with the formalism used to represent a message. Syntax as an area studies the form of communication in terms of the logic and grammar of sign systems. Syntax is devoted to the study of the form rather than the content of signs and sign-systems.

Empirics[17] is the study of the signals used to carry a message; the physical characteristics of the medium of communication. Empirics is devoted to the study of communication channels and their characteristics, e.g., sound, light, electronic transmission etc.

Nielsen (2008) discusses the relationship between semiotics and information in relation to dictionaries. The concept of lexicographic information costs is introduced and refers to the efforts users of dictionaries need to make in order to, first, find the data sought and, secondly, understand the data so that they can generate information.

Communication normally exists within the context of some social situation. The social situation sets the context for the intentions conveyed (pragmatics) and the form in which communication takes place. In a communicative situation, intentions are expressed through messages which comprise collections of inter-related signs taken from a language which is mutually understood by the agents involved in the communication. Mutual understanding implies that agents involved understand the chosen language in terms of its agreed syntax (syntactics) and semantics. The sender codes the message in the language and sends the message as signals along some communication channel (empirics). The chosen communication channel will have inherent properties which determine outcomes such as the speed with which communication can take place, and over what distance.

Secrecy

From Wikipedia, the free encyclopedia

For other uses, see Secrecy (disambiguation). "Secret", "Secrets", and "Covert" redirect here. For other uses, see Secret (disambiguation), Secrets (disambiguation), and Covert (disambiguation). "Clandestinity" redirects here. For the diriment impediment in the canon law of the Roman Catholic Church, see Clandestinity (canon law).


Secrecy is sometimes considered of life-or-death importance: a U.S. soldier at camp during World War II.

Secrecy (also called clandestinity or furtiveness) is the practice of hiding information from certain individuals or groups who do not have the "need to know", perhaps while sharing it with other individuals. That which is kept hidden is known as the secret.

Secrecy is often controversial, depending on the content or nature of the secret, the group or people keeping the secret, and the motivation for secrecy. Secrecy by government entities is often decried as excessive or as promoting poor operation; excessive revelation of information about individuals can conflict with the virtues of privacy and confidentiality. Secrecy is often contrasted with social transparency.

Contents
1 Secrecy in sociology and zoology
1.1 Secret sharing (anthropology)
2 Government secrecy
3 Corporate secrecy
4 Technology secrecy
5 Military secrecy
6 Views on secrecy
7 See also
8 References
9 External links

Secrecy in sociology and zoology

Main article: Sociological aspects of secrecy

Animals conceal the location of their den or nest from predators. Squirrels bury nuts to hide them and try to remember their locations later.

Humans attempt to consciously conceal aspects of themselves from others due to shame, or from fear of violence, rejection, harassment, loss of acceptance, or loss of employment. Humans may also attempt to conceal aspects of their own self which they are not capable of incorporating psychologically into their conscious being. Families sometimes maintain "family secrets", obliging family members never to discuss disagreeable issues concerning the family with outsiders, or sometimes even within the family. Many "family secrets" are maintained by using a mutually agreed-upon construct (an official family story) when speaking with outsiders. Agreement to maintain the secret is often coerced through "shaming" and appeals to family honor. The information may even be something as trivial as a recipe.

Secrets are sometimes kept to provide the pleasure of surprise. This includes keeping a surprise party secret, not revealing spoilers of a story, and not exposing how a magic trick is done.

Keeping one's strategy secret is important in many aspects of game theory.

Secret sharing (anthropology)

In anthropology, secret sharing is one way for men and women to establish traditional relations with other men and women. A narrative commonly used to describe this kind of behavior is Joseph Conrad's short story "The Secret Sharer".

Government secrecy

A burn bag and security classification stickers on a laptop computer, between U.S. President Barack Obama and Vice President Joe Biden during updates on Operation Geronimo, the mission against Osama bin Laden, in the Situation Room of the White House, May 1, 2011.

Governments often attempt to conceal information from other governments and the public. These state secrets can include weapon designs, military plans, diplomatic negotiation tactics, and secrets obtained illicitly from others ("intelligence"). Most nations have some form of Official Secrets Act (the Espionage Act in the U.S.) and classify material according to the level of protection needed (hence the term "classified information"). An individual needs a security clearance for access, and other protection methods, such as keeping documents in a safe, are stipulated.

Few people dispute the desirability of keeping Critical Nuclear Weapon Design Information secret, but many believe government secrecy to be excessive and too often employed for political purposes. Many countries have laws that attempt to limit government secrecy, such as the U.S. Freedom of Information Act and sunshine laws. Government officials sometimes leak information they are supposed to keep secret (for a 2005 example, see the Plame affair).

Secrecy in elections is a growing issue, particularly secrecy of vote counts on computerized vote-counting machines. While voting, citizens act in a unique sovereign or "owner" capacity (instead of being subjects of the laws, as is true outside of elections) in selecting their government servants. It is argued that secrecy is impermissible as against the public in the area of elections, where the government derives all of its power and taxing authority. In any event, permissible secrecy varies significantly with the context involved.

Corporate secrecy

Organizations, ranging from multi-national for-profit corporations to nonprofit charities, keep secrets for competitive advantage, to meet legal requirements, or, in some cases, to conceal nefarious behavior. New products under development, unique manufacturing techniques, and lists of customers are types of information protected by trade-secret laws. The patent system encourages inventors to publish information in exchange for a limited-time monopoly on its use, though patent applications are initially secret. Secret societies use secrecy as a way to attract members by creating a sense of importance.

Shell companies may be used to launder money from criminal activity, to finance terrorism, or to evade taxes. Registers of beneficial ownership aim at fighting corporate secrecy in that sense.

Other laws require organizations to keep certain information secret, such as medical records (HIPAA in the U.S.) or financial reports that are under preparation (to limit insider trading). Europe has particularly strict laws about database privacy.

In many countries, neoliberal reforms of government have included expanding the outsourcing of government tasks and functions to private businesses with the aim of improving efficiency and effectiveness in government administration. However, among the criticisms of these reforms is the claim that the pervasive use of "commercial-in-confidence" (or secrecy) clauses in contracts between government and private providers further limits public accountability of governments and prevents proper public scrutiny of the performance and probity of the private companies. Concerns have been raised that "commercial-in-confidence" is open to abuse because it can be deliberately used to hide corporate or government maladministration and even corruption.

Technology secrecy

See also: Full disclosure (computer security), Kerckhoffs' principle and security through obscurity

Preservation of secrets is one of the goals of information security. Techniques used include physical security and cryptography. The latter depends on the secrecy of cryptographic keys. Many believe that security technology can be more effective if the technology itself is not kept secret.

Information hiding is a design principle in software engineering. It is considered easier to verify software reliability if one can be sure that different parts of the program can only access (and therefore depend on) a known, limited amount of information.
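As a rough illustration (the Counter class below is invented for this sketch, not taken from any source), information hiding means exposing a narrow public interface while keeping internal state private by convention, so that other parts of a program can depend only on the interface:

```python
class Counter:
    """Exposes only increment() and value; the storage detail is hidden."""

    def __init__(self) -> None:
        self._count = 0  # leading underscore marks this as internal

    def increment(self) -> None:
        self._count += 1

    @property
    def value(self) -> int:
        # Callers depend on this read-only property,
        # not on how the count is actually stored.
        return self._count


c = Counter()
c.increment()
c.increment()
print(c.value)  # -> 2
```

Because callers never touch `_count` directly, the internal representation could change (say, to a logged or thread-safe counter) without affecting any code that uses the class.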

Military secrecy

See also: Military intelligence and Born secret

A military secret is information about martial affairs that is purposely not made available to the general public, and hence to any enemy, in order to gain an advantage, to avoid revealing a weakness, to avoid embarrassment, or to help in propaganda efforts. Most military secrets are tactical in nature, such as the strengths and weaknesses of weapon systems, tactics, training methods, plans, and the number and location of specific weapons. Some secrets involve information in broader areas, such as secure communications, cryptography, intelligence operations, and cooperation with third parties.

Views on secrecy

Excessive secrecy is often cited[1] as a source of much human conflict. One may have to lie in order to keep a secret, which can lead to psychological repercussions. The alternative, declining to answer when asked something, may suggest the answer and may therefore not always be suitable for keeping a secret; the other party may also insist on an answer. Nearly 2,500 years ago, Sophocles wrote, "Do nothing secretly; for Time sees and hears all things, and discloses all." And Gautama Siddhartha, the Buddha, once said, "Three things cannot long stay hidden: the sun, the moon and the truth."

Secrecy in art

Das Geheimnis (The Secret), Felix Nussbaum

A Stolen Interview, Edmund Blair Leighton

First secret confidence to Venus, François Jouffroy

A Secret from on High, Hippolyte Moulin

The Secret, Moritz Stifter

Communication

From Wikipedia, the free encyclopedia



Communication (from Latin commūnicāre, meaning "to share"[1]) is the purposeful activity of information exchange between two or more participants in order to convey or receive the intended meanings through a shared system of signs and semiotic rules.

Communication takes place inside and between three main subject categories: human beings, living organisms in general, and communication-enabled devices (for example sensor networks and control systems).[2] Communication in living organisms (studied in the field of biosemiotics) often occurs through visual, auditory, or biochemical means. Human communication is unique for its extensive use of language.

Contents
1 Nonverbal communication
2 Verbal communication
3 Written communication and its historical development
4 Business communication
5 Effective communication
6 Barriers to effective human communication
7 Nonhuman communication
7.1 Animals
7.2 Plants and fungi
7.3 Bacteria quorum sensing
8 Models of communication
9 Noise
10 Communication as academic discipline
11 See also
12 References
13 Further reading

Nonverbal communication

Main article: Nonverbal communication

Nonverbal communication describes the process of conveying meaning in the form of non-word messages. Examples of nonverbal communication include haptic communication, chronemic communication, gestures, body language, facial expression, eye contact, and how one dresses. Speech also contains nonverbal elements known as paralanguage, e.g. rhythm, intonation, tempo, and stress. Research has shown that up to 55% of human communication may occur through nonverbal facial expressions, and a further 38% through paralanguage.[3] Likewise, written texts include nonverbal elements such as handwriting style, the spatial arrangement of words, and the use of emoticons to convey emotion.

Verbal communication

Effective verbal or spoken communication depends on a number of factors and cannot be fully isolated from other important interpersonal skills such as non-verbal communication, listening skills, and clarification. Human language can be defined as a system of symbols (sometimes known as lexemes) and the grammars (rules) by which the symbols are manipulated. The word "language" also refers to common properties of languages. Language learning normally occurs most intensively during human childhood. Most of the thousands of human languages use patterns of sound or gesture for symbols which enable communication with others around them. Languages tend to share certain properties, although there are exceptions. There is no defined line between a language and a dialect. Constructed languages such as Esperanto, programming languages, and various mathematical formalisms are not necessarily restricted to the properties shared by human languages. Communication is a two-way process, not one-way.

Written communication and its historical development

Over time the forms of and ideas about communication have evolved through the continuing progression of technology. Advances include communications psychology and media psychology, an emerging field of study.

The progression of written communication can be divided into three "information communication revolutions":[4]

Written communication first emerged through the use of pictographs. The pictograms were made in stone, hence written communication was not yet mobile.

The next step occurred when writing began to appear on paper, papyrus, clay, wax, etc. with common alphabets. Communication became mobile.

The final stage is characterized by the transfer of information through controlled waves of electromagnetic radiation (i.e., radio, microwave, infrared) and other electronic signals.

Communication is thus a process by which meaning is assigned and conveyed in an attempt to create shared understanding. This process, which requires a vast repertoire of skills in interpersonal processing, listening, observing, speaking, questioning, analyzing, gesturing, and evaluating, enables collaboration and cooperation.[5]

Misunderstandings can be anticipated and resolved through formulations, questions and answers, paraphrasing, examples, and stories of strategic talk. Written communication can be clarified by planning follow-up talks on critical written communication as part of the everyday way of doing business. A few minutes spent talking in the present can save valuable time later by avoiding misunderstandings in advance. A frequent method for this purpose is reiterating what one heard in one's own words and asking the other person whether that really was what was meant.[6]

Business communication

Main article: Business communication

Business communications is a term for a wide variety of activities including but not limited to: strategic communications planning, media relations, public relations (which can include social media, broadcast and written communications, and more), brand management, reputation management, speech-writing, customer-client relations, and internal/employee communications.

Companies with limited resources may choose to engage in only a few of these activities, while larger organizations may employ a full spectrum of communications. Since it is difficult to develop such a broad range of skills, communications professionals often specialize in one or two of these areas but usually have at least a working knowledge of most of them. By far the most important qualifications communications professionals can possess are excellent writing ability, good 'people' skills, and the capacity to think critically and strategically.

Effective communication

Effective communication occurs when a desired thought is the result of intentional or unintentional information sharing, which is interpreted between multiple entities and acted on in a desired way. This effect also ensures that messages are not distorted during the communication process. Effective communication should generate the desired effect and maintain it, with the potential to increase the effect of the message. Therefore, effective communication serves the purpose for which it was planned or designed. Possible purposes might be to elicit change, generate action, create understanding, inform, or communicate a certain idea or point of view. When the desired effect is not achieved, factors such as barriers to communication are explored, with the intention of discovering how the communication has been ineffective.

Barriers to effective human communication

Barriers to effective communication can retard or distort the message and intention of the message being conveyed, which may result in failure of the communication process or an undesirable effect. These include filtering, selective perception, information overload, emotions, language, silence, communication apprehension, gender differences, and political correctness.[7]

This also includes a lack of "knowledge-appropriate" communication, which occurs when a person uses ambiguous or complex legal words, medical jargon, or descriptions of a situation or environment that are not understood by the recipient.

Physical barriers. Physical barriers are often due to the nature of the environment. An example is the natural barrier which exists if staff are located in different buildings or on different sites. Likewise, poor or outdated equipment, particularly the failure of management to introduce new technology, may also cause problems. Staff shortages are another factor which frequently causes communication difficulties for an organization.

System design. System-design faults refer to problems with the structures or systems in place in an organization. Examples might include an organizational structure which is unclear and therefore makes it confusing to know whom to communicate with. Other examples include inefficient or inappropriate information systems, a lack of supervision or training, and a lack of clarity in roles and responsibilities, which can leave staff uncertain about what is expected of them.

Attitudinal barriers. Attitudinal barriers come about as a result of problems with staff in an organization. They may be brought about, for example, by poor management, lack of consultation with employees, personality conflicts which can result in people delaying or refusing to communicate, the personal attitudes of individual employees due to lack of motivation or dissatisfaction at work (perhaps caused by insufficient training to enable them to carry out particular tasks), or simply resistance to change due to entrenched attitudes and ideas.

Ambiguity of words/phrases. Words that sound the same but have different meanings can convey a different message altogether. The communicator must therefore ensure that the receiver gets the intended meaning; it is better to avoid such words where alternatives exist.

Individual linguistic ability. The use of jargon, or of difficult or inappropriate words, can prevent recipients from understanding the message. Poorly explained or misunderstood messages can also result in confusion. However, research in communication has shown that confusion can lend legitimacy to research when persuasion fails.[8][9]

Physiological barriers. These may result from individuals' personal discomfort, caused, for example, by ill health, poor eyesight, or hearing difficulties.

Cultural differences. These may result from the cultural differences of communities around the world, within an individual country (tribal/regional differences, dialects, etc.), between religious groups, and in organisations, where companies, teams, and units may have different expectations, norms, and idiolects. Families and family groups may also experience the effect of cultural barriers to communication within and between different family members or groups. For example, words, colours, and symbols have different meanings in different cultures: in most parts of the world nodding the head means agreement and shaking the head means no, but this is not true everywhere.[10]

Nonhuman communication

See also: Biocommunication (science), Interspecies communication and Biosemiotics

Every information exchange between living organisms (i.e. transmission of signals that involves a living sender and receiver) can be considered a form of communication, and even primitive creatures such as corals are competent to communicate. Nonhuman communication also includes cell signaling, cellular communication, and chemical transmission between primitive organisms like bacteria, and within the plant and fungal kingdoms.

Animals

The broad field of animal communication encompasses most of the issues in ethology. Animal communication can be defined as any behavior of one animal that affects the current or future behavior of another animal. The study of animal communication, called zoosemiotics (distinguishable from anthroposemiotics, the study of human communication), has played an important part in the development of ethology, sociobiology, and the study of animal cognition. Animal communication, and indeed the understanding of the animal world in general, is a rapidly growing field, and even in the 21st century a great share of prior understanding in diverse areas such as personal symbolic name use, animal emotions, animal culture and learning, and even sexual conduct, long thought to be well understood, has been revolutionized.

Plants and fungi

Communication is observed within the plant organism, i.e. within plant cells and between plant cells, between plants of the same or related species, and between plants and non-plant organisms, especially in the root zone. Plant roots communicate with rhizome bacteria, fungi, and insects within the soil. These interactions are governed by syntactic, pragmatic, and semantic rules, and are possible because of the decentralized "nervous system" of plants. The original meaning of the word "neuron" in Greek is "vegetable fiber", and recent research has shown that most of the microorganism-plant communication processes are neuron-like.[11]

Plants also communicate via volatiles when exposed to herbivory attack, thus warning neighboring plants.[12] In parallel they produce other volatiles to attract parasites which attack these herbivores. In stress situations plants can overwrite the genomes they inherited from their parents and revert to those of their grand- or great-grandparents.

Fungi communicate to coordinate and organize their growth and development, such as the formation of mycelia and fruiting bodies. Fungi communicate with their own and related species, as well as with non-fungal organisms, in a great variety of symbiotic interactions, especially with bacteria, unicellular eukaryotes, plants, and insects, through biochemicals of biotic origin. The biochemicals trigger the fungal organism to react in a specific manner, while the same chemical molecules, if not part of biotic messages, do not trigger a reaction. This implies that fungal organisms can differentiate between molecules taking part in biotic messages and similar molecules that are irrelevant in the situation. So far, five different primary signalling molecules are known to coordinate different behavioral patterns such as filamentation, mating, growth, and pathogenicity. Behavioral coordination and production of signaling substances are achieved through interpretation processes that enable the organism to distinguish self from non-self, a biotic indicator, and biotic messages from similar, related, or non-related species, and even to filter out "noise", i.e. similar molecules without biotic content.[13]

Bacteria quorum sensing

Communication is not a tool used only by humans, plants, and animals; it is also used by microorganisms such as bacteria. The process is called quorum sensing. Through quorum sensing, bacteria are able to sense the density of cells and regulate gene expression accordingly. This can be seen in both gram-positive and gram-negative bacteria, and was first observed by Fuqua et al. in marine microorganisms like V. harveyi and V. fischeri.[14]
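As a loose caricature (the threshold value below is invented for illustration, not a measured biological figure), quorum sensing behaves like a density-dependent switch: gene expression changes once the population crosses a critical density.

```python
def quorum_active(cell_density: float, threshold: float = 1e7) -> bool:
    """Return True when population density (a proxy for autoinducer
    concentration) crosses the threshold, switching 'gene expression' on."""
    return cell_density >= threshold


# Sparse population: the regulated genes stay off.
print(quorum_active(1e4))   # False
# Dense population: the quorum is reached and the genes switch on.
print(quorum_active(5e8))   # True
```

Real quorum sensing is of course graded and biochemical rather than a single hard threshold; the sketch only captures the density-dependent logic described above.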

Models of communication

Main article: Models of communication

Shannon and Weaver Model of Communication

Communication major dimensions scheme

Linear Communication Model

The first major model for communication was introduced by Claude Shannon and Warren Weaver for Bell Laboratories in 1949.[15] The original model was designed to mirror the functioning of radio and telephone technologies. Their initial model consisted of three primary parts: sender, channel, and receiver. The sender was the part of a telephone a person spoke into, the channel was the telephone itself, and the receiver was the part of the phone where one could hear the other person. Shannon and Weaver also recognized that there is often static that interferes with listening to a telephone conversation, which they deemed noise.

In a simple model, often referred to as the transmission model or standard view of communication, information or content (e.g. a message in natural language) is sent in some form (such as spoken language) from an emisor/sender/encoder to a destination/receiver/decoder. This common conception simply views communication as a means of sending and receiving information. The strengths of this model are simplicity, generality, and quantifiability. Claude Shannon and Warren Weaver structured this model on the following elements:

An information source, which produces a message.

A transmitter, which encodes the message into signals

A channel, to which signals are adapted for transmission

A noise source, which distorts the signal while it propagates through the channel

A receiver, which 'decodes' (reconstructs) the message from the signal.

A destination, where the message arrives.
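The elements above can be sketched as a toy pipeline, here in Python (the function names and the code-point encoding are illustrative choices, not part of Shannon and Weaver's formalism or any standard library):

```python
import random


def transmitter(message: str) -> list[int]:
    """Encode the message into signals (here, Unicode code points)."""
    return [ord(ch) for ch in message]


def channel(signals: list[int], noise_rate: float = 0.0) -> list[int]:
    """Carry the signals; a noise source may distort some of them."""
    return [s + 1 if random.random() < noise_rate else s for s in signals]


def receiver(signals: list[int]) -> str:
    """Decode (reconstruct) the message from the received signals."""
    return "".join(chr(s) for s in signals)


# information source -> transmitter -> channel -> receiver -> destination
message = "hello"
delivered = receiver(channel(transmitter(message), noise_rate=0.0))
print(delivered)  # a noiseless channel preserves the message: "hello"
```

Raising `noise_rate` above zero corrupts some signals in transit, which is exactly the "noise source" in the model: the receiver then reconstructs a message that differs from the one sent.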

Shannon and Weaver argued that there were three levels of problems for communication within this theory.

The technical problem: how accurately can the message be transmitted?

The semantic problem: how precisely is the meaning 'conveyed'?

The effectiveness problem: how effectively does the received meaning affect behavior?

Daniel Chandler[16] critiques the transmission model by stating that:

It assumes communicators are isolated individuals.
It makes no allowance for differing purposes.
It makes no allowance for differing interpretations.
It makes no allowance for unequal power relations.
It makes no allowance for situational contexts.

In 1960, David Berlo expanded on Shannon and Weaver's (1949) linear model of communication and created the SMCR Model of Communication.[17] The Sender-Message-Channel-Receiver Model of communication separated the model into clear parts and has been expanded upon by other scholars.

Communication is usually described along a few major dimensions: message (what type of things are communicated), source/emisor/sender/encoder (by whom), form (in which form), channel (through which medium), and destination/receiver/target/decoder (to whom). Wilbur Schramm (1954) also indicated that we should examine the impact that a message has (both desired and undesired) on the target of the message.[18] Between parties, communication includes acts that confer knowledge and experiences, give advice and commands, and ask questions. These acts may take many forms, in one of the various manners of communication. The form depends on the abilities of the group communicating. Together, communication content and form make messages that are sent towards a destination. The target can be oneself, another person or being, or another entity (such as a corporation or group of beings).

Communication can be seen as processes of information transmission governed by three levels of semiotic rules:

Pragmatic (concerned with the relations between signs/expressions and their users)

Semantic (study of relationships between signs and symbols and what they represent), and

Syntactic (formal properties of signs and symbols).

Therefore, communication is social interaction where at least two interacting agents share a common set of signs and a common set of semiotic rules. This commonly held rule in some sense ignores autocommunication, including intrapersonal communication via diaries or self-talk, both secondary phenomena that followed the primary acquisition of communicative competences within social interactions.

In light of these weaknesses, Barnlund (2008) proposed a transactional model of communication.[19] The basic premise of the transactional model of communication is that individuals are simultaneously engaging in the sending and receiving of messages.

In a slightly more complex form, a sender and a receiver are linked reciprocally. This second attitude toward communication, referred to as the constitutive model or constructionist view, focuses on how an individual communicates as the determining factor in the way the message will be interpreted. Communication is viewed as a conduit: a passage in which information travels from one individual to another, this information becoming separate from the communication itself. A particular instance of communication is called a speech act. The sender's personal filters and the receiver's personal filters may vary depending upon different regional traditions, cultures, or gender, which may alter the intended meaning of message contents. In the presence of "communication noise" on the transmission channel (air, in this case), reception and decoding of content may be faulty, and thus the speech act may not achieve the desired effect. One problem with this encode-transmit-receive-decode model is that the processes of encoding and decoding imply that the sender and receiver each possess something that functions as a codebook, and that these two codebooks are, at the very least, similar if not identical. Although something like codebooks is implied by the model, they are nowhere represented in it, which creates many conceptual difficulties.
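The codebook idea can be made concrete with a minimal sketch (the dictionaries below are invented for illustration): decoding succeeds only where the sender's and receiver's books agree, and a code point missing from the receiver's book cannot be reconstructed into meaning.

```python
# Sender and receiver each hold a codebook. Here the receiver's copy
# is incomplete, standing in for "similar but not identical" codebooks.
sender_book = {"yes": 1, "no": 2, "maybe": 3}
receiver_book = {1: "yes", 2: "no"}  # "maybe" is not shared


def encode(word: str) -> int:
    """The sender maps a word to a signal via its codebook."""
    return sender_book[word]


def decode(signal: int) -> str:
    """The receiver maps a signal back; unshared codes decode to noise."""
    return receiver_book.get(signal, "<undecodable>")


print(decode(encode("yes")))    # shared code: meaning is recovered
print(decode(encode("maybe")))  # unshared code: meaning is lost
```

The sketch shows the conceptual point in the paragraph above: the model's success silently depends on two codebooks that the model itself never represents.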

Theories of coregulation describe communication as a creative and dynamic continuous process, rather than a discrete exchange of information. The Canadian media scholar Harold Innis had the theory that people use different types of media to communicate, and that which one they choose to use will offer different possibilities for the shape and durability of society (Wark, McKenzie 1997). His famous example is ancient Egypt, which built itself out of two media with very different properties: stone and papyrus. Papyrus is what he called "space binding": it made possible the transmission of written orders across space and empires, enabling the waging of distant military campaigns and colonial administration. Stone is "time binding": through the construction of temples and pyramids, authority could be sustained from generation to generation, and through this medium rulers could change and shape communication in their society (Wark, McKenzie 1997).

Noise

In any communication model, noise is interference with the decoding of messages sent over a channel by an encoder. There are many examples of noise:

Environmental noise. Noise that physically disrupts communication, such as standing next to loud speakers at a party, or the noise from a construction site next to a classroom making it difficult to hear the professor.

Physiological-impairment noise. Physical maladies that prevent effective communication, such as actual deafness or blindness preventing messages from being received as they were intended.

Semantic noise. Different interpretations of the meanings of certain words. For example, the word "weed" can be interpretedas an undesirable plant in a yard, or as a euphemism for marijuana.

Syntactical noise. Mistakes in grammar can disrupt communication, such as abrupt changes in verb tense during a sentence.

Organizational noise. Poorly structured communication can prevent the receiver from accurate interpretation. For example, unclear and badly stated directions can make the receiver even more lost.

Cultural noise. Stereotypical assumptions can cause misunderstandings, such as unintentionally offending a non-Christian person by wishing them a "Merry Christmas".

Psychological noise. Certain attitudes can also make communication difficult. For instance, great anger or sadness may cause someone to lose focus on the present moment. Disorders such as autism may also severely hamper effective communication.[20]

Communication as academic discipline

Main article: Communication studies

Espionage

Espionage or, casually, spying involves a spy ring, government, company, or individual obtaining information considered secret or confidential without the permission of the holder of the information.[1] Espionage is inherently clandestine, as it is taken for granted that it is unwelcome and in many cases illegal and punishable by law. It is a subset of "intelligence gathering", which otherwise may be conducted from public sources and using perfectly legal and ethical means. It is crucial to distinguish espionage from "intelligence" gathering, as the latter does not necessarily involve espionage, but often collates open-source information.

Espionage is often part of an institutional effort by a government or commercial concern. However, the term is generally associated with state spying on potential or actual enemies, primarily for military purposes. Spying involving corporations is known as industrial espionage.

One of the most effective ways to gather data and information about the enemy (or potential enemy) is by infiltrating the enemy's ranks. This is the job of the spy (espionage agent). Spies can bring back all sorts of information concerning the size and strength of enemy forces. They can also find dissidents within the enemy's forces and influence them to defect. In times of crisis, spies can also be used to steal technology and to sabotage the enemy in various ways. Counterintelligence operatives can feed false information to enemy spies, protecting important domestic secrets and preventing attempts at subversion. Nearly every country has very strict laws concerning espionage, and the penalty for being caught is often severe. However, the benefits that can be gained through espionage are generally great enough that most governments and many large corporations make use of it to varying degrees.

Further information on clandestine HUMINT (human intelligence) information collection techniques is available, including discussions of operational techniques, asset recruiting, and the tradecraft used to collect this information.

Contents

1 History

1.1 Early history

1.2 Modern development

1.2.1 Military Intelligence

1.2.2 Naval Intelligence

1.2.3 Civil intelligence agencies

1.2.4 Counter-intelligence

1.3 First World War

1.3.1 Codebreaking

1.4 Russian Revolution

1.5 Today

2 Targets of espionage

3 Methods and terminology

3.1 Technology and techniques

4 Organization

5 Industrial espionage

6 Agents in espionage

7 Law

8 Use against non-spies

9 Espionage laws in the UK

9.1 Government intelligence laws and its distinction from espionage

10 Military conflicts

11 List of famous spies

11.1 World War I

11.2 World War II

11.3 Post World War II

12 Spy fiction

12.1 World War II: 1939–1945

12.2 Cold War era: 1945–1991

13 See also

14 References

15 Further reading

16 External links

History

Early history

A bamboo version of The Art of War, written by Sun-Tzu and containing advice on espionage tactics.

Events involving espionage are well documented throughout history. The ancient writings of Chinese and Indian military strategists such as Sun-Tzu and Chanakya contain information on deception and subversion. Chanakya's student Chandragupta Maurya, founder of the Maurya Empire in India, made use of assassinations, spies and secret agents, which are described in Chanakya's Arthasastra. The ancient Egyptians had a thoroughly developed system for the acquisition of intelligence, and the Hebrews used spies as well, as in the story of Rahab. Spies were also prevalent in the Greek and Roman empires.[2] During the 13th and 14th centuries, the Mongols relied heavily on espionage in their conquests in Asia and Europe. Feudal Japan often used ninja to gather intelligence.

The Aztecs used pochtecas, people in charge of commerce, as spies and diplomats; they had diplomatic immunity. Along with the pochteca, before a battle or war, secret agents, quimitchin, were sent to spy amongst enemies, usually wearing the local costume and speaking the local language, techniques similar to those of modern secret agents.[3]

Many modern espionage methods were established by Francis Walsingham in Elizabethan England.[4] Walsingham's staff in England included the cryptographer Thomas Phelippes, who was an expert in deciphering letters and forgery, and Arthur Gregory, who was skilled at breaking and repairing seals without detection.[5]

In 1585, Mary, Queen of Scots was placed in the custody of Sir Amias Paulet, who was instructed to open and read all of Mary's clandestine correspondence.[5] In a successful attempt to entrap her, Walsingham arranged a single exception: a covert means for Mary's letters to be smuggled in and out of Chartley in a beer keg. Mary was misled into thinking these secret letters were secure, while in reality they were deciphered and read by Walsingham's agents.[5] He succeeded in intercepting letters that indicated a conspiracy to displace Elizabeth I with Mary, Queen of Scots.

In foreign intelligence, Walsingham's extensive network of "intelligencers", who passed on general news as well as secrets, spanned Europe and the Mediterranean.[5] While foreign intelligence was a normal part of the principal secretary's activities, Walsingham brought to it flair and ambition, and large sums of his own money.[6] He cast his net more widely than others had done previously: expanding and exploiting links across the continent as well as in Constantinople and Algiers, and building and inserting contacts among Catholic exiles.[5]

Modern development

Political cartoon depicting the Afghan Emir Sher Ali with his "friends" the Russian Bear and British Lion (1878). The Great Game saw the rise of systematic espionage and surveillance throughout the region by both powers.

Modern tactics of espionage and dedicated government intelligence agencies were developed over the course of the late 19th century. A key background to this development was the Great Game, a period denoting the strategic rivalry and conflict that existed between the British Empire and the Russian Empire throughout Central Asia. To counter Russian ambitions in the region and the potential threat it posed to the British position in India, a system of surveillance, intelligence and counterintelligence was built up in the Indian Civil Service. The existence of this shadowy conflict was popularised in Rudyard Kipling's famous spy book, Kim, where he portrayed the Great Game (a phrase he popularised) as an espionage and intelligence conflict that 'never ceases, day or night'.

Although the techniques originally used were distinctly amateurish - British agents would often pose unconvincingly as botanists or archaeologists - more professional tactics and systems were slowly put in place. In many respects, it was here that a modern intelligence apparatus with permanent bureaucracies for internal and foreign infiltration and espionage was first developed. A pioneering cryptographic unit was established as early as 1844 in India, which achieved some important successes in decrypting Russian communications in the area.[7]

The establishment of dedicated intelligence organizations was directly linked to the colonial rivalries between the major Europeanpowers and the accelerating development of military technology.

An early source of military intelligence was the diplomatic system of military attachés (an officer attached to the diplomatic service operating through the embassy in a foreign country), that became widespread in Europe after the Crimean War. Although officially restricted to a role of transmitting openly received information, they were soon being used to clandestinely gather confidential information and in some cases even to recruit spies and to operate de facto spy rings.

Military Intelligence

Seal of the Evidenzbureau, military intelligence service of the Austrian Empire.

Shaken by the revolutionary years 1848-1849, the Austrian Empire founded the Evidenzbureau in 1850 as the first permanent military intelligence service. It was first used in the 1859 Austro-Sardinian war and the 1866 campaign against Prussia, albeit with little success. The bureau collected intelligence of military relevance from various sources into daily reports to the Chief of Staff (Generalstabschef) and weekly reports to Emperor Franz Joseph.

Sections of the Evidenzbureau were assigned different regions; the most important one was aimed against Russia.

During the Crimean War, the Topographical & Statistic Department (T&SD) was established within the British War Office as an embryonic military intelligence organization. The department initially focused on the accurate mapmaking of strategically sensitive locations and the collation of militarily relevant statistics. After the deficiencies in the British army's performance during the war became known, a large-scale reform of army institutions was overseen by Edward Cardwell. As part of this, the T&SD was reorganized as the Intelligence Branch of the War Office in 1873 with the mission to "collect and classify all possible information relating to the strength, organization etc. of foreign armies... to keep themselves acquainted with the progress made by foreign countries in military art and science..."[8]

The French Ministry of War authorized the creation of the Deuxième Bureau on June 8, 1871, a service charged with performing "research on enemy plans and operations."[9] This was followed a year later by the creation of a military counter-espionage service. It was this latter service that was discredited through its actions over the notorious Dreyfus Affair, where a French Jewish officer was falsely accused of handing over military secrets to the Germans. As a result of the political division that ensued, responsibility for counter-espionage was moved to the civilian control of the Ministry of the Interior.

Field Marshal Helmuth von Moltke established a military intelligence unit, Abteilung (Section) IIIb, in the German General Staff in 1889, which steadily expanded its operations into France and Russia. The Italian Ufficio Informazioni del Commando Supremo was put on a permanent footing in 1900. After Russia's defeat in the Russo-Japanese War of 1904-05, Russian military intelligence was reorganized under the 7th Section of the 2nd Executive Board of the great imperial headquarters.[10]

Naval Intelligence

It was not just the army that felt a need for military intelligence. Soon, naval establishments were demanding similar capabilities from their national governments to allow them to keep abreast of technological and strategic developments in rival countries.

The Naval Intelligence Division was set up as the independent intelligence arm of the British Admiralty in 1882 (initially as the Foreign Intelligence Committee) and was headed by Captain William Henry Hall.[11] The division was initially responsible for fleet mobilization and war plans as well as foreign intelligence collection; in the 1900s two further responsibilities - issues of strategy and defence and the protection of merchant shipping - were added.

Naval intelligence originated in the same year in the US and was founded by the Secretary of the Navy, William H. Hunt, "...for the purpose of collecting and recording such naval information as may be useful to the Department in time of war, as well as in peace." This was followed in October 1885 by the Military Information Division, the first standing military intelligence agency of the United States, with the duty of collecting military data on foreign nations.[12]

In 1900, the Imperial German Navy established the Nachrichten-Abteilung, which was devoted to gathering intelligence on Britain. The navies of Italy, Russia and Austria-Hungary set up similar services as well.

Civil intelligence agencies

William Melville helped establish the first independent intelligence agency, the British Secret Service, and was appointed as its first chief.

Integrated intelligence agencies run directly by governments were also established. The British Secret Service Bureau was founded in 1909 as the first independent and interdepartmental agency fully in control over all government espionage activities.

At a time of widespread and growing anti-German feeling and fear, plans were drawn up for an extensive offensive intelligence system to be used as an instrument in the event of a European war. Due to intense lobbying from William Melville, and after he obtained German mobilization plans and proof of their financial support to the Boers, the government authorized the creation of a new intelligence section in the War Office, MO3 (subsequently redesignated MO5), headed by Melville, in 1903. Working under cover from a flat in London, Melville ran both counterintelligence and foreign intelligence operations, capitalizing on the knowledge and foreign contacts he had accumulated during his years running Special Branch.

Due to its success, the Government Committee on Intelligence, with support from Richard Haldane and Winston Churchill, established the Secret Service Bureau in 1909. It consisted of nineteen military intelligence departments - MI1 to MI19, but MI5 and MI6 came to be the most recognized as they are the only ones to have remained active to this day.

The Bureau was a joint initiative of the Admiralty, the War Office and the Foreign Office to control secret intelligence operations in the UK and overseas, particularly concentrating on the activities of the Imperial German Government. Its first director was Captain Sir George Mansfield Smith-Cumming, alias "C".[13] In 1910, the bureau was split into naval and army sections which, over time, specialised in foreign espionage and internal counter-espionage activities respectively. The Secret Service initially focused its resources on gathering intelligence on German shipbuilding plans and operations. Espionage activity in France was consciously refrained from, so as not to jeopardize the burgeoning alliance between the two nations.

For the first time, the government had access to a peace-time, centralized independent intelligence bureaucracy with indexed registries and defined procedures, as opposed to the more ad hoc methods used previously. Instead of a system whereby rival departments and military services would work on their own priorities with little to no consultation or cooperation with each other, the newly established Secret Intelligence Service was interdepartmental, and submitted its intelligence reports to all relevant government departments.[14]

Counter-intelligence

The Okhrana was founded in 1880 and was tasked with countering enemy espionage. St. Petersburg Okhrana group photo, 1905.

As espionage became more widely used, it became imperative to expand the role of existing police and internal security forces into detecting and countering foreign spies. The Austro-Hungarian Evidenzbureau was entrusted with this role from the late 19th century, countering the actions of the Pan-Slavist movement operating out of Serbia.

As mentioned above, after the fallout from the Dreyfus Affair in France, responsibility for military counter-espionage was passed in 1899 to the Sûreté générale - an agency originally responsible for order enforcement and public safety - and overseen by the Ministry of the Interior.[9]

The Okhrana[15] was initially formed in 1880 to combat political terrorism and left-wing revolutionary activity throughout the Russian Empire, but was also tasked with countering enemy espionage.[16] Its main concern was the activities of revolutionaries, who often worked and plotted subversive actions from abroad. It created an antenna in Paris run by Pyotr Rachkovsky to monitor their activities. The agency used many methods to achieve its goals, including covert operations, undercover agents, and "perlustration" — the interception and reading of private correspondence. The Okhrana became notorious for its use of agents provocateurs, who often succeeded in penetrating the activities of revolutionary groups, including the Bolsheviks.[17]

In Britain, the Secret Service Bureau was split into a foreign and counter-intelligence domestic service in 1910. The latter was headed by Sir Vernon Kell and was originally aimed at calming public fears of large-scale German espionage.[18] As the Service was not authorized with police powers, Kell liaised extensively with the Special Branch of Scotland Yard (headed by Basil Thomson), and succeeded in disrupting the work of Indian revolutionaries collaborating with the Germans during the war.

First World War

Cover of the Petit Journal of 20 January 1895, covering the arrest of Captain Alfred Dreyfus for espionage and treason. The case convulsed France and raised public awareness of the rapidly developing world of espionage.

By the outbreak of the First World War all the major powers had highly sophisticated structures in place for the training and handling of spies and for the processing of the intelligence information obtained through espionage. The figure and mystique of the spy had also developed considerably in the public eye. The Dreyfus Affair, which involved international espionage and treason, contributed much to public interest in espionage.[19][20]

The spy novel emerged as a distinct genre in the late 19th century, and dealt with themes such as colonial rivalry, the growing threat of conflict in Europe, and the revolutionary and anarchist domestic threat. The "spy novel" was defined by The Riddle of the Sands (1903) by British author Robert Erskine Childers, which played on public fears of a German plan to invade Britain (the nefarious plot is uncovered by an amateur spy). Its success was followed by a flood of imitators, including William Le Queux and E. Phillips Oppenheim.

It was during the War that modern espionage techniques were honed and refined, as all belligerent powers utilized their intelligence services to obtain military intelligence, commit acts of sabotage and carry out propaganda. As the progress of the war became static and armies dug down in trenches, the utility of cavalry reconnaissance became very limited.[21]

Information gathered at the battlefront from the interrogation of prisoners-of-war was only capable of giving insight into local enemy actions of limited duration. Obtaining high-level information on the enemy's strategic intentions, its military capabilities and deployment required undercover spy rings operating deep in enemy territory. On the Western Front the advantage lay with the Western Allies, as throughout most of the war German armies occupied Belgium and parts of northern France, providing a large and disaffected population that could be organized into collecting and transmitting vital intelligence.[21]

British and French intelligence services recruited Belgian or French refugees and infiltrated these agents behind enemy lines via the Netherlands, a neutral country. Many collaborators were then recruited from the local population, who were mainly driven by patriotism and hatred of the harsh German occupation. By the end of the war, over 250 networks had been created, comprising more than 6,400 Belgian and French citizens. These rings concentrated on infiltrating the German railway network so that the Allies could receive advance warning of strategic troop and ammunition movements.[21]

Mata Hari was a famous Dutch dancer who was executed on charges of espionage for Germany. Pictured at her arrest.

The most effective such ring in German-occupied Belgium was the Dame Blanche ("White Lady") network, founded in 1916 by Walthère Dewé as an underground intelligence network. It supplied as much as 75% of the intelligence collected from occupied Belgium and northern France to the Allies. By the end of the war, its 1,300 agents covered all of occupied Belgium, northern France and, through a collaboration with Louise de Bettignies' network, occupied Luxembourg. The network was able to provide a crucial few days' warning before the launch of the German 1918 Spring Offensive.[22]

German intelligence was only ever able to recruit a very small number of spies. These were trained at an academy run by the Kriegsnachrichtenstelle in Antwerp and headed by Elsbeth Schragmüller, known as "Fräulein Doktor". These agents were generally isolated and unable to rely on a large support network for the relaying of information. The most famous German spy was Margaretha Geertruida Zelle, an exotic Dutch dancer with the stage name Mata Hari. As a Dutch subject, she was able to cross national borders freely. In 1916, she was arrested and brought to London, where she was interrogated at length by Sir Basil Thomson, Assistant Commissioner at New Scotland Yard. She eventually claimed to be working for French intelligence. In fact, she had entered German service from 1915, and sent her reports to the mission in the German embassy in Madrid.[23] In January 1917, the German military attaché in Madrid transmitted radio messages to Berlin describing the helpful activities of a German spy code-named H-21. French intelligence agents intercepted the messages and, from the information they contained, identified H-21 as Mata Hari. She was executed by firing squad on 15 October 1917.

German spies in Britain did not meet with much success; the German spy ring operating in Britain was successfully disrupted by MI5 under Vernon Kell on the day after the declaration of the war. Home Secretary Reginald McKenna announced that "within the last twenty-four hours no fewer than twenty-one spies, or suspected spies, have been arrested in various places all over the country, chiefly in important military or naval centres, some of them long known to the authorities to be spies".[24][25]

One exception was Jules C. Silber, who evaded MI5 investigations and obtained a position at the censor's office in 1914. Using mailed window envelopes that had already been stamped and cleared, he was able to forward microfilm to Germany that contained increasingly important information. Silber was regularly promoted and ended up in the position of chief censor, which enabled him to analyze all suspect documents.[26]

The British economic blockade of Germany was made effective through the support of spy networks operating out of the neutral Netherlands. Points of weakness in the naval blockade were determined by agents on the ground and relayed back to the Royal Navy. The blockade led to severe food deprivation in Germany and was a major cause of the collapse of the Central Powers' war effort in 1918.[27]

Codebreaking

The interception and decryption of the Zimmermann telegram by Room 40 at the Admiralty was of pivotal importance for the outcome of the war.

Two new methods for intelligence collection were developed over the course of the war: aerial reconnaissance and photography, and the interception and decryption of radio signals.[27] The British rapidly built up great expertise in the newly emerging field of signals intelligence and codebreaking.

In 1911, a committee of the Committee of Imperial Defence on cable communications concluded that in the event of war with Germany, German-owned submarine cables should be destroyed. On the night of 3 August 1914, the cable ship Alert located and cut Germany's five trans-Atlantic cables, which ran down the English Channel. Soon after, the six cables running between Britain and Germany were cut.[28] As an immediate consequence, there was a significant increase in messages sent via cables belonging to other countries, and in messages sent by wireless. These could now be intercepted, but codes and ciphers were naturally used to hide the meaning of the messages, and neither Britain nor Germany had any established organisations to decode and interpret them. At the start of the war, the navy had only one wireless station for intercepting messages, at Stockton. However, installations belonging to the Post Office and the Marconi Company, as well as private individuals who had access to radio equipment, began recording messages from Germany.[29]

Room 40, formed in October 1914 under Director of Naval Education Alfred Ewing, was the section in the British Admiralty most identified with the British cryptanalysis effort during the First World War. The basis of Room 40's operations evolved around a German naval codebook, the Signalbuch der Kaiserlichen Marine (SKM), and around maps (containing coded squares), which were obtained from three different sources in the early months of the war. Ewing directed Room 40 until May 1917, when direct control passed to Captain (later Admiral) Reginald 'Blinker' Hall, assisted by William Milbourne James.[30]
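A naval codebook of this kind works by pure lookup rather than computation, which is why a captured copy compromises all traffic at once: anyone holding the book can decode by reverse lookup. A toy sketch of the idea (the entries below are invented for illustration and bear no relation to the real SKM, which mapped signals to code groups):

```python
# Toy codebook cipher illustrating why a captured codebook is fatal:
# decoding is just the inverse lookup. Entries are hypothetical.

CODEBOOK = {"fleet": "7345", "sails": "1128", "dawn": "9902"}
REVERSE = {group: word for word, group in CODEBOOK.items()}

def encode_message(plaintext):
    # Each word is replaced by its fixed code group from the book.
    return " ".join(CODEBOOK[w] for w in plaintext.split())

def decode_message(codetext):
    # Anyone with the book inverts it and reads the traffic directly.
    return " ".join(REVERSE[g] for g in codetext.split())

msg = encode_message("fleet sails dawn")
print(msg)                  # 7345 1128 9902
print(decode_message(msg))  # fleet sails dawn
```

Because the mapping is static, routine traffic (such as the daily position reports mentioned below) also leaks patterns even before full decoding, a weakness Room 40 exploited.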

A similar organisation began in the Military Intelligence department of the War Office, which became known as MI1b, and Colonel Macdonagh proposed that the two organisations should work together, decoding messages concerning the Western Front. A sophisticated interception system (known as the 'Y' service), together with the post office and Marconi stations, grew rapidly to the point where it could intercept almost all official German messages.[29]

As the number of intercepted messages increased, it became necessary to decide which were unimportant and should just be logged, and which should be passed on outside Room 40. The German fleet was in the habit each day of wirelessing the exact position of each ship and giving regular position reports when at sea. It was possible to build up a precise picture of the normal operation of the High Seas Fleet, and indeed to infer from the routes they chose where defensive minefields had been placed and where it was safe for ships to operate. Whenever a change to the normal pattern was seen, it immediately signalled that some operation was about to take place, and a warning could be given. Detailed information about submarine movements was also available.[31]

Both the British and German interception services began to experiment with direction-finding radio equipment at the start of 1915. Captain H. J. Round, working for Marconi, had been carrying out experiments for the army in France, and Hall instructed him to build a direction-finding system for the navy. Stations were built along the coast, and by May 1915 the Admiralty was able to track German submarines crossing the North Sea. Some of these stations also acted as 'Y' stations to collect German messages, but a new section was created within Room 40 to plot the positions of ships from the directional reports. The German fleet made no attempt to restrict its use of wireless until 1917, and then only in response to perceived British use of direction finding, not because it believed messages were being decoded.[32]
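The position plotting described above amounts to simple triangulation: each shore station measures only a bearing to the transmitter, and intersecting two bearing lines fixes its position. A minimal geometric sketch (coordinates and angles are invented; real naval bearings are measured from north, whereas this sketch uses standard math angles from east):

```python
import math

# Minimal triangulation sketch of radio direction finding: two stations,
# each knowing only its own position and a bearing to the transmitter,
# jointly determine where the transmitter is.

def locate(station_a, bearing_a, station_b, bearing_b):
    """Intersect two bearing rays; angles in radians, measured from east (CCW)."""
    (xa, ya), (xb, yb) = station_a, station_b
    dxa, dya = math.cos(bearing_a), math.sin(bearing_a)
    dxb, dyb = math.cos(bearing_b), math.sin(bearing_b)
    # Solve xa + t*dxa = xb + s*dxb and ya + t*dya = yb + s*dyb for t.
    denom = dxa * dyb - dya * dxb
    t = ((xb - xa) * dyb - (yb - ya) * dxb) / denom
    return (xa + t * dxa, ya + t * dya)

# Hypothetical example: transmitter at (10, 10); station A at the origin
# sees it at 45 degrees, station B at (20, 0) sees it at 135 degrees.
x, y = locate((0.0, 0.0), math.radians(45), (20.0, 0.0), math.radians(135))
print(x, y)  # approximately (10.0, 10.0)
```

In practice bearing errors grow with range, so more than two stations were used and the intersections plotted by hand, but the principle is the same.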

Room 40 played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea that led to the battles of Dogger Bank and Jutland, as the British fleet was sent out to intercept them. However, its most important contribution was probably in decrypting the Zimmermann Telegram, a telegram from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico.

In the telegram's plaintext, Nigel de Grey and William Montgomery learned of the German Foreign Minister Arthur Zimmermann's offer to Mexico of the United States' territories of Arizona, New Mexico, and Texas as an enticement to join the war as a German ally. The telegram was passed to the U.S. by Captain Hall, and a scheme was devised (involving a still unknown agent in Mexico and a burglary) to conceal both how its plaintext had become available and how the U.S. had gained possession of a copy. The telegram was made public by the United States, which declared war on Germany on 6 April 1917, entering the war on the Allied side.[33] This effectively demonstrated how the course of a war could be changed by effective intelligence operations.

Russian Revolution

From the London Evening Standard's Master Spy serial: Reilly, disguised as a member of the Cheka, bluffs his way through a Red Army checkpoint.

The outbreak of revolution in Russia and the subsequent seizure of power by the Bolsheviks, a party deeply hostile towards the capitalist powers, was an important catalyst for the development of modern international espionage techniques. A key figure was Sidney Reilly, a Russian-born adventurer and secret agent employed by Scotland Yard and the Secret Intelligence Service. He set the standard for modern espionage, turning it from a gentleman's amateurish game to a ruthless and professional methodology for the achievement of military and political ends.

Reilly's remarkable and varied career culminated in an audacious attempt to depose the Bolshevik Government and assassinate Vladimir Ilyich Lenin.[34]

In May 1918, Robert Bruce Lockhart,[35] an agent of the British Secret Intelligence Service, and Reilly repeatedly met Boris Savinkov, head of the counter-revolutionary Union for the Defence of the Motherland and Freedom (UDMF). Lockhart and Reilly then contacted anti-Bolshevik groups linked to Savinkov and supported these factions with SIS funds.[36] In June, disillusioned members of the Latvian Riflemen began appearing in anti-Bolshevik circles in Petrograd and were eventually directed to Captain Cromie, a British naval attaché, and Mr. Constantine, a Turkish merchant who was actually Reilly. Reilly believed their participation in the pending coup to be vital and arranged their meeting with Lockhart at the British mission in Moscow. At this stage, Reilly planned a coup against the Bolshevik government and drew up a list of Soviet military leaders ready to assume responsibilities on the fall of the Bolshevik government.[36]

Paul Dukes was knighted for his achievements in the Secret Intelligence Service.

On 17 August, Reilly conducted meetings between Latvian regimental leaders and liaised with Captain George Hill, another British agent operating in Russia. Hill had managed to establish a network of 10 secure houses around Moscow, and a professional courier network that reached across northern Russia and allowed him to smuggle top-secret documents from Moscow to Stockholm to London in days. They agreed the coup would occur in the first week of September, during a meeting of the Council of People's Commissars and the Moscow Soviet at the Bolshoi Theatre. However, on the eve of the coup, unexpected events thwarted the operation. Fanya Kaplan shot and wounded Lenin, triggering the "Red Terror": the Cheka implicated all malcontents in a grand conspiracy that warranted a full-scale campaign. Using lists supplied by undercover agents, the Cheka arrested those involved in Reilly's pending coup, raided the British Embassy in Petrograd, killed Francis Cromie, and arrested Lockhart.[36]

Another pivotal figure was Sir Paul Dukes, arguably the first professional spy of the modern age.[37] Recruited personally by Mansfield Smith-Cumming to act as a secret agent in Imperial Russia, he set up elaborate plans to help prominent White Russians escape from Soviet prisons after the Revolution and smuggled hundreds of them into Finland. Known as the "Man of a Hundred Faces," Dukes continued his use of disguises, which aided him in assuming a number of identities and gained him access to numerous Bolshevik organizations. He successfully infiltrated the Communist Party of the Soviet Union, the Comintern, and the political police, or CHEKA. Dukes also learned of the inner workings of the Politburo, and passed the information to British intelligence.

In the course of a few months, Dukes, Hill and Reilly succeeded in infiltrating Lenin's inner circle and gaining access to the activities of the Cheka and the Communist International at the highest level. This helped convince the government of the importance of a well-funded secret intelligence service in peacetime as a key component in formulating foreign policy.[37] Winston Churchill argued that intercepted communications were more useful "as a means of forming a true judgement of public policy than any other source of knowledge at the disposal of the State."[38]

Today

Today, espionage agencies target the illegal drug trade and terrorists as well as state actors. Since 2008 the United States has charged at least 57 defendants with attempting to spy for China.[39]

Different intelligence services value certain intelligence collection techniques over others. The former Soviet Union, for example, preferred human sources over research in open sources, while the United States has tended to emphasize technological methods such as SIGINT and IMINT. Both Soviet political (KGB) and military intelligence (GRU[40]) officers were judged by the number of agents they recruited.

Targets of espionage

Espionage agents are usually trained experts in a specific targeted field so they can differentiate mundane information from targets of intrinsic value to their own organisational development. Correct identification of the target at its execution is the sole purpose of the espionage operation.[41]

Broad areas of espionage targeting expertise include:[42]

Natural resources: strategic production identification and assessment (food, energy, materials). Agents are usually found among bureaucrats who administer these resources in their own countries

Popular sentiment towards domestic and foreign policies (popular, middle class, elites). Agents often recruited from field journalistic crews, exchange postgraduate students and sociology researchers

Strategic economic strengths (production, research, manufacture, infrastructure). Agents recruited from science and technology academia, commercial enterprises, and more rarely from among military technologists

Military capability intelligence (offensive, defensive, maneuver, naval, air, space). Agents are trained by special militaryespionage education facilities, and posted to an area of operation with covert identities to minimize prosecution

Counterintelligence operations specifically targeting opponents' intelligence services themselves, such as breaching confidentiality of communications, and recruiting defectors or moles

Methods and terminology

Although the news media may speak of "spy satellites" and the like, espionage is not a synonym for all intelligence-gathering disciplines. It is a specific form of human source intelligence (HUMINT). Codebreaking (cryptanalysis or COMINT), aircraft or satellite photography (IMINT), and research in open publications (OSINT) are all intelligence-gathering disciplines, but none of them is considered espionage. Many HUMINT activities, such as prisoner interrogation and reports from military reconnaissance patrols and from diplomats, are not considered espionage either. Espionage is the disclosure of sensitive (classified) information to people who are not cleared for that information or for access to it.

Unlike other intelligence collection disciplines, espionage usually involves accessing the place where the desired information is stored, or accessing the people who know the information and will divulge it through some kind of subterfuge. There are exceptions to physical meetings, such as the Oslo Report, or the insistence of Robert Hanssen on never meeting the people who bought his information.

The US defines espionage towards itself as "The act of obtaining, delivering, transmitting, communicating, or receiving information about the national defense with an intent, or reason to believe, that the information may be used to the injury of the United States or to the advantage of any foreign nation". Black's Law Dictionary (1990) defines espionage as: "... gathering, transmitting, or losing ... information related to the national defense". Espionage is a violation of United States law, 18 U.S.C. §§ 792–798, and of Article 106a of the Uniform Code of Military Justice.[43] The United States, like most nations, conducts espionage against other nations, under the control of the National Clandestine Service. Britain's espionage activities are controlled by the Secret Intelligence Service.

Technology and techniques

See also: Tradecraft and List of intelligence gathering disciplines

Agent handling

Concealment device

Covert agent

Covert listening device

Cut-out

Cyber spying

Dead drop

False flag operations

Honeypot

Interrogation

Non-official cover

Numbers messaging

Official cover

One-way voice link

Safe house

Side channel attack

Steganography

Surveillance

Surveillance aircraft[44]

Organization

An intelligence officer's clothing, accessories, and behavior must be as unremarkable as possible — their lives (and others') may depend on it.

A spy is a person employed to seek out top secret information from a source. Within the United States Intelligence Community, "asset" is a more common usage. A case officer, who may have diplomatic status (i.e., official cover or non-official cover), supports and directs the human collector. Cutouts are couriers who do not know the agent or case officer but transfer messages. A safe house is a refuge for spies. Spies often seek to obtain secret information from another source.

In larger networks the organization can be complex, with many methods to avoid detection, including clandestine cell systems. Often the players have never met. Case officers are stationed in foreign countries to recruit and supervise intelligence agents, who in turn spy on targets in the countries where they are assigned. A spy need not be a citizen of the target country, and hence does not automatically commit treason when operating within it. While the more common practice is to recruit a person already trusted with access to sensitive information, sometimes a person with a well-prepared synthetic identity (cover background), called a legend in tradecraft, may attempt to infiltrate a target organization.

These agents can be moles (who are recruited before they get access to secrets), defectors (who are recruited after they get access to secrets and leave their country) or defectors in place (who get access but do not leave).

A legend is also employed for an individual who is not an illegal agent but an ordinary citizen who is "relocated", for example a "protected witness". Nevertheless, such a non-agent very likely will also have a case officer who acts as a controller. As in most, if not all, synthetic identity schemes, for whatever purpose (illegal or legal), the assistance of a controller is required.

Spies may also be used to spread disinformation in the organization in which they are planted, such as giving false reports about their country's military movements, or about a competing company's ability to bring a product to market. Spies may be given other roles that also require infiltration, such as sabotage.

Many governments routinely spy on their allies as well as their enemies, although they typically maintain a policy of not commenting on this. Governments also employ private companies to collect information on their behalf, such as SCG International Risk, International Intelligence Limited and others.

Many organizations, both national and non-national, conduct espionage operations. It should not be assumed that espionage is always directed at the most secret operations of a target country. National and terrorist organizations and other groups are also targets.[45] This is because governments want to retrieve information that they can use to be proactive in protecting their nation from potential terrorist attacks.

Communications are both necessary to espionage and clandestine operations and a great vulnerability when the adversary has sophisticated SIGINT detection and interception capability. Agents must also transfer money securely.[45]

Industrial espionage

Main article: Industrial espionage

Reportedly, Canada is losing $12 billion[46] and German companies are estimated to be losing about €50 billion ($87 billion) and 30,000 jobs[47] to industrial espionage every year.

Agents in espionage

In espionage jargon, an "agent" is the person who does the spying; a citizen of one country who is recruited by a second country to spy on or work against his own country or a third country. In popular usage, this term is often erroneously applied to a member of an intelligence service who recruits and handles agents; in espionage, such a person is referred to as an intelligence officer, intelligence operative or case officer. There are several types of agent in use today.

Double agent, "a person who engages in clandestine activity for two intelligence or security services (or more in joint operations), who provides information about one or about each to the other, and who wittingly withholds significant information from one on the instructions of the other or is unwittingly manipulated by one so that significant facts are withheld from the adversary. Peddlers, fabricators, and others who work for themselves rather than a service are not double agents because they are not agents. The fact that doubles have an agent relationship with both sides distinguishes them from penetrations, who normally are placed with the target service in a staff or officer capacity."[48]

Re-doubled agent, an agent who gets caught as a double agent and is forced to mislead the foreign intelligence service.

Unwitting double agent, an agent who offers or is forced to recruit as a double or re-doubled agent and in the process is recruited by either a third-party intelligence service or his own government without the knowledge of the intended target intelligence service or the agent. This can be useful in capturing important information from an agent attempting to seek allegiance with another country. The double agent usually has knowledge of both intelligence services and can identify operational techniques of both, thus making third-party recruitment difficult or impossible. The knowledge of operational techniques can also affect the relationship between the Operations Officer (or case officer) and the agent if the case is transferred by an Operational Targeting Officer to a new Operations Officer, leaving the new officer vulnerable to attack. This type of transfer may occur when an officer has completed his term of service or when his cover is blown.

Triple agent, an agent that is working for three intelligence services.

Intelligence agent: Provides access to sensitive information through the use of special privileges. If used in corporate intelligence gathering, this may include gathering information on a corporate business venture or stock portfolio. In economic intelligence, "Economic Analysts may use their specialized skills to analyze and interpret economic trends and developments, assess and track foreign financial activities, and develop new econometric and modeling methodologies."[49] This may also include information on trade or tariffs.

Access agent: Provides access to other potential agents by providing profiling information that can help lead to recruitment into an intelligence service.

Agent of influence: Someone who may provide political influence in an area of interest or may even provide publications needed to further an intelligence service's agenda; for example, using the media to print a story to mislead a foreign service into action, exposing its operations while under surveillance.

Agent provocateur: This type of agent instigates trouble, or may provide information to gather as many people as possible into one location for an arrest.

Facilities agent: A facilities agent may provide access to buildings such as garages or offices used for staging operations, resupply, etc.

Principal agent: This agent functions as a handler for an established network of agents usually "Blue Chip".

Confusion agent: May provide misleading information to an enemy intelligence service or attempt to discredit the operations of the target in an operation.

Sleeper agent: A sleeper agent is a person who is recruited to an intelligence service to wake up and perform a specific set of tasks or functions while living under cover in an area of interest. This type of agent is not the same as a deep cover operative, who continually contacts a case officer to file intelligence reports. A sleeper agent is not in contact with anyone until activated.

Illegal agent: This is a person living in another country under false credentials who does not report to a local station. An operative under non-official cover can be dubbed an "illegal"[50] when working in another country without diplomatic protection.

Law

Espionage is a crime under the legal code of many nations. In the United States it is covered by the Espionage Act of 1917. The risks of espionage vary. A spy breaking the host country's laws may be deported, imprisoned, or even executed. A spy breaking his or her own country's laws can be imprisoned for espionage and/or treason (which in the USA and some other jurisdictions can only occur if he or she takes up arms or aids the enemy against his or her own country during wartime), or even executed, as the Rosenbergs were. For example, when Aldrich Ames handed a stack of dossiers on U.S. Central Intelligence Agency (CIA) agents in the Eastern Bloc to his KGB-officer "handler", the KGB "rolled up" several networks, and at least ten people were secretly shot. When Ames was arrested by the U.S. Federal Bureau of Investigation (FBI), he faced life in prison; his contact, who had diplomatic immunity, was declared persona non grata and taken to the airport. Ames's wife was threatened with life imprisonment if her husband did not cooperate; he did, and she was given a five-year sentence. Hugh Francis Redmond, a CIA officer in China, spent nineteen years in a Chinese prison for espionage, and died there, as he was operating without diplomatic cover and immunity.[51]

In United States law, treason,[52] espionage,[53] and spying[54] areseparate crimes. Treason and espionage have graduated punishment levels.

The United States in World War I passed the Espionage Act of 1917. Over the years, many spies, such as the Soble spy ring, Robert Lee Johnson, the Rosenberg ring, Aldrich Hazen Ames,[55] Robert Philip Hanssen,[56] Jonathan Pollard, John Anthony Walker, James Hall III, and others have been prosecuted under this law.

Use against non-spies

However, espionage laws are also used to prosecute non-spies. In the United States, the Espionage Act of 1917 was used against socialist politician Eugene V. Debs (at that time the act had much stricter guidelines and, among other things, banned speech against military recruiting). The law was later used to suppress publication of periodicals, for example those of Father Coughlin in World War II. In the early 21st century, the act was used to prosecute whistleblowers such as Thomas Andrews Drake, John Kiriakou, and Edward Snowden, as well as officials who communicated with journalists for innocuous reasons, such as Stephen Jin-Woo Kim.[57][58]

As of 2012, India and Pakistan were holding several hundred prisoners of each other's country for minor violations like trespass or visa overstay, often with accusations of espionage attached. Some of these include cases where Pakistan and India both deny citizenship to these people, leaving them stateless. The BBC reported in 2012 on one such case, that of Mohammed Idrees, who was held under Indian police control for approximately 13 years for overstaying his 15-day visa by 2–3 days after seeing his ill parents in 1999. Much of the 13 years was spent in prison waiting for a hearing, and more time was spent homeless or living with generous families. The Indian People's Union for Civil Liberties and Human Rights Law Network both decried his treatment. The BBC attributed some of the problems to tensions caused by the Kashmir conflict.[59]

Espionage laws in the UK

Espionage is illegal in the UK under the Official Secrets Acts of 1911 and 1920. UK law under this legislation treats espionage as actions that "intend to help an enemy and deliberately harm the security of the nation". According to MI5, a person will be charged with the crime of espionage if they, "for any purpose prejudicial to the safety or interests of the State": approach, enter or inspect a prohibited area; make documents such as plans that are intended, calculated, or could directly or indirectly be of use to an enemy; or "obtains, collects, records, or publishes, or communicates to any other person any secret official code word, or pass word, or any sketch, plan, model, article, or note, or other document which is calculated to be or might be or is intended to be directly or indirectly useful to an enemy". The illegality of espionage also includes any action which may be considered 'preparatory to' spying, or encouraging or aiding another to spy.[60]

An individual convicted of espionage can be imprisoned for up to 14 years in the UK, although multiple sentences can be issued.

Government intelligence laws and their distinction from espionage

Government intelligence is very much distinct from espionage, and is not illegal in the UK, provided that the organisations or individuals are registered, often with the ICO, and are acting within the restrictions of the Regulation of Investigatory Powers Act (RIPA). 'Intelligence' is considered legally as "information of all sorts gathered by a government or organisation to guide its decisions. It includes information that may be both public and private, obtained from many different public or secret sources. It could consist entirely of information from either publicly available or secret sources, or be a combination of the two."[61]

However, espionage and intelligence can be linked. According to the MI5 website, "foreign intelligence officers acting in the UK under diplomatic cover may enjoy immunity from prosecution. Such persons can only be tried for spying (or, indeed, any criminal offence) if diplomatic immunity is waived beforehand. Those officers operating without diplomatic cover have no such immunity from prosecution".

There are also laws surrounding government and organisational intelligence and surveillance. Generally, the body involved should be issued with some form of warrant or permission from the government, and should be enacting their procedures in the interest of protecting national security or the safety of public citizens. Those carrying out intelligence missions should act within not only RIPA, but also the Data Protection Act and Human Rights Act. However, there are specific spy equipment laws and legal requirements around intelligence methods that vary for each form of intelligence enacted.

Military conflicts

French spy captured during the Franco-Prussian War.

In military conflicts, espionage is considered permissible as many nations recognize the inevitability of opposing sides seeking intelligence about each other's dispositions. To make their missions easier and more likely to succeed, soldiers or agents wear disguises to conceal their true identity from the enemy while penetrating enemy lines for intelligence gathering. However, if they are caught behind enemy lines in disguise, they are not entitled to prisoner-of-war status and are subject to prosecution and punishment, including execution.

The Hague Convention of 1907 addresses the status of wartime spies, specifically in "Laws and Customs of War on Land" (Hague IV) of October 18, 1907, Chapter II: "Spies".[62] Article 29 states that a person is considered a spy who, acting clandestinely or on false pretenses, infiltrates enemy lines with the intention of acquiring intelligence about the enemy and communicating it to the belligerent during times of war. Soldiers who penetrate enemy lines in proper uniforms for the purpose of acquiring intelligence are not considered spies but are lawful combatants entitled to be treated as prisoners of war upon capture by the enemy. Article 30 states that a spy captured behind enemy lines may only be punished following a trial. However, Article 31 provides that if a spy successfully rejoins his own military and is then captured by the enemy as a lawful combatant, he cannot be punished for his previous acts of espionage and must be treated as a prisoner of war. Note that this provision does not apply to citizens who committed treason against their own country or its co-belligerents; they may be captured and prosecuted at any place and any time, regardless of whether they have rejoined the military to which they belong, and whether during or after the war.[63][64]

The ones excluded from being treated as spies while behind enemy lines are escaping prisoners of war and downed airmen, as international law distinguishes between a disguised spy and a disguised escaper.[44] It is permissible for these groups to wear enemy uniforms or civilian clothes in order to facilitate their escape back to friendly lines, so long as they do not attack enemy forces, collect military intelligence, or engage in similar military operations while so disguised.[65][66] Soldiers who wear enemy uniforms or civilian clothes simply for the sake of warmth, or for other purposes, rather than to engage in espionage or similar military operations while so attired, are also excluded from being treated as unlawful combatants.[44]

Saboteurs are treated as spies, as they too wear disguises behind enemy lines for the purpose of waging destruction on an enemy's vital targets in addition to intelligence gathering.[67][68] For example, during World War II, eight German agents entered the U.S. in June 1942 as part of Operation Pastorius, a sabotage mission against U.S. economic targets. Two weeks later, all were arrested in civilian clothes by the FBI thanks to two German agents betraying the mission to the U.S. Under the Hague Convention of 1907, these Germans were classified as spies and tried by a military tribunal in Washington, D.C.[69] On August 3, 1942, all eight were found guilty and sentenced to death. Five days later, six were executed by electric chair at the District of Columbia jail. Two who had given evidence against the others had their sentences reduced by President Franklin D. Roosevelt to prison terms. In 1948, they were released by President Harry S. Truman and deported to the American Zone of occupied Germany.

The U.S. codification of enemy spies is Article 106 of the Uniform Code of Military Justice. This provides a mandatory death sentence if a person captured in the act is proven to be "lurking as a spy or acting as a spy in or about any place, vessel, or aircraft, within the control or jurisdiction of any of the armed forces, or in or about any shipyard, any manufacturing or industrial plant, or any other place or institution engaged in work in aid of the prosecution of the war by the United States, or elsewhere".[70]

List of famous spies

See also: Intelligence agency, Special Operations Executive and United States government security breaches

Howard Burnham (1915)

FBI file photo of the leader of the Duquesne Spy Ring (1941)

Reign of Elizabeth I of England

Sir Francis Walsingham

Christopher Marlowe

American Revolution

Thomas Knowlton, The First American Spy

Nathan Hale

John Andre

James Armistead

Benjamin Tallmadge, case agent who organized the Culper Spy Ring in New York City

Napoleonic Wars

Charles-Louis Schulmeister

William Wickham

American Civil War

One of the innovations in the American Civil War was the use of proprietary companies for intelligence collection by the Union; see Allan Pinkerton.

Confederate Secret Service

Belle Boyd[71]

Aceh War

Dutch professor Snouck Hurgronje, a world-leading authority on Islam, was a proponent of espionage to quell Muslim resistance in Aceh in the Dutch East Indies. In his role as Colonial Advisor on Oriental Affairs, he gathered intelligence under the name "Haji Abdul Ghaffar".

He used his knowledge of Islamic and Aceh culture to devise strategies that significantly helped crush the resistance of the Aceh inhabitants and impose Dutch colonial rule, ending the 40-year Aceh War. Casualty estimates ranged between 50,000 and 100,000 inhabitants dead and about a million wounded.

Christiaan Snouck Hurgronje

Second Boer War

Fritz Joubert Duquesne

Sidney Reilly

Russo-Japanese War

Sidney Reilly

Ho Liang-Shung

Akashi Motojiro

World War I

See also: Espionage in Norway during World War I

Fritz Joubert Duquesne

Jules C. Silber

Mata Hari

Howard Burnham

T.E. Lawrence

Sidney Reilly

Maria de Victorica

Eleven German spies were executed in the Tower of London during World War I.[72]

Executed: Carl Hans Lody on 6 November 1914, in the Miniature Rifle Range.

Executed: Carl Frederick Muller on 23 June 1915, in the Miniature Rifle Range. Prepared bullets were used by the execution party.

Executed: Haicke Marinus Janssen and Willem Johannes Roos, both on 30 July 1915, in the Tower ditch.

Executed: Ernst Waldemar Melin on 10 September 1915, in the Miniature Rifle Range.

Executed: Augusto Alfredo Roggen on 17 September 1915, in the Miniature Rifle Range.

Executed: Fernando Buschman on 19 October 1915, in the Miniature Rifle Range.

Executed: George Traugott Breeckow, otherwise known as Reginald Rowland or George T. Parker, on 26 October 1915, in the Miniature Rifle Range. He worked with Lizzie Louise Wertheim, who was sentenced to ten years' penal servitude; she was certified as insane on 17 January 1918 and died in Broadmoor criminal lunatic asylum on 29 July 1920.

Executed: Irving Guy Ries on 27 October 1915, in the Miniature Rifle Range.

Executed: Albert Mayer on 2 December 1915, in the Miniature Rifle Range.

Executed: Ludovico Hurwitz-y-Zender on 11 April 1916, in the Miniature Rifle Range.

Carl Hans Lody has his own grave and black headstone in the East London Cemetery, Plaistow. The others are buried about 150 yards away under a small memorial stone alongside a pathway.

World War II

Imagined German intelligence officer thanks British forces for giving away details of operations (Graham & Gillies Advertising)

Informants were common in World War II. In November 1939, the German Hans Ferdinand Mayer sent what is called the Oslo Report to inform the British of German technology and projects in an effort to undermine the Nazi regime. The Réseau AGIR was a French network developed after the fall of France that reported the start of construction of V-weapon installations in Occupied France to the British.

Counterespionage included the use of turned Double Cross agents to misinform Nazi Germany of impact points during the Blitz and internment of Japanese in the US against "Japan's wartime spy program". Additional WWII espionage examples include Soviet spying on the US Manhattan project, the German Duquesne Spy Ring convicted in the US, and the Soviet Red Orchestra spying on Nazi Germany. The US lacked a specific agency at the start of the war, but quickly formed the Office of Strategic Services (OSS).

Spying has sometimes been considered a gentlemanly pursuit, with recruiting focused on military officers, or at least on persons of the class from whom officers are recruited. However, the demand for male soldiers, an increase in women's rights, and the tactical advantages of female spies led the British Special Operations Executive (SOE) to set aside any lingering Victorian-era prejudices and begin employing women in April 1942.[73] Their task was to transmit information from Nazi-occupied France back to Allied Forces. The main strategic reason was that men in France faced a high risk of being interrogated by Nazi troops, but women were less likely to arouse suspicion. In this way they made good couriers and proved equal to, if not more effective than, their male counterparts. Their participation in organization and radio operation was also vital to the success of many operations, including the main network between Paris and London.

See also: Clandestine HUMINT asset recruiting § Love, honeypots and recruitment

Post World War II

Further information: Cold War espionage

In the United States, there are seventeen[74] federal agencies that form the United States Intelligence Community. The Central Intelligence Agency operates the National Clandestine Service (NCS)[75] to collect human intelligence and perform covert operations.[76] The National Security Agency collects signals intelligence. Originally the CIA spearheaded the US-IC. Following the September 11 attacks, the Office of the Director of National Intelligence (ODNI) was created to promote information-sharing.

Kim Philby

Ray Mawby

Spy fiction

Main article: Spy fiction

An early example of espionage literature is Kim by the English novelist Rudyard Kipling, with a description of the training of an intelligence agent in the Great Game between the UK and Russia in 19th century Central Asia. An even earlier work was James Fenimore Cooper's classic novel, The Spy, written in 1821, about an American spy in New York during the Revolutionary War.

During the many 20th century spy scandals, much information became publicly known about national spy agencies and dozens of real-life secret agents. These sensational stories piqued public interest in a profession largely off-limits to human-interest news reporting, a natural consequence of the secrecy inherent to their work. To fill in the blanks, the popular conception of the secret agent has been formed largely by 20th and 21st century literature and cinema. Attractive and sociable real-life agents such as Valerie Plame find little employment in serious fiction, however. The fictional secret agent is more often a loner, sometimes amoral, an existential hero operating outside the everyday constraints of society. Loner spy personalities may have been a stereotype of convenience for authors who already knew how to write loner private investigator characters that sold well from the 1920s to the present.

Johnny Fedora achieved popularity as a fictional agent of early Cold War espionage, but James Bond is the most commercially successful of the many spy characters created by intelligence insiders during that struggle. His less fantastic rivals include le Carré's George Smiley and Harry Palmer as played by Michael Caine. Most post-Vietnam-era characters were modeled after the American C.C. Taylor, reportedly the last sanctioned "asset" of the U.S. government. Taylor, a true "Double 0 agent", worked alone and would travel as an American or Canadian tourist or businessman throughout Europe and Asia; he was used extensively in the Middle East toward the end of his career.


Taylor received his weapons training from Carlos Hathcock, holder of a record 93 confirmed kills from WWII through the Vietnam conflict. According to documents made available through the Freedom of Information Act, his operations were classified as "NOC" or Non-Official Cover.

Jumping on the spy bandwagon, other writers also began producing spy fiction with female spies as protagonists, such as The Baroness, which has more graphic action and sex than novels featuring male protagonists.

Spy fiction has also made its way into the video-game world, notably with Hideo Kojima's Metal Gear Solid series.

Espionage has also made its way into comedy depictions. The 1960s TV series Get Smart portrays an inept spy, while the 1985 movie Spies Like Us depicts a pair of none-too-bright men sent to the Soviet Union to investigate a missile.

World War II: 1939–1945

Author(s) Title Publisher Date Notes

Babington-Smith, Constance Air Spy: The Story of Photo Intelligence in World War II — 1957 —

Bryden, John Best-Kept Secret: Canadian Secret Intelligence in the Second World War Lester 1993 —

Hinsley, F. H. and Alan Stripp Codebreakers: The Inside Story of Bletchley Park — 2001 —

Hinsley, F. H. British Intelligence in the Second World War — 1996 Abridged version of multivolume official history.

Hohne, Heinz Canaris: Hitler's Master Spy — 1979 —

Jones, R. V. The Wizard War: British Scientific Intelligence 1939–1945 — 1978 —


Kahn, David Hitler's Spies: German Military Intelligence in World War II — 1978 —

Kahn, David Seizing the Enigma: The Race to Break the German U-Boat Codes, 1939–1943 — 1991 —

Kitson, Simon The Hunt for Nazi Spies: Fighting Espionage in Vichy France — 2008 —

Lewin, Ronald The American Magic: Codes, Ciphers and the Defeat of Japan — 1982 —

Masterman, J. C. The Double Cross System in the War of 1939 to 1945 Yale 1972 —

Persico, Joseph Roosevelt's Secret War: FDR and World War II Espionage — 2001 —

Persico, Joseph Casey: The Lives and Secrets of William J. Casey-From the OSS to the CIA — 1991 —

Ronnie, Art Counterfeit Hero: Fritz Duquesne, Adventurer and Spy — 1995 ISBN 1-55750-733-3

Sayers, Michael & Albert E. Kahn Sabotage! The Secret War Against America — 1942 —

Smith, Richard Harris OSS: The Secret History of America's First Central Intelligence Agency — 2005 —

Stanley, Roy M. World War II Photo Intelligence — 1981 —

Wark, Wesley The Ultimate Enemy: British Intelligence and Nazi Germany, 1933–1939 — 1985 —

Wark, Wesley "Cryptographic Innocence: The Origins of Signals Intelligence in Canada in the Second World War" in Journal of Contemporary History 22 — 1987 —

West, Nigel Secret War: The Story of SOE, Britain's Wartime Sabotage Organization — 1992 —

Winterbotham, F. W. The Ultra Secret Harper & Row 1974 —

Winterbotham, F. W. The Nazi Connection Harper & Row 1978 —

Cowburn, B. No Cloak No Dagger Brown, Watson, Ltd. 1960 —


Wohlstetter, Roberta. Pearl Harbor: Warning and Decision — 1962 —

Cold War era: 1945–1991

Author(s) Title Publisher Date Notes

Ambrose, Stephen E. Ike's Spies: Eisenhower and the Intelligence Establishment — 1981 —

Andrew, Christopher and Vasili Mitrokhin The Sword and the Shield: The Mitrokhin Archive and the Secret History of the KGB Basic Books 1999, 2005 ISBN 0-465-00311-7

Andrew, Christopher, and Oleg Gordievsky KGB: The Inside Story of Its Foreign Operations from Lenin to Gorbachev — 1990 —

Aronoff, Myron J. The Spy Novels of John Le Carré: Balancing Ethics and Politics — 1999 —

Bissell, Richard Reflections of a Cold Warrior: From Yalta to the Bay of Pigs — 1996 —

Bogle, Lori, ed. Cold War Espionage and Spying — 2001 essays

Christopher Andrew and Vasili Mitrokhin The World Was Going Our Way: The KGB and the Battle for the Third World — — —

Christopher Andrew and Vasili Mitrokhin The Mitrokhin Archive: The KGB in Europe and the West Gardners Books 2000 ISBN 978-0-14-028487-4

Colella, Jim My Life as an Italian Mafioso Spy — 2000 —

Craig, R. Bruce Treasonable Doubt: The Harry Dexter White Spy Case University Press of Kansas 2004 ISBN 978-0-7006-1311-3

Dorril, Stephen MI6: Inside the Covert World of Her Majesty's Secret Intelligence Service — 2000 —

Dziak, John J. Chekisty: A History of the KGB — 1988 —

Gates, Robert M. From the Shadows: The Ultimate Insider's Story of Five Presidents and How They Won the Cold War — 1997 —


Frost, Mike and Michel Gratton Spyworld: Inside the Canadian and American Intelligence Establishments Doubleday Canada 1994 —

Haynes, John Earl, and Harvey Klehr Venona: Decoding Soviet Espionage in America — 1999 —

Helms, Richard A Look over My Shoulder: A Life in the Central Intelligence Agency — 2003 —

Koehler, John O. Stasi: The Untold Story of the East German Secret Police — 1999 —

Persico, Joseph Casey: The Lives and Secrets of William J. Casey-From the OSS to the CIA — 1991 —

Murphy, David E., Sergei A. Kondrashev, and George Bailey Battleground Berlin: CIA vs. KGB in the Cold War — 1997 —

Prados, John Presidents' Secret Wars: CIA and Pentagon Covert Operations Since World War II — 1996 —

Rositzke, Harry. The CIA's Secret Operations: Espionage, Counterespionage, and Covert Action — 1988 —

Srodes, James Allen Dulles: Master of Spies Regnery 2000 CIA head to 1961

Sontag, Sherry, and Christopher Drew Blind Man's Bluff: The Untold Story of American Submarine Espionage Harper 1998 —

Encyclopedia of Cold War Espionage, Spies and Secret Operations Greenwood Press/Questia[77] 2004 —

Anderson, Nicholas NOC Enigma Books 2009 - Post Cold War era

Ishmael Jones The Human Factor: Inside the CIA's Dysfunctional Intelligence Culture Encounter Books 2008, rev. 2010

Michael Ross The Volunteer: The Incredible True Story of an Israeli Spy on the Trail of International Terrorists McClelland & Stewart 2007, rev. 2008


Thiébaud, Jean-Marie Dictionnaire Encyclopédique International des Abréviations, Sigles et Acronymes: Armée et armement, Gendarmerie, Police, Services de renseignement et Services secrets français et étrangers, Espionnage, Contrespionnage, Services de secours, Organisations révolutionnaires et terroristes Paris: L'Harmattan 2015 827 pp.

Diplomat



A diplomat is a person appointed by a state to conduct diplomacy with one or more other states or international organizations. The main functions of diplomats revolve around representing and protecting the interests and nationals of the sending state; initiating and facilitating strategic agreements, treaties and conventions; and promoting information, trade and commerce, technology, and friendly relations. Seasoned diplomats of international repute are coveted by international organisations (e.g. the UN) as well as multinational companies for their experience in management and negotiating skills. Diplomats are members of the foreign services and diplomatic corps of the various nations of the world.

Diplomats are the oldest of the state's foreign-policy institutions, predating foreign ministers and ministerial offices by centuries.

Contents

1 Etymology

2 "Career diplomats" and political appointees

3 Diplomatic ranks

4 Function

4.1 Advocacy


4.2 Negotiation

5 Training

6 Status and public image

7 Psychology and loyalty

8 References

9 External links

Etymology

Diplomat is derived from the Greek διπλωμάτης, diplōmátēs, the holder of a diploma (a folded paper, literally a "folding"), referring in this case not to an educational certificate but to a diplomat's letters of accreditation, which enable him or her to carry out duties on behalf of one country or institution within the jurisdiction of another country or institution.

"Career diplomats" and political appointees

Though any person can be appointed by a state's national government to conduct said state's relations with other states or international organizations, a number of states maintain an institutionalized group of career diplomats—that is, public servants with a steady professional connection to the country's foreign ministry. The term "career diplomat" is used worldwide[1][2][3][4][5][6][7][8][9] in opposition to political appointees (that is, people from any other professional background who may equally be designated by an official government to act as a diplomat abroad).[10][11] While officially posted to an embassy or delegation in a foreign country or accredited to an international organization, both career diplomats and political appointees enjoy the same diplomatic immunities.

Diplomatic ranks

Main article: Diplomatic rank


Regardless of whether they are career diplomats or political appointees, all diplomats posted abroad are classified in one of the ranks of diplomats (secretary, counselor, minister, ambassador, envoy, or chargé d'affaires), as regulated by international law (namely, by the Vienna Convention on Diplomatic Relations of 1961).

Diplomats can be contrasted with consuls and attachés, who represent their state in a number of administrative ways, but who don't have the diplomat's political functions.

Function

Diplomats in posts collect and report information that could affect national interests, often with advice about how the home-country government should respond. Then, once any policy response has been decided in the home country's capital, posts bear major responsibility for implementing it. Diplomats have the job of conveying, in the most persuasive way possible, the views of the home government to the governments to which they are accredited and, in doing so, of trying to convince those governments to act in ways that suit home-country interests. In this way, diplomats are part of the beginning and the end of each loop in the continuous process through which foreign policy develops.

In general, it has become harder for diplomats to act autonomously. Secure communication systems, email, and mobile telephones mean that even the most reclusive head of mission can be tracked down and instructed. The same technology in reverse gives diplomats the capacity for more immediate input into the policy-making processes of the home capital.

Secure email has transformed the contact between diplomats and the ministry. It is less likely to leak, and enables more personal contact than the formal cablegram, with its wide distribution and impersonal style.

Advocacy

The home country will usually send instructions to a diplomatic post on what foreign-policy goals to pursue, but decisions on tactics - who needs to be influenced, what will best persuade them, who are potential allies and adversaries, and how it can be done - are for the diplomats overseas to make.

In this operation, the intelligence, integrity, cultural understanding and energy of individual diplomats become critical. If competent, they will have developed relationships grounded in trust and mutual understanding with influential members of the country in which they are accredited. They will have worked hard to understand the motives, thought patterns and culture of the other side.

Negotiation

The diplomat should be an excellent negotiator but, above all, a catalyst for peace and understanding between peoples. The diplomat's principal role is to foster peaceful relations between states. This role takes on heightened importance if war breaks out. Negotiation must necessarily continue - but within significantly altered contexts.

Training

Most career diplomats have university degrees in international relations, political science, economics, or law.[12]

Status and public image

Diplomats have generally been considered members of an exclusive and prestigious profession. The public image of diplomats has been described as "a caricature of pinstriped men gliding their way around a never-ending global cocktail party".[13] J. W. Burton has noted that "despite the absence of any specific professional training, diplomacy has a high professional status, due perhaps to a degree of secrecy and mystery that its practitioners self-consciously promote."[14] The state supports the high status, privileges and self-esteem of its diplomats in order to support its own international status and position.

The high regard for diplomats is also due to most countries' conspicuous selection of diplomats, with regard to their professionalism and ability to behave according to a certain etiquette, in order to promote their interests effectively. Also, international law grants diplomats extensive privileges and immunities, which further distinguish the diplomat from the status of an ordinary citizen.

Psychology and loyalty

Further information: Clientitis

While posted overseas, there is a danger that diplomats may become disconnected from their own country and culture. Sir Harold Nicolson acknowledged that diplomats can become "denationalised, internationalised and therefore dehydrated, an elegant empty husk".[15]

Authentication

Contents

1 Methods

2 Factors and identity

2.1 Two-factor authentication


3 Product authentication

3.1 Packaging

4 Information content

4.1 Factual verification

4.2 Video authentication

4.3 Literacy & Literature authentication

5 History and state-of-the-art

5.1 Strong authentication

6 Authorization

7 Access control

8 See also

9 References

10 External links

Methods

Main article: Provenance

Authentication has relevance to multiple fields. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain person or was produced in a certain place or period of history. In computer science, verifying a person's identity is often required to secure access to confidential data or systems.

Authentication can be considered to be of three types:

The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while he or she may not have evidence that every step in the supply chain was authenticated. Authority-based trust relationships (centralized) drive the majority of secured internet communication through known public certificate authorities; peer-based trust (decentralized, web of trust) is used for personal services like email or files (Pretty Good Privacy, GNU Privacy Guard), and trust is established by known individuals signing each other's keys (proof of identity), for example at key signing parties.

The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist might use carbon dating to verify the age of an artifact, do a chemical analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.

Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery.

In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as well; see the article Jacques van Meegeren.


Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.

Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.

The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury, and are also vulnerable to being separated from the artifact and lost.

In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access device to allow system access. In this case, authenticity is implied but not guaranteed.

Consumer goods such as pharmaceuticals, perfume, and fashion clothing can use all three forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation (damaging the brand owner's sales and reputation). As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect against counterfeiters, including adding holograms, security rings, security threads and color-shifting ink.[1]

Factors and identity

The ways in which someone may be authenticated fall into three categories, based on what are known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity prior to being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority.

Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified.[2] The three factors (classes) and some elements of each factor are:

This is a picture of the front (top) and back (bottom) of an ID Card.

the knowledge factors: Something the user knows (e.g., a password, passphrase, or personal identification number (PIN)), or a challenge response (the user must answer a question or reproduce a pattern)

the ownership factors: Something the user has (e.g., wrist band, ID card, security token, cell phone with built-in hardware token, software token, or cell phone holding a software token)


the inherence factors: Something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifier).
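As a minimal sketch (all names here are invented for illustration, not a real library), the three factor classes can be modeled and counted like this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical data model of the three authentication factor classes.
@dataclass
class Credentials:
    knowledge: Optional[str] = None   # something the user knows (PIN, password)
    ownership: Optional[str] = None   # something the user has (token, ID card)
    inherence: Optional[str] = None   # something the user is (biometric)

def factors_present(c: Credentials) -> int:
    """Count how many distinct factor classes these credentials cover."""
    return sum(v is not None for v in (c.knowledge, c.ownership, c.inherence))

# Positive authentication should verify elements from at least two classes.
c = Credentials(knowledge="1234", ownership="token-9F2A")
assert factors_present(c) == 2
```

Note that two elements from the same class (say, a password and a PIN) still count as one factor; only distinct classes raise the count.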

Two-factor authentication

Main article: Two-factor authentication

When elements representing two factors are required for authentication, the term two-factor authentication is applied — e.g. a bankcard (something the user has) and a PIN (something the user knows). Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence-factor elements) plus a PIN and a day code (knowledge-factor elements), but this is still a two-factor authentication.
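The password-plus-token pattern can be sketched with Python's standard library: a hashed password stands in for the knowledge factor and an RFC 4226-style HMAC one-time password stands in for the security token (names and parameters are illustrative; a real deployment would use salted password hashing and a vetted OTP library):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226-style HMAC-based one-time password (dynamic truncation)."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # low 4 bits pick the offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def two_factor_ok(password: str, password_hash: bytes,
                  otp: str, secret: bytes, counter: int) -> bool:
    # Knowledge factor: the password hashes to the stored value.
    knows = hmac.compare_digest(
        hashlib.sha256(password.encode()).digest(), password_hash)
    # Ownership factor: the one-time code matches the shared-secret token.
    has = hmac.compare_digest(otp, hotp(secret, counter))
    return knows and has

secret = b"12345678901234567890"          # RFC 4226 test-vector secret
stored = hashlib.sha256(b"hunter2").digest()
assert two_factor_ok("hunter2", stored, hotp(secret, 0), secret, 0)
```

Both checks use `hmac.compare_digest` so that the comparisons run in constant time regardless of where a mismatch occurs.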

Product authentication

A Security hologram label on an electronics box for authentication

Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods such as electronics, music, apparel, and counterfeit medications have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used. Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting.

A secure key storage device can be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature, as an authentication chip can be mechanically attached and read through a connector to the host, e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that is much more difficult to counterfeit than most other options while at the same time being more easily verified.

Packaging

Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products.[3][4] Some package constructions are more difficult to copy and some have pilfer-indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs, RFID tags, or electronic article surveillance[5] tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:

Taggant fingerprinting - uniquely coded microscopic materials that are verified from a database

Encrypted micro-particles - unpredictably placed markings (numbers, layers and colors) not visible to the human eye

Holograms - graphics printed on seals, patches, foils or labels and used at point of sale for visual verification

Micro-printing - second line authentication often used on currencies

Serialized barcodes


UV printing - marks only visible under UV light

Track and trace systems - use codes to link products to database tracking system

Water indicators - become visible when contacted with water

DNA tracking - genes embedded onto labels that can be traced

Color shifting ink or film - visible marks that switch colors or texture when tilted

Tamper evident seals and tapes - destructible or graphically verifiable at point of sale

2D barcodes - data codes that can be tracked

RFID chips

Information content

The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.

Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging — anything from a box to e-mail headers) can help prove or disprove the authenticity of the document.

However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication.

Various systems have been invented to allow authors to providea means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:


A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.

A shared secret, such as a passphrase, in the content of the message.

An electronic signature; public-key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key.
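The shared-secret option above can be illustrated with an HMAC tag computed over the message, a standard message-authentication technique (a minimal sketch using Python's standard library; the secret and messages are placeholders):

```python
import hashlib
import hmac

def tag(message: bytes, secret: bytes) -> bytes:
    """Sender: compute an HMAC tag over the message with the shared secret."""
    return hmac.new(secret, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes, secret: bytes) -> bool:
    """Receiver: recompute the tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(tag(message, secret), received_tag)

secret = b"shared passphrase"
msg = b"meet at dawn"
t = tag(msg, secret)
assert verify(msg, t, secret)                   # genuine message accepted
assert not verify(b"meet at noon", t, secret)   # altered message rejected
```

Anyone without the shared secret can neither forge a valid tag nor alter the message without invalidating the existing one, which is exactly the property the shared-secret factor relies on.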

The opposite problem is detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.

Factual verification

Determining the truth or factual accuracy of information in a message is generally considered a separate problem from authentication. A wide range of techniques, from detective work, to fact checking in journalism, to scientific experimentmight be employed.

Video authentication

It is sometimes necessary to authenticate the veracity of video recordings used as evidence in judicial proceedings. Proper chain-of-custody records and secure storage facilities can help ensure the admissibility of digital or analog recordings by the Court.

Literacy & Literature authentication

In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is: do you believe it? Related to that, an authentication project is therefore a reading and writing activity in which students document the relevant research process.[6] It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent to which the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the time period.[7]

History and state-of-the-art

Historically, fingerprints have been used as the most authoritative method of authentication, but recent court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability.[citation needed] Outside of the legal system as well, fingerprints have been shown to be easily spoofable, with British Telecom's top computer-security official noting that "few" fingerprint readers have not already been tricked by one spoof or another.[8] Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB device.

In a computer data context, cryptographic methods have been developed (see digital signature and challenge-response authentication) which are currently not spoofable if and only if the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. It is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in future. If that were to occur, it may call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.
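The challenge-response idea mentioned above can be sketched as follows: the verifier issues a random nonce and the prover returns a keyed hash of it, demonstrating possession of the key without ever transmitting it (an illustrative sketch with hypothetical names, not a hardened protocol):

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    """Verifier: issue a fresh random nonce so old responses can't be replayed."""
    return secrets.token_bytes(16)

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prover: return an HMAC of the challenge under the shared key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def check(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier: recompute the expected response and compare in constant time."""
    return hmac.compare_digest(respond(key, challenge), response)

key = b"originator's secret key"
nonce = make_challenge()
assert check(key, nonce, respond(key, nonce))          # correct key passes
assert not check(key, nonce, respond(b"wrong", nonce)) # wrong key fails
```

As the text notes, such a scheme is only as strong as the secrecy of the key: an attacker who obtains the key can answer any challenge.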

Strong authentication

The U.S. Government's National Information Assurance Glossary defines strong authentication as a "layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information."

The above definition is consistent with that of the European Central Bank, as discussed in the strong authentication entry.

Authorization

Main article: Authorization

A soldier checks a driver's identification card before allowing her to enter a military base.

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". Authorization thus presupposes authentication.

For example, a client showing proper identification credentials to a bank teller is asking to be authenticated that he really is the one whose identification he is showing. A client whose authentication request is approved becomes authorized to access the accounts of that account holder, but no others.

Note, however, that if a stranger tries to access someone else's account with his own identification credentials, the stranger's identification credentials will still be successfully authenticated, because they are genuine and not counterfeit; but the stranger will not be successfully authorized to access the account, as the stranger's identification credentials had not previously been set as eligible to access the account, even though they are valid (i.e. authentic).

Similarly, when someone tries to log on to a computer, they are usually first requested to identify themselves with a login name and support that with a password. Afterwards, this combination is checked against an existing login-password validity record to check if the combination is authentic. If so, the user becomes authenticated (i.e. the identification he supplied in step 1 is valid, or authentic). Finally, a set of pre-defined permissions and restrictions for that particular login name is assigned to this user, which completes the final step, authorization.
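That login flow can be sketched in a few lines (the user names, credential store, and hashing choice are all hypothetical; real systems would use salted password hashing and a proper user database):

```python
import hashlib

# Hypothetical credential and permission stores.
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
PERMISSIONS = {"alice": {"read", "write"}}

def authenticate(login: str, password: str) -> bool:
    """Step 1-2: check the login/password combination against stored records."""
    return USERS.get(login) == hashlib.sha256(password.encode()).hexdigest()

def authorize(login: str, action: str) -> bool:
    """Step 3: look up the permissions assigned to this (authenticated) login."""
    return action in PERMISSIONS.get(login, set())

assert authenticate("alice", "s3cret")     # authentic credentials accepted
assert not authenticate("alice", "wrong")  # wrong password rejected
assert authorize("alice", "read")          # permitted action
assert not authorize("alice", "delete")    # authenticated but not authorized
```

The last assertion is the key distinction from the text: authentication can succeed while authorization still fails.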

Even though authorization cannot occur without authentication, the former term is sometimes used to mean the combination of both.

To distinguish "authentication" from the closely related "authorization", the shorthand notations A1 (authentication), A2 (authorization) as well as AuthN / AuthZ (AuthR) or Au / Azare used in some communities.[9]

Delegation has traditionally been considered part of the authorization domain, but authentication is now also used for various types of delegation tasks. Delegation in IT networks is a new but evolving field.[10]

Access control

Main article: Access control

One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, then granting the privileges established for that identity. One such procedure involves Layer 8, which allows IT administrators to identify users, control users' Internet activity in the network, set user-based policies, and generate reports by username. Common examples of access control involving authentication include:

Asking for photo ID when a contractor first arrives at a house to perform work.

Using a CAPTCHA as a means of asserting that a user is a human being and not a computer program.

Using a one-time password (OTP), received on a telecommunication-network-enabled device such as a mobile phone, as an authentication password/PIN.

A computer program using a blind credential to authenticate to another program.

Entering a country with a passport

Logging in to a computer

Using a confirmation E-mail to verify ownership of an e-mail address

Using an Internet banking system

Withdrawing cash from an ATM
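The one-time-password item in the list above can be made concrete with a minimal sketch of one common OTP construction, HOTP (RFC 4226), in which a short numeric code is derived from a shared secret and a moving counter. This is only one of several OTP schemes in use; the secret and the expected code in the comment are the RFC 4226 test values.

```python
import hmac, hashlib

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA-1 over the 8-byte big-endian counter, per RFC 4226
    digest = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(hotp(b"12345678901234567890", 0))  # RFC 4226 test vector: 755224
```

The server keeps the same secret and counter, recomputes the code, and compares; the counter advances so each code is valid only once.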

In some cases, ease of access is balanced against the strictness of access checks. For example, the credit card network does not require a personal identification number for authentication of the claimed identity; and a small transaction usually does not even require a signature of the authenticated person for proof of authorization of the transaction. The security of the system is maintained by limiting distribution of credit card numbers, and by the threat of punishment for fraud.

Security experts argue that it is impossible to prove the identity of a computer user with absolute certainty. It is only possible to apply one or more tests which, if passed, have been previously declared to be sufficient to proceed. Theproblem is to determine which tests are sufficient, and many such are inadequate. Any given test can be spoofed one way or another, with varying degrees of difficulty.

Computer security experts now also recognise that, despite extensive efforts as a business, research, and network community, we still do not have a secure understanding of the requirements for authentication in a range of circumstances. Lacking this understanding is a significant barrier to identifying optimum methods of authentication. The major questions are:

What is authentication for?

Who benefits from authentication/who is disadvantaged by authentication failures?

What disadvantages can effective authentication actually guard against?

Digital signature

A digital signature is a mathematical scheme for demonstrating the authenticity of a digital message or document. A valid digital signature gives a recipient reason to believe that the message was created by a known sender, that the sender cannot deny having sent the message (authentication and non-repudiation), and that the message was not altered in transit (integrity). Digital signatures are commonly used for software distribution, financial transactions, and in other cases where it is important to detect forgery or tampering.

Contents

1 Explanation
2 Definition
3 History
4 How they work
5 Notions of security
6 Applications of digital signatures
6.1 Authentication
6.2 Integrity
6.3 Non-repudiation
7 Additional security precautions
7.1 Putting the private key on a smart card
7.2 Using smart card readers with a separate keyboard
7.3 Other smart card designs
7.4 Using digital signatures only with trusted applications
7.5 Using a network attached hardware security module
7.6 WYSIWYS
7.7 Digital signatures versus ink on paper signatures
8 Some digital signature algorithms
9 The current state of use – legal and practical
10 Industry standards
10.1 Using separate key pairs for signing and encryption
11 See also
12 Notes
13 Further reading

Explanation

Digital signatures are often used to implement electronic signatures, a broader term that refers to any electronic data that carries the intent of a signature,[1] but not all electronic signatures use digital signatures.[2][3] In some countries, including the United States, India, Brazil, Saudi Arabia[4] and members of the European Union, electronic signatures have legal significance.

Digital signatures employ asymmetric cryptography. In many instances they provide a layer of validation and security to messages sent through a nonsecure channel: properly implemented, a digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital seals and signatures are equivalent to handwritten signatures and stamped seals.[5] Digital signatures are equivalent to traditional handwritten signatures in many respects, but properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes, in the sense used here, are cryptographically based, and must be implemented properly to be effective. Digital signatures can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message, while also claiming their private key remains secret; further, some non-repudiation schemes offer a time stamp for the digital signature, so that even if the private key is exposed, the signature is valid. Digitally signed messages may be anything representable as a bitstring: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol.

Definition

Main article: Public-key cryptography

A digital signature scheme typically consists of three algorithms:

A key generation algorithm that selects a private key uniformly at random from a set of possible private keys. The algorithm outputs the private key and a corresponding public key.

A signing algorithm that, given a message and a private key, produces a signature.

A signature verifying algorithm that, given the message, public key and signature, either accepts or rejects the message's claim to authenticity.

Two main properties are required. First, the authenticity of a signature generated from a fixed message and fixed private key can be verified by using the corresponding public key. Secondly, it should be computationally infeasible to generate a valid signature for a party without knowing that party's private key. A digital signature is an authentication mechanism that enables the creator of the message to attach a code that acts as a signature.

History

In 1976, Whitfield Diffie and Martin Hellman first described the notion of a digital signature scheme, although they only conjectured that such schemes existed.[6][7] Soon afterwards, Ronald Rivest, Adi Shamir, and Len Adleman invented the RSA algorithm, which could be used to produce primitive digital signatures[8] (although only as a proof-of-concept – "plain" RSA signatures are not secure[9]). The first widely marketed software package to offer digital signature was Lotus Notes 1.0, released in 1989, which used the RSA algorithm.[10]

Other digital signature schemes were soon developed after RSA, the earliest being Lamport signatures,[11] Merkle signatures (also known as "Merkle trees" or simply "hash trees"),[12] and Rabin signatures.[13]
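To illustrate how simple a hash-based scheme can be, here is a toy sketch in the spirit of a Lamport one-time signature: the private key is 2×256 random strings, the public key is their hashes, and a signature reveals one preimage per bit of the message's hash. SHA-256 and the exact parameters here are illustrative choices, not the historical construction, and each key pair must sign at most one message.

```python
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # sk[b][i]: the secret for bit value b at bit position i
    sk = [[secrets.token_bytes(32) for _ in range(256)] for _ in range(2)]
    pk = [[H(x) for x in row] for row in sk]
    return sk, pk

def bits(msg: bytes):
    # the 256 bits of the message hash (order is a fixed convention)
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # reveal one preimage per hash bit
    return [sk[b][i] for i, b in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pk[b][i]
               for (i, b), s in zip(enumerate(bits(msg)), sig))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))     # True
print(verify(pk, b"tampered", sig))  # False
```

Note that a signature leaks half of the private key, which is why the key is strictly one-time.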

In 1988, Shafi Goldwasser, Silvio Micali, and Ronald Rivest became the first to rigorously define the security requirements of digital signature schemes.[14] They described a hierarchy of attack models for signature schemes, and also presented the GMR signature scheme, the first that could be proved to prevent even an existential forgery against a chosen message attack.[14]

How they work

To create RSA signature keys, generate an RSA key pair containing a modulus N that is the product of two large primes, along with integers e and d such that e·d ≡ 1 (mod φ(N)), where φ is the Euler phi-function. The signer's public key consists of N and e, and the signer's secret key contains d.

To sign a message m, the signer computes σ ≡ m^d (mod N). To verify, the receiver checks that σ^e ≡ m (mod N).
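That textbook scheme fits in a few lines of Python (3.8+ for the modular inverse via pow). The primes below are toy values chosen for illustration; as the next paragraph notes, this "plain" form is insecure as-is.

```python
p, q = 61, 53                  # toy primes; real keys use very large primes
N = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler phi of N
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e*d ≡ 1 (mod phi)

m = 42                         # message encoded as a number < N
sigma = pow(m, d, N)           # sign:   sigma = m^d mod N
assert pow(sigma, e, N) == m   # verify: sigma^e mod N == m
print("signature verifies")
```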

As noted earlier, this basic scheme is not very secure. To prevent attacks, one can first apply a cryptographic hash function to the message m and then apply the RSA algorithm described above to the result. This approach can be proven secure in the so-called random oracle model. Most early signature schemes were of a similar type: they involve the use of a trapdoor permutation, such as the RSA function, or, in the case of the Rabin signature scheme, computing square roots modulo a composite n. A trapdoor permutation family is a family of permutations, specified by a parameter, that is easy to compute in the forward direction, but difficult to compute in the reverse direction without already knowing the private key ("trapdoor"). Trapdoor permutations can be used for digital signature schemes, where computing the reverse direction with the secret key is required for signing, and computing the forward direction is used to verify signatures.

Used directly, this type of signature scheme is vulnerable to a key-only existential forgery attack. To create a forgery, the attacker picks a random signature σ and uses the verification procedure to determine the message m corresponding to that signature.[15] In practice, however, this type of signature is not used directly; rather, the message to be signed is first hashed to produce a short digest that is then signed. The forgery attack then produces only the hash-function output that corresponds to σ, not a message that leads to that value, so it does not yield an attack. In the random oracle model, this hash-then-sign form of signature is existentially unforgeable, even against a chosen-plaintext attack.[7]
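The key-only forgery described above can be demonstrated in two lines against textbook RSA: the attacker picks σ first and then runs verification backwards, never touching the private key. The public key values below are toy illustrations.

```python
import random

N, e = 3233, 17                   # a toy RSA public key (illustrative values)
sigma = random.randrange(1, N)    # attacker picks a random "signature" first...
m = pow(sigma, e, N)              # ...then derives the message it signs
assert pow(sigma, e, N) == m      # (m, sigma) passes verification
print("forged pair:", m, sigma)
```

The attacker cannot choose m, which is exactly why this only gives an existential (not selective) forgery, and why hashing the message first defeats it.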

There are several reasons to sign such a hash (or message digest) instead of the whole document.

For efficiency: The signature will be much shorter and thus save time since hashing is generally much faster than signing in practice.

For compatibility: Messages are typically bit strings, but some signature schemes operate on other domains (such as, in the case of RSA, numbers modulo a composite number N). A hash function can be used to convert an arbitrary input into the proper format.

For integrity: Without the hash function, the text "to be signed" may have to be split into blocks small enough for the signature scheme to act on them directly. However, the receiver of the signed blocks is not able to recognize whether all the blocks are present and in the appropriate order.

Notions of security

In their foundational paper, Goldwasser, Micali, and Rivest lay out a hierarchy of attack models against digital signatures:[14]

In a key-only attack, the attacker is only given the public verification key.

In a known message attack, the attacker is given valid signatures for a variety of messages known by the attacker but not chosen by the attacker.

In an adaptive chosen message attack, the attacker first learns signatures on arbitrary messages of the attacker's choice.

They also describe a hierarchy of attack results:[14]

A total break results in the recovery of the signing key.

A universal forgery attack results in the ability to forge signatures for any message.

A selective forgery attack results in a signature on a message of the adversary's choice.

An existential forgery merely results in some valid message/signature pair not already known to the adversary.

The strongest notion of security, therefore, is security against existential forgery under an adaptive chosen message attack.

Applications of digital signatures

As organizations move away from paper documents with ink signatures or authenticity stamps, digital signatures can provide added assurances of the evidence to the provenance, identity, and status of an electronic document, as well as acknowledging informed consent and approval by a signatory. The United States Government Printing Office (GPO) publishes electronic versions of the budget, public and private laws, and congressional bills with digital signatures. Universities including Penn State, University of Chicago, and Stanford are publishing electronic student transcripts with digital signatures.

Below are some common reasons for applying a digital signature to communications:

Authentication

Although messages may often include information about the entity sending a message, that information may not be accurate. Digital signatures can be used to authenticate the source of messages. When ownership of a digital signature secret key is bound to a specific user, a valid signature shows that the message was sent by that user. The importance of high confidence in sender authenticity is especially obvious in a financial context. For example, suppose a bank's branch office sends instructions to the central office requesting a change in the balance of an account. If the central office is not convinced that such a message is truly sent from an authorized source, acting on such a request could be a grave mistake.

Integrity

In many scenarios, the sender and receiver of a message may need confidence that the message has not been altered during transmission. Although encryption hides the contents of a message, it may be possible to change an encrypted message without understanding it. (Some encryption algorithms, known as nonmalleable ones, prevent this, but others do not.) However, if a message is digitally signed, any change in the message after signing invalidates the signature. Furthermore, there is no efficient way to modify a message and its signature to produce a new message with a valid signature, because this is still considered computationally infeasible for most cryptographic hash functions (see collision resistance).

Non-repudiation

Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital signatures. By this property, an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key only does not enable a fraudulent party to fake a valid signature.

Note that these authentication and non-repudiation properties rely on the secret key not having been revoked prior to its usage. Public revocation of a key pair is a required ability; otherwise leaked secret keys would continue to implicate the claimed owner of the key pair. Checking revocation status requires an "online" check, e.g. consulting a Certificate Revocation List or using the Online Certificate Status Protocol. Very roughly, this is analogous to a vendor who receives credit cards first checking online with the credit card issuer to find whether a given card has been reported lost or stolen. Of course, with stolen key pairs, the theft is often discovered only after the secret key's use, e.g., to sign a bogus certificate for espionage purposes.

Additional security precautions

Putting the private key on a smart card

All public key / private key cryptosystems depend entirely on keeping the private key secret. A private key can be stored on a user's computer, and protected by a local password, but this has two disadvantages:

the user can only sign documents on that particular computer

the security of the private key depends entirely on the security of the computer

A more secure alternative is to store the private key on a smart card. Many smart cards are designed to be tamper-resistant (although some designs have been broken, notably by Ross Anderson and his students). In a typical digital signature implementation, the hash calculated from the document is sent to the smart card, whose CPU signs the hash using the stored private key of the user, and then returns the signed hash. Typically, a user must activate his smart card by entering a personal identification number or PIN code (thus providing two-factor authentication). It can be arranged that the private key never leaves the smart card, although this is not always implemented. If the smart card is stolen, the thief will still need the PIN code to generate a digital signature. This reduces the security of the scheme to that of the PIN system, although it still requires an attacker to possess the card. A mitigating factor is that private keys, if generated and stored on smart cards, are usually regarded as difficult to copy, and are assumed to exist in exactly one copy. Thus, the loss of the smart card may be detected by the owner and the corresponding certificate can be immediately revoked. Private keys that are protected by software only may be easier to copy, and such compromises are far more difficult to detect.

Using smart card readers with a separate keyboard

Entering a PIN code to activate the smart card commonly requires a numeric keypad. Some card readers have their own numeric keypad. This is safer than using a card reader integrated into a PC, and then entering the PIN using that computer's keyboard. Readers with a numeric keypad are meant to circumvent the eavesdropping threat where the computer might be running a keystroke logger, potentially compromising the PIN code. Specialized card readers are also less vulnerable to tampering with their software or hardware and are often EAL3 certified.

Other smart card designs

Smart card design is an active field, and there are smart card schemes which are intended to avoid these particular problems, though so far with few security proofs.

Using digital signatures only with trusted applications

One of the main differences between a digital signature and a written signature is that the user does not "see" what he signs. The user application presents a hash code to be signed by the digital signing algorithm using the private key. An attacker who gains control of the user's PC can possibly replace the user application with a foreign substitute, in effect replacing the user's own communications with those of the attacker. This could allow a malicious application to trick a user into signing any document, by displaying the user's original on-screen but presenting the attacker's own documents to the signing application. To protect against this scenario, an authentication system can be set up between the user's application (word processor, email client, etc.) and the signing application. The general idea is to provide some means for both the user application and the signing application to verify each other's integrity. For example, the signing application may require all requests to come from digitally signed binaries.

Using a network attached hardware security module

One of the main differences between a cloud-based digital signature service and a locally provided one is risk. Many risk-averse companies, including governments, financial and medical institutions, and payment processors, require more secure standards, like FIPS 140-2 Level 3 and FIPS 201 certification, to ensure the signature is validated and secure.[16]

WYSIWYS

Main article: WYSIWYS

Technically speaking, a digital signature applies to a string of bits, whereas humans and applications "believe" that they sign the semantic interpretation of those bits. In order to be semantically interpreted, the bit string must be transformed into a form that is meaningful for humans and applications, and this is done through a combination of hardware- and software-based processes on a computer system. The problem is that the semantic interpretation of bits can change as a function of the processes used to transform the bits into semantic content. It is relatively easy to change the interpretation of a digital document by implementing changes on the computer system where the document is being processed. From a semantic perspective this creates uncertainty about what exactly has been signed. WYSIWYS (What You See Is What You Sign)[17] means that the semantic interpretation of a signed message cannot be changed. In particular this also means that a message cannot contain hidden information that the signer is unaware of, and that can be revealed after the signature has been applied. WYSIWYS is a necessary requirement for the validity of digital signatures, but this requirement is difficult to guarantee because of the increasing complexity of modern computer systems.

Digital signatures versus ink on paper signatures

An ink signature could be replicated from one document to another by copying the image manually or digitally, but to have credible signature copies that can resist some scrutiny is a significant manual or technical skill, and to produce ink signature copies that resist professional scrutiny is very difficult.

Digital signatures cryptographically bind an electronic identity to an electronic document and the digital signature cannot be copied to another document. Paper contracts sometimes have the ink signature block on the last page, and the previous pages may be replaced after a signature is applied. Digital signatures can be applied to an entire document, such that the digital signature on the last page will indicate tampering if any data on any of the pages have been altered, but this can also be achieved by signing with ink and numbering all pages of the contract.

Some digital signature algorithms

RSA-based signature schemes, such as RSA-PSS

DSA and its elliptic curve variant ECDSA

ElGamal signature scheme as the predecessor to DSA, and variants Schnorr signature and Pointcheval–Stern signature algorithm

Rabin signature algorithm

Pairing-based schemes such as BLS

Undeniable signatures

Aggregate signature – a signature scheme that supports aggregation: given n signatures on n messages from n users, it is possible to aggregate all these signatures into a single signature whose size is constant in the number of users. This single signature will convince the verifier that the n users did indeed sign the n original messages.

Signatures with efficient protocols – signature schemes that facilitate efficient cryptographic protocols such as zero-knowledge proofs or secure computation.

The current state of use – legal and practical

All digital signature schemes share the following basic prerequisites regardless of cryptographic theory or legal provision:

Quality algorithms: Some public-key algorithms are known to be insecure, practical attacks against them having been discovered.

Quality implementations: An implementation of a good algorithm (or protocol) with mistake(s) will not work.

The private key must remain private: If the private key becomes known to any other party, that party can produce perfect digital signatures of anything whatsoever.

The public key owner must be verifiable: A public key associated with Bob actually came from Bob. This is commonly done using a public key infrastructure (PKI), with the public key↔user association attested by the operator of the PKI (called a certificate authority). For 'open' PKIs in which anyone can request such an attestation (universally embodied in a cryptographically protected identity certificate), the possibility of mistaken attestation is nontrivial. Commercial PKI operators have suffered several publicly known problems. Such mistakes could lead to falsely signed, and thus wrongly attributed, documents. 'Closed' PKI systems are more expensive, but less easily subverted in this way.

Users (and their software) must carry out the signature protocol properly.

Only if all of these conditions are met will a digital signature actually be any evidence of who sent the message, and therefore of their assent to its contents. Legal enactment cannot change this reality of the existing engineering possibilities, though some enactments have not reflected this actuality.

Legislatures, importuned by businesses expecting to profit from operating a PKI, or by the technological avant-garde advocating new solutions to old problems, have enacted statutes and/or regulations in many jurisdictions authorizing, endorsing, encouraging, or permitting digital signatures and providing for (or limiting) their legal effect. The first appears to have been in Utah in the United States, followed closely by the states of Massachusetts and California. Other countries have also passed statutes or issued regulations in this area, and the UN has had an active model law project for some time. These enactments (or proposed enactments) vary from place to place, have typically embodied expectations at variance (optimistically or pessimistically) with the state of the underlying cryptographic engineering, and have had the net effect of confusing potential users and specifiers, nearly all of whom are not cryptographically knowledgeable. Adoption of technical standards for digital signatures has lagged behind much of the legislation, delaying a more or less unified engineering position on interoperability, algorithm choice, key lengths, and the other aspects of what the engineering is attempting to provide.

Interactive proof system

Not to be confused with Proof assistant.

In computational complexity theory, an interactive proof system is an abstract machine that models computation as the exchange of messages between two parties. The parties, the verifier and the prover, interact by exchanging messages in order to ascertain whether a given string belongs to a language or not. The prover is all-powerful and possesses unlimited computational resources, but cannot be trusted, while the verifier has bounded computation power. Messages are sent between the verifier and prover until the verifier has an answer to the problem and has "convinced" itself that it is correct.

All interactive proof systems have two requirements:

Completeness: if the statement is true, the honest verifier (that is, one following the protocol properly) will be convinced of this fact by an honest prover.

Soundness: if the statement is false, no prover, even if it doesn't follow the protocol, can convince the honest verifier that it is true, except with some small probability.

It is assumed that the verifier is always honest.

The specific nature of the system, and so the complexity class of languages it can recognize, depends on what sort of bounds are put on the verifier, as well as what abilities it is given; for example, most interactive proof systems depend critically on the verifier's ability to make random choices. It also depends on the nature of the messages exchanged: how many, and what they can contain. Interactive proof systems have been found to have some important implications for traditional complexity classes defined using only one machine. The main complexity classes describing interactive proof systems are AM and IP.

Contents

1 NP
2 Arthur–Merlin and Merlin–Arthur protocols
3 Public coins versus private coins
4 IP
5 QIP
6 Zero knowledge
7 MIP
8 PCP
9 See also
10 References
11 Textbooks
12 External links

NP

The complexity class NP may be viewed as a very simple proof system. In this system, the verifier is a deterministic, polynomial-time machine (a P machine). The protocol is:

The prover looks at the input and computes the solution using its unlimited power and returns a polynomial-size proof certificate.

The verifier verifies that the certificate is valid in deterministic polynomial time. If it is valid, it accepts; otherwise, it rejects.

In the case where a valid proof certificate exists, the prover is always able to make the verifier accept by giving it that certificate. In the case where there is no valid proof certificate, the input is not in the language, and no prover, however malicious, can convince the verifier otherwise, because any proof certificate will be rejected.
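A classic concrete instance of this protocol is graph 3-coloring: the certificate is a claimed coloring, and the verifier's check is a single polynomial-time pass over the edges. The graph and colorings below are made-up toy examples.

```python
# Edges of a small example graph on vertices 0..3
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]

def verify_coloring(edges, coloring) -> bool:
    # NP-style verifier: accept iff no edge joins two same-colored vertices
    return all(coloring[u] != coloring[v] for u, v in edges)

print(verify_coloring(edges, [0, 1, 2, 0]))  # True: a valid certificate
print(verify_coloring(edges, [0, 0, 2, 1]))  # False: edge (0, 1) is monochromatic
```

Finding a valid coloring is the prover's (in general hard) job; the verifier only ever does the cheap check.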

Arthur–Merlin and Merlin–Arthur protocols

Main article: Arthur–Merlin protocol

Although NP may be viewed as using interaction, it wasn't until 1985 that the concept of computation through interaction was conceived by two independent groups of researchers. One approach, by László Babai, who published "Trading group theory for randomness",[1] defined the Arthur–Merlin (AM) class hierarchy. In this presentation, Arthur (the verifier) is a probabilistic, polynomial-time machine, while Merlin (the prover) has unbounded resources.

The class MA in particular is a simple generalization of the NP interaction above in which the verifier is probabilistic instead of deterministic. Also, instead of requiring that the verifier always accept valid certificates and reject invalid certificates, it is more lenient:

Completeness: if the string is in the language, the prover must be able to give a certificate such that the verifier will accept with probability at least 2/3 (depending on the verifier's random choices).

Soundness: if the string is not in the language, no prover, however malicious, will be able to convince the verifier to accept the string with probability exceeding 1/3.

This machine is potentially more powerful than an ordinary NP interaction protocol, and the certificates are no less practical to verify, since BPP algorithms are considered as abstracting practical computation (see BPP).

Public coins versus private coinsIn the same conference where Babai defined his proof system for MA, Shafi Goldwasser, Silvio Micali and Charles Rackoff [2] published a paper defining the interactive proof system IP[f(n)]. This has the same machines as the MA protocol, exceptthat f(n) rounds are allowed for an input of size n. In each round, the verifier performs computation and passes a message to the prover, and the prover performs computation and passes information back to the verifier. At the end the verifier mustmake its decision. For example, in an IP[3] protocol, the sequence would be VPVPVPV, where V is a verifier turn and P isa prover turn.

In Arthur–Merlin protocols, Babai defined a similar class AM[f(n)] which allowed f(n) rounds, but he put one extra condition on the machine: the verifier must show the prover all the random bits it uses in its computation. The result is that the verifier cannot "hide" anything from the prover, because the prover is powerful enough to simulate everything the verifier does if it knows what random bits it used. This is called a public coin protocol, because the random bits ("coin flips") are visible to both machines. The IP approach is called a private coin protocol by contrast.

The essential problem with public coins is that if the prover wishes to maliciously convince the verifier to accept a string which is not in the language, it seems like the verifier might be able to thwart its plans if it can hide its internal state from it. This was a primary motivation in defining the IP proof systems.

In 1986, Goldwasser and Sipser[3] showed, perhaps surprisingly, that the verifier's ability to hide coin flips from the prover does it little good after all, in that an Arthur–Merlin public-coin protocol with only two more rounds can recognize all the same languages. The result is that public-coin and private-coin protocols are roughly equivalent. In fact, as Babai showed in 1988, AM[k] = AM for all constant k, so the IP[k] have no advantage over AM.[4]

To demonstrate the power of these classes, consider the graph isomorphism problem: the problem of determining whether it is possible to permute the vertices of one graph so that it is identical to another graph. This problem is in NP, since the proof certificate is the permutation that makes the graphs equal. It turns out that the complement of the graph isomorphism problem, a co-NP problem not known to be in NP, has an AM algorithm, and the best way to see it is via a private-coin algorithm.[5]
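The NP membership claim is easy to make concrete: verifying a permutation certificate takes polynomial time. A sketch, with toy graphs chosen purely for illustration:

```python
def isomorphic_under(perm, g1_edges, g2_edges):
    # An NP certificate for graph isomorphism is the permutation itself:
    # map G1's edges through perm and check the edge sets coincide.
    mapped = {frozenset((perm[u], perm[v])) for u, v in g1_edges}
    return mapped == {frozenset(e) for e in g2_edges}

g1 = [(0, 1), (1, 2), (2, 3)]          # the path 0-1-2-3
g2 = [(2, 0), (0, 3), (3, 1)]          # the same path, relabelled
perm = {0: 2, 1: 0, 2: 3, 3: 1}        # the certificate
print(isomorphic_under(perm, g1, g2))  # True
```

The complement problem has no such obvious certificate, which is what makes its AM protocol interesting.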

IP

Main article: IP (complexity)

Private coins may not be helpful, but more rounds of interaction are helpful. If we allow the probabilistic verifier machine and the all-powerful prover to interact for a polynomial number of rounds, we get the class of problems called IP. In 1992, Adi Shamir proved, in one of the central results of complexity theory, that IP equals PSPACE, the class of problems solvable by an ordinary deterministic Turing machine in polynomial space.[6]

QIP

Main article: QIP (complexity)

If we allow the elements of the system to use quantum computation, the system is called a quantum interactive proof system, and the corresponding complexity class is called QIP.[7] A series of recent results culminating in a paper published in 2010 is believed to have demonstrated that QIP = PSPACE.[8][9]

Zero knowledge

Main article: Zero-knowledge proof


Not only can interactive proof systems solve problems not believed to be in NP, but under assumptions about the existence of one-way functions, a prover can convince the verifier of the solution without ever giving the verifier information about the solution. This is important when the verifier cannot be trusted with the full solution. At first it seems impossible that the verifier could be convinced that there is a solution when the verifier has not seen a certificate, but such proofs, known as zero-knowledge proofs, are in fact believed to exist for all problems in NP and are valuable in cryptography. Zero-knowledge proofs were first mentioned in the original 1985 paper on IP by Goldwasser, Micali and Rackoff, but the extent of their power was shown by Oded Goldreich, Silvio Micali and Avi Wigderson.[5]

MIP

One goal of IP's designers was to create the most powerful possible interactive proof system, and at first it seems like it cannot be made more powerful without making the verifier more powerful and so impractical. Goldwasser et al. overcame this in their 1988 paper "Multi prover interactive proofs: How to remove intractability assumptions", which defines a variant of IP called MIP in which there are two independent provers.[10] The two provers cannot communicate once the verifier has begun sending messages to them. Just as it's easier to tell if a criminal is lying if he and his partner are interrogated in separate rooms, it's considerably easier to detect a malicious prover trying to trick the verifier into accepting a string not in the language if there is another prover it can double-check with.

In fact, this is so helpful that Babai, Fortnow, and Lund were able to show that MIP = NEXPTIME, the class of all problems solvable by a nondeterministic machine in exponential time, a very large class.[11] NEXPTIME contains PSPACE, and is believed to strictly contain PSPACE. Adding a constant number of additional provers beyond two does not enable recognition of any more languages. This result paved the way for the celebrated PCP theorem, which can be considered a "scaled-down" version of this theorem.


MIP also has the helpful property that zero-knowledge proofs for every language in NP can be described without the assumption of one-way functions that IP must make. This has bearing on the design of provably unbreakable cryptographic algorithms.[10] Moreover, a MIP protocol can recognize all languages in IP in only a constant number of rounds, and if a third prover is added, it can recognize all languages in NEXPTIME in a constant number of rounds, showing again its power over IP.

PCP

Main article: Probabilistically checkable proof

While the designers of IP considered generalizations of Babai's interactive proof systems, others considered restrictions. A very useful interactive proof system is PCP(f(n), g(n)), which is a restriction of MA where Arthur can only use f(n) random bits and can only examine g(n) bits of the proof certificate sent by Merlin (essentially using random access).

There are a number of easy-to-prove results about various PCP classes. PCP(0, poly), the class of polynomial-time machines with no randomness but access to a certificate, is just NP. PCP(poly, 0), the class of polynomial-time machines with access to polynomially many random bits, is co-RP. Arora and Safra's first major result was that PCP(log, log) = NP; put another way, if the verifier in the NP protocol is constrained to choose only O(log n) bits of the proof certificate to look at, this won't make any difference as long as it has O(log n) random bits to use.[12]

Furthermore, the PCP theorem asserts that the number of proof accesses can be brought all the way down to a constant. That is, NP = PCP(log, O(1)).[13] They used this valuable characterization of NP to prove that approximation algorithms do not exist for the optimization versions of certain NP-complete problems unless P = NP. Such problems are now studied in the field known as hardness of approximation.

Secure multi-party computation


Secure multi-party computation (also known as secure computation or multi-party computation/MPC) is a subfield of cryptography with the goal of creating methods for parties to jointly compute a function over their inputs while keeping those inputs private.

Contents

1 Definition and Overview

2 Security Definitions

3 History

4 Protocols used

4.1 Two Party Computation

4.2 Multiparty Protocols

4.3 Other Protocols

5 Scalable MPC

6 Practical MPC Systems

6.1 Yao Based Protocols

7 See also

8 References


9 External links

Definition and Overview

In an MPC, a given number of participants p1, p2, ..., pN each have private data, respectively d1, d2, ..., dN. Participants want to compute the value of a public function F on N variables at the point (d1, d2, ..., dN).

The basic scenario is that a group of parties wish to compute a given function on their private inputs. For example, suppose we have three parties Alice, Bob and Charlie, with respective inputs x, y and z. They want to compute the value of the function

F(x,y,z) = max(x,y,z)

To do so the parties engage in a protocol, exchanging messages, and thus obtain the output of the desired function. The goal is that the output of the protocol is just the value of the function, and nothing else is revealed. In particular, all that the parties can learn is what they can learn from the output and their own input. So in the above example, if the output is z, then Charlie learns that his z is the maximum value, whereas Alice and Bob learn (if x, y and z are distinct) that their input is not equal to the maximum, and that the maximum is equal to z. The basic scenario can be easily generalised to where the parties have several inputs and outputs, and the function outputs different values to different parties.

Informally speaking, the most basic properties that a multi-party computation protocol aims to ensure are:

Input privacy: The information derived from the execution of the protocol should not allow any inference of the private data held by the parties, except for what is revealed by the output of the function.

Correctness: Any proper subset of adversarial colluding parties willing to share information or deviate from the instructions during the protocol execution should not be able to force honest parties to output an incorrect result. This comes in two flavours: either the honest parties are guaranteed to compute the correct output (a “robust” protocol), or they abort if they find an error (an MPC protocol “with abort”).

There is a wide range of practical applications, varying from simple tasks such as coin tossing to more complex ones like electronic auctions (e.g. compute the market clearing price), electronic voting, or privacy-preserving data mining. A classical example is the Millionaire's Problem: two millionaires want to know who is richer, in such a way that neither of them learns the net worth of the other. A solution to this situation is essentially to securely evaluate the comparison function.

Security Definitions

A key question to ask is: when is such a multiparty computation protocol secure? In modern cryptography, a protocol can only be deemed secure if it comes equipped with a "security proof". This is a mathematical proof that the security of the protocol reduces to the security of the underlying primitives. But this means we need a definition of what it means for a protocol to be secure. This is hard to formalize in the case of MPC, since we cannot say that the parties should "learn nothing": they need to learn the output, and this depends on the inputs. In addition, we cannot just say that the output must be "correct", since the correct output depends on the parties' inputs, and we do not know what inputs corrupted parties will use. A formal mathematical definition of security for MPC protocols follows the ideal-real-world paradigm, described below.

The ideal-real-world paradigm imagines two worlds. In the ideal world, there exists an incorruptible trusted party who helps the parties compute the function. Specifically, the parties privately send their inputs to the trusted party, who computes the function on his own and then sends back the appropriate output to each party. For example, going back to Alice, Bob and Charlie wishing to compute F(x,y,z): they send x, y and z securely to the trusted third party (i.e. Alice sends an encrypted message to the third party containing x, and so on). The trusted third party computes F(x,y,z) and returns the value to the three players. In contrast, in the real-world model, there is no trusted party and all the parties can do is exchange messages with each other. The definition of security then states that an MPC protocol is secure if a real-world protocol “behaves” like an ideal-world one.
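The ideal world is simple enough to state directly in code. A sketch of the trusted-party functionality for the running max example (the input values are hypothetical):

```python
def trusted_party(private_inputs, f):
    # Ideal world: an incorruptible third party receives every private
    # input, computes F by itself, and hands back only the output.
    return f(*private_inputs.values())

inputs = {"Alice": 12, "Bob": 30, "Charlie": 47}
print(trusted_party(inputs, max))  # 47: each party learns only the maximum
```

A real-world protocol is then deemed secure if whatever an adversary can learn or cause in it could already be learned or caused in this ideal interaction.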

We stress that the ideal-real-world paradigm provides a simple abstraction of the complexities of MPC that is of great use to anyone using an MPC protocol. Namely, it suffices to construct an application under the pretence that the MPC protocol at its core is actually an ideal execution. If the application is secure in this case, then it is also secure when a real protocol is run instead.

The security requirements on an MPC protocol are so stringent that it may seem they are rarely possible to achieve. Surprisingly, in the late 1980s it was already shown that any function can be securely computed, with security for malicious adversaries.[1][2] This is encouraging news, but it took a long time until MPC became efficient enough to be used in practice. Unconditionally or information-theoretically secure MPC is closely related to the problem of secret sharing, and more specifically verifiable secret sharing (VSS), which many secure MPC protocols that protect against active adversaries use.

Unlike in traditional cryptographic applications, such as encryption or signatures, the adversary in an MPC protocol can be one of the players engaged in the protocol. In fact, in MPC we assume that corrupted parties may collude in order to breach the security of the protocol. If the number of parties in the protocol is n, then the number of parties who can be adversarial is usually denoted by t. The protocols and solutions for the case of t < n/2 (i.e., when an honest majority is assumed) are very different from those where no such assumption is made. This latter case includes the important case of two-party computation where one of the participants may be corrupted, and the general case where an unlimited number of participants are corrupted and collude to attack the honest participants.

Page 919: Tugas tik di kelas xi ips 3

Different protocols can deal with different adversarial powers. We can categorize adversaries according to how willing they are to deviate from the protocol. There are essentially two types of adversaries, each giving rise to different forms of security:

Semi-Honest (Passive) Security: In this case, we assume that corrupted parties merely cooperate to gather information out of the protocol, but do not deviate from the protocol specification. This is a naive adversary model, yielding weak security in real situations. However, protocols achieving this level of security prevent inadvertent leakage of information between parties, and are thus useful if this is the only concern. In addition, protocols in the semi-honest model are very efficient, and are often an important first step for achieving higher levels of security.

Malicious (Active) Security: In this case, the adversary may arbitrarily deviate from the protocol execution in its attempt to cheat. Protocols that achieve security in this model provide a very high security guarantee. The only thing that an adversary can do in the case of a dishonest majority is cause the honest parties to “abort” after detecting cheating. If the honest parties do obtain output, then they are guaranteed that it is correct. Of course, their privacy is always preserved.

Since security against active adversaries often comes at the cost of reduced efficiency, one is led to consider a relaxed form of active security called covert security, proposed by Aumann and Lindell.[3] Covert security captures more realistic situations, where active adversaries are willing to cheat but only if they are not caught. For example, their reputation could be damaged, preventing future collaboration with other honest parties. Thus, protocols that are covertly secure provide mechanisms to ensure that, if some of the parties do not follow the instructions, then it will be noticed with high probability, say 75% or 90%. In a way, covert adversaries are active ones forced to act passively due to external non-cryptographic (e.g. business) concerns. This builds a bridge between the two models in the hope of finding protocols that are efficient yet secure enough for practice.


Like many cryptographic protocols, the security of an MPC protocol can rely on different assumptions:

It can be computational (i.e. based on some mathematical problem, like factoring) or unconditional (usually with some probability of error which can be made arbitrarily small).

The model might assume that participants use a synchronized network, where a message sent at a "tick" always arrives at the next "tick", or that a secure and reliable broadcast channel exists, or that a secure communication channel exists between every pair of participants where an adversary cannot read, modify or generate messages in the channel, etc.

The set of honest parties that can execute a computational task is related to the concept of access structure. "Adversary structures" can be static, i.e. the adversary chooses its victims before the start of the multi-party computation, or dynamic, i.e. it chooses its victims during the course of execution of the multiparty computation. Attaining security against a dynamic adversary is often much harder than security against a static adversary. An adversary structure can be defined as a "threshold structure", meaning that it can corrupt or simply read the memory of a number of participants up to some threshold, or as a more complex structure, where it can affect certain predefined subsets of participants, modeling different possible collusions.

History

Secure computation was formally introduced as secure two-party computation (2PC) in 1982 by Andrew Yao,[4] the first recipient of the Knuth Prize. It is also referred to as secure function evaluation (SFE), and is concerned with the question: 'Can two-party computation be achieved more efficiently and under weaker security assumptions than general MPC?'[citation needed]. The solution to the millionaire problem gave way to a generalization to multi-party protocols.[1][2]

Increasingly efficient protocols for MPC have been proposed, and MPC can now be used as a practical solution to various real-life problems such as distributed voting, private bidding and auctions, sharing of signature or decryption functions and private information retrieval.[5] The first large-scale and practical application of multiparty computation took place in Denmark in January 2008.[6]

Protocols used

There are major differences between the protocols proposed for two party computation (2PC) and multiparty computation (MPC).

Two Party Computation

The two party setting is particularly interesting, not only from an applications perspective but also because special techniques can be applied in the two party setting which do not apply in the multi-party case. Indeed, secure multi-party computation (in fact the restricted case of secure function evaluation, where only a single function is evaluated) was first presented in the two-party setting. The original work is often cited as being from one of the two papers of Yao[citation needed], although the papers do not actually contain what is now known as Yao's protocol.

Yao’s basic protocol is secure against semi-honest adversaries and is extremely efficient in terms of number of rounds, which is constant and independent of the target function being evaluated. The function is viewed as a Boolean circuit, with inputs in binary of fixed length. A Boolean circuit is a collection of gates connected with three different types of wires: circuit-input wires, circuit-output wires and intermediate wires. Each gate receives two input wires and has a single output wire, which may have fan-out (i.e. be passed to multiple gates at the next level). Plain evaluation of the circuit is done by evaluating each gate in turn, assuming the gates have been lexicographically ordered. The gate is represented as a truth table such that for each possible pair of bits (those coming from the gate's input wires) the table assigns a unique output bit, which is the value of the output wire of the gate. The results of the evaluation are the bits obtained in the circuit-output wires.
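Plain (ungarbled) evaluation of such a circuit is just a walk over the gates in order. A sketch, with a small hypothetical circuit:

```python
from operator import and_, or_, xor

# A circuit as an ordered list of gates:
# (output_wire, gate_operation, input_wire_a, input_wire_b).
GATES = [
    ("w2", and_, "x", "y"),
    ("w3", xor, "x", "y"),
    ("out", or_, "w2", "w3"),
]

def evaluate_circuit(circuit, inputs):
    # Evaluate each gate in turn, recording the bit on its output wire;
    # the circuit-output wires then hold the result.
    wires = dict(inputs)
    for out_wire, op, a, b in circuit:
        wires[out_wire] = op(wires[a], wires[b])
    return wires

print(evaluate_circuit(GATES, {"x": 1, "y": 0})["out"])  # 1
```

Garbling, described next, hides exactly the per-gate truth tables that this plain evaluation reads openly.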

Yao explained how to garble a circuit (hide its structure) so that two parties, sender and receiver, can learn the output of the circuit and nothing else. At a high level, the sender prepares the garbled circuit and sends it to the receiver, who obliviously evaluates the circuit, learning the encodings corresponding to both his and the sender's outputs. He then sends back the sender's encodings, allowing the sender to compute his part of the output. The sender sends the mapping from the receiver's output encodings to bits to the receiver, allowing the receiver to obtain their output.

In more detail, the garbled circuit is computed as follows. The main ingredient is a double-keyed symmetric encryption scheme. Given a gate of the circuit, each possible value of its input wires (either 0 or 1) is encoded with a random number (label). The values resulting from the evaluation of the gate at each of the four possible pairs of input bits are also replaced with random labels. The garbled truth table of the gate consists of encryptions of each output label using its input labels as keys. The position of these four encryptions in the truth table is randomized so no information on the gate is leaked.

To correctly evaluate each garbled gate the encryption scheme has the following two properties. Firstly, the ranges of the encryption function under any two distinct keys are disjoint (with overwhelming probability). The second property says that it can be checked efficiently whether a given ciphertext has been encrypted under a given key. With these two properties the receiver, after obtaining the labels for all circuit-input wires, can evaluate each gate by first finding out which of the four ciphertexts has been encrypted with his label keys, and then decrypting to obtain the label of the output wire. This is done obliviously, as all the receiver learns during the evaluation are encodings of the bits.
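A toy version of a single garbled AND gate can illustrate both properties. This sketch uses SHA-256 as a stand-in for the double-keyed cipher and a run of trailing zero bytes as the "decrypts correctly" check; a real implementation would use a proper encryption scheme:

```python
import hashlib
import os
import random

def H(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def garble_and_gate():
    # One random 16-byte label per possible value (0/1) of each wire:
    # inputs "a", "b" and output "c".
    labels = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}
    table = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            out_label = labels["c"][bit_a & bit_b]
            pad = H(labels["a"][bit_a], labels["b"][bit_b])
            # Append 16 zero bytes before encrypting so the evaluator
            # can recognise the one row its keys decrypt (property two).
            table.append(bytes(x ^ y for x, y in
                               zip(out_label + b"\x00" * 16, pad)))
    random.shuffle(table)  # hide which row encodes which input pair
    return labels, table

def eval_garbled(table, label_a, label_b):
    pad = H(label_a, label_b)
    for ct in table:
        pt = bytes(x ^ y for x, y in zip(ct, pad))
        if pt.endswith(b"\x00" * 16):   # only our row decrypts cleanly
            return pt[:16]              # the output wire's label
    raise ValueError("no row decrypted")

labels, table = garble_and_gate()
out = eval_garbled(table, labels["a"][1], labels["b"][1])
print(out == labels["c"][1])  # True: evaluating AND(1, 1)
```

The evaluator learns only an opaque label for the output wire, never the bit it encodes, which is exactly the obliviousness described above.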

The sender's (i.e. the circuit creator's) input bits can simply be sent as encodings to the evaluator, whereas the receiver's (i.e. the circuit evaluator's) encodings corresponding to his input bits are obtained via a 1-out-of-2 Oblivious Transfer (OT) protocol. A 1-out-of-2 OT protocol enables the sender, in possession of two values C1 and C2, to send the one requested by the receiver (b, a value in {1,2}) in such a way that the sender does not know which value has been transferred, and the receiver only learns the queried value.


If one is considering malicious adversaries, further mechanisms to ensure correct behaviour of both parties need to be provided. By construction it is easy to show security for the sender, as all the receiver can do is evaluate a garbled circuit that would fail to reach the circuit-output wires if he deviated from the instructions. The situation is very different on the sender's side. For example, he may send an incorrect garbled circuit that computes a function revealing the receiver's input. This would mean that privacy no longer holds, but since the circuit is garbled the receiver would not be able to detect this.

Multiparty Protocols

Most MPC protocols, as opposed to 2PC protocols, make use of secret sharing. In the secret sharing based methods, the parties do not play special roles (as in Yao, of creator and evaluator). Instead, the data associated with each wire is shared amongst the parties, and a protocol is then used to evaluate each gate. The function is now defined as a “circuit” over GF(p), as opposed to the binary circuits used for Yao. Such a circuit is called an arithmetic circuit in the literature, and it consists of addition and multiplication “gates” where the values operated on are defined over GF(p).

Secret sharing allows one to distribute a secret among a number of parties by distributing shares to each party. Three types of secret sharing schemes are commonly used: Shamir Secret Sharing, Replicated Secret Sharing and Additive Secret Sharing. In all three cases the shares are random elements of GF(p) that add up to the secret in GF(p); intuitively, security holds because the shares of any non-qualifying set look randomly distributed. All three secret sharing schemes are linear, so the sum of two shared secrets, or the multiplication of a secret by a public constant, can be done locally. Thus linear functions can be evaluated for free.
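Additive secret sharing and its linearity are easy to sketch. Everything below is a minimal illustration over one fixed prime field:

```python
import random

P = 2**61 - 1  # a Mersenne prime, standing in for GF(p)

def share(secret: int, n: int) -> list:
    # n - 1 uniformly random shares, plus one chosen so that all
    # n shares sum to the secret modulo p.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % P

a, b = 1234, 5678
sa, sb = share(a, 3), share(b, 3)
# Linearity: each party adds its two shares locally; no interaction
# is needed to obtain shares of a + b.
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
print(reconstruct(sum_shares))  # 6912
```

Any proper subset of the shares is uniformly random, so it reveals nothing about the secret; only all n together reconstruct it.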

Replicated Secret Sharing schemes are usually associated with passively secure MPC systems consisting of three parties, of which at most one can be adversarial, such as used in the Sharemind system. MPC systems based on Shamir Secret Sharing are generally associated with systems which can tolerate up to t adversaries out of n, so-called threshold systems. In the case of information-theoretic protocols, actively secure protocols can be realised with Shamir Secret Sharing if t < n/3, whilst passively secure ones are available if t < n/2. In the case of computationally secure protocols one can tolerate a threshold of t < n/2 for actively secure protocols. A practical system adopting this approach is the VIFF framework. Additive secret sharing is used when one wants to tolerate a dishonest majority, i.e. t < n, in which case we can only obtain MPC protocols "with abort"; this latter type is typified by the SPDZ[7] and the (multi-party variant of the) TinyOT[8] protocols.

Other Protocols

Virtual Party Protocol is a protocol which uses virtual parties and complex mathematics to hide the identity of the parties.[9]

Secure sum protocols allow multiple cooperating parties to compute the sum of their individual data without revealing the data to one another.[10]
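A classic ring-based secure sum can be sketched in a few lines. This version assumes semi-honest, non-colluding parties; the initiator's random mask makes every intermediate running total a party sees look uniformly random:

```python
import random

def secure_sum(private_values, modulus=2**32):
    # The initiator starts the ring with a random mask r; each party in
    # turn adds its private value modulo M to the message travelling
    # around the ring. The initiator finally removes r, learning only
    # the sum of all inputs.
    r = random.randrange(modulus)
    running_total = r
    for v in private_values:
        running_total = (running_total + v) % modulus
    return (running_total - r) % modulus

print(secure_sum([10, 20, 30]))  # 60
```

Note that two colluding neighbours of a party can subtract their views and recover its input, which is why the non-collusion assumption matters here.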

In 2014 a "model of fairness in secure computation in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty" was described for the Bitcoin network and for fair lotteries.[11]

Scalable MPC

Recently, several multi-party computation techniques have been proposed targeting resource efficiency (in terms of bandwidth, computation, and latency) for large networks. Although much theoretical progress has been made to achieve scalability, practical progress is slower. In particular, most known schemes suffer from either poor or unknown communication and computation costs in practice.[12]

Practical MPC Systems

Many advances have been made on 2PC and MPC systems in recent years.

Yao Based Protocols


One of the main issues when working with Yao-based protocols is that the function to be securely evaluated (which could be an arbitrary program) must be represented as a circuit, usually consisting of XOR and AND gates. Since most real-world programs contain loops and complex data structures, this is a highly non-trivial task. The Fairplay system[13] was the first tool designed to tackle this problem. Fairplay comprises two main components. The first of these is a compiler enabling users to write programs in a simple high-level language, and output these programs in a Boolean circuit representation. The second component can then garble the circuit and execute a protocol to securely evaluate the garbled circuit. As well as two-party computation based on Yao's protocol, Fairplay can also carry out multi-party protocols. This is done using the BMR protocol,[13] which extends Yao's passively secure protocol to the active case.

In the years following the introduction of Fairplay, many improvements to Yao's basic protocol have been created, in the form of both efficiency improvements and techniques for active security. These include techniques such as the free XOR method, which allows for much simpler evaluation of XOR gates, and garbled row reduction, reducing the size of garbled tables with two inputs by 25%.[14]

The approach that so far seems to be the most fruitful in obtaining active security comes from a combination of the garbling technique and the “cut-and-choose” paradigm. This combination seems to render more efficient constructions. To avoid the aforementioned problems with respect to dishonest behaviour, many garblings of the same circuit are sent from the constructor to the evaluator. Then around half of them (depending on the specific protocol) are opened to check consistency, and if the check passes, a vast majority of the unopened ones are correct with high probability. The output is the majority vote of all the evaluations. Note that here the majority output is needed. If there is disagreement on the outputs the receiver knows the sender is cheating, but he cannot complain, as otherwise this would leak information on his input.

This approach for active security was initiated by Lindell and Pinkas.[15] The technique was implemented by Pinkas et al. in 2009,[14] providing the first actively secure two-party evaluation of the Advanced Encryption Standard (AES) circuit, regarded as a highly complex (consisting of around 30,000 AND and XOR gates), non-trivial function (also with some potential applications), taking around 20 minutes to compute and requiring 160 circuits to obtain a 2^-40 cheating probability.

As many circuits are evaluated, the parties (including the receiver) need to commit to their inputs to ensure that in all the iterations the same values are used. The reported experiments of Pinkas et al.[14] show that the bottleneck of the protocol lies in the consistency checks. They had to send over the net about 6,553,600 commitments to various values to evaluate the AES circuit. In recent results[16] the efficiency of actively secure Yao-based implementations was improved even further, requiring only 40 circuits, and far fewer commitments, to obtain a 2^-40 cheating probability. The improvements come from new methodologies for performing cut-and-choose on the transmitted circuits.

More recently, there has been a focus on highly parallel implementations based on garbled circuits, designed to be run on Central Processing Units (CPUs) with many cores. Kreuter et al.[17] describe an implementation running on 512 cores of a powerful cluster computer. Using these resources they could evaluate the 4095-bit edit distance function, whose circuit comprises almost 6 billion gates. To accomplish this they developed a custom circuit compiler, better optimized than Fairplay, and several new optimizations such as pipelining, whereby transmission of the garbled circuit across the network begins while the rest of the circuit is still being generated. The time to compute AES was reduced to 1.4 seconds per block in the active case, using a 512-node cluster machine, and 115 seconds using one node. shelat and Shen[18] improve this, using commodity hardware, to 0.52 seconds per block. The same paper reports a throughput of 21 blocks per second, but with a latency of 48 seconds per block.

Meanwhile, another group of researchers has investigated using consumer-grade Graphics Processing Units (GPUs) to achieve similar levels of parallelism.[19] They utilize OT extensions and some other novel techniques to design their GPU-specific protocol. This approach seems to achieve comparable efficiency to the cluster computing implementation, using a similar number of cores. However, the authors only report on an implementation of the AES circuit, which has around 50,000 gates. On the other hand, the hardware required here is far more accessible, as similar devices may already be found in many people's desktop computers or games consoles. The authors obtain a timing of 2.7 seconds per AES block on a standard desktop, with a standard GPU. If they allow security to decrease to something akin to covert security, they obtain a run time of 0.30 seconds per AES block. It should be noted that in the passive security case there are reports of processing circuits with 250 million gates, at a rate of 75 million gates per second.[20]

Classic cryptography

Reconstructed ancient Greek scytale, an early cipher device

The earliest forms of secret writing required little more than writing implements, since most people could not read. Greater literacy, or literate opponents, required actual cryptography. The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet). Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter some fixed number of positions further down the alphabet. Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (ca. 1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information.
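The shift-cipher idea described above is simple enough to sketch in a few lines of Python (an illustrative sketch; the function name and the choice to leave non-letters untouched are our own conventions):

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the 26-letter alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # spaces and punctuation pass through unchanged
    return ''.join(out)

print(caesar("fly at once", 1))             # the shift-by-one example: 'gmz bu podf'
print(caesar(caesar("attack at dawn", 3), -3))  # a negative shift inverts Caesar's shift of three
```

Decryption is just encryption with the opposite shift, which is why such a cipher offers essentially no security: there are only 25 nontrivial keys to try.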

The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military).[12] Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, from Herodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair.[8] More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.

In India, the 2000-year-old Kamasutra of Vātsyāyana speaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of paired letters, each substituted for its reciprocal.[8]

First page of a book by Al-Kindi which discusses encryption of messages

Ciphertexts produced by a classical cipher (and some modern ciphers) always reveal statistical information about the plaintext, which can often be used to break them. After the discovery of frequency analysis, perhaps by the Arab mathematician and polymath Al-Kindi (also known as Alkindus) in the 9th century,[13] nearly all such ciphers became more or less readily breakable by any informed attacker. Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering of Cryptographic Messages), which described the first known use of frequency analysis as a cryptanalysis technique.[13][14]
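The core of frequency analysis is nothing more than counting letters: because a simple substitution cipher maps each plaintext letter to a fixed ciphertext letter, the plaintext's letter-frequency profile survives in the ciphertext. A minimal sketch in Python:

```python
from collections import Counter

def letter_frequencies(ciphertext: str) -> list[tuple[str, int]]:
    """Count letter occurrences, most common first. Under a simple substitution
    cipher, the most frequent ciphertext letters tend to stand for frequent
    plaintext letters such as E, T, and A."""
    counts = Counter(c for c in ciphertext.lower() if c.isalpha())
    return counts.most_common()

# 'hello world' Caesar-shifted by 3 is 'khoor zruog'; the triple 'o'
# (standing for plaintext 'l') immediately stands out.
print(letter_frequencies("khoor zruog"))
```

On realistic message lengths, comparing these counts against a language's known letter-frequency table quickly suggests the substitution, which is why classical ciphers fall to this technique.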

16th-century book-shaped French cipher machine, with arms of Henri II of France

Enciphered letter from Gabriel de Luetz d'Aramon, French Ambassador to the Ottoman Empire, after 1546, with partial decipherment

Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi.[14] Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel which implemented a partial realization of his invention. In the polyalphabetic Vigenère cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski.[15]
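The polyalphabetic idea described above can be sketched for the Vigenère cipher: each key letter selects which shifted alphabet encrypts the corresponding plaintext letter (an illustrative sketch; the function signature is our own):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Polyalphabetic substitution: the i-th key letter selects the Caesar
    alphabet used for the i-th letter of the text."""
    out, ki = [], 0
    for ch in text:
        if ch.isalpha():
            shift = ord(key[ki % len(key)].lower()) - ord('a')
            if decrypt:
                shift = -shift  # decryption shifts in the opposite direction
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
            ki += 1
        else:
            out.append(ch)
    return ''.join(out)

ct = vigenere("attackatdawn", "lemon")
print(ct)                                   # 'lxfopvefrnhr'
print(vigenere(ct, "lemon", decrypt=True))  # recovers 'attackatdawn'
```

Because several alphabets are used in rotation, single-letter frequency counts are flattened; Kasiski examination defeats this by locating repeated ciphertext fragments to deduce the key length, after which each position reduces to a simple Caesar cipher.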

Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is neither a sensible nor a practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Secrecy of the key alone should be sufficient for a good cipher to maintain confidentiality under attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim: 'the enemy knows the system'.

Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher (see image above). In medieval times, other aids were invented, such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's multi-cylinder (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several were patented, among them rotor machines, most famously the Enigma machine used by the German government and military from the late 1920s and during World War II.[16] The ciphers implemented by better-quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.[17]

Computer era

Cryptanalysis of the new mechanical devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitious tasks. This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine.

Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers, which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible.

Extensive open academic research into cryptography is relatively recent; it began only in the mid-1970s. In recent times, IBM personnel designed the algorithm that became the Federal (i.e., US) Data Encryption Standard; Whitfield Diffie and Martin Hellman published their key agreement algorithm;[18] and the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally. Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure; the one-time pad is one. There are a few important ones that are proven secure under certain unproven assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA and some other systems are secure, but even there, the proof is usually lost due to practical considerations. There are systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is impossible, but the more practical system RSA has never been proved secure in this sense. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the discrete log problem.[19]
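The connection between RSA and factoring can be made concrete with a textbook-sized toy example. This is purely illustrative: the primes here are tiny, and real RSA uses primes hundreds of digits long plus padding schemes omitted entirely here.

```python
# Toy RSA with the classic textbook primes p=61, q=53.
p, q = 61, 53
n = p * q                  # public modulus (3233); security rests on factoring n
phi = (p - 1) * (q - 1)    # Euler totient of n
e = 17                     # public exponent, chosen coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse, e*d = 1 (mod phi)

m = 42                     # a message encoded as an integer < n
c = pow(m, e, n)           # encrypt with the public key (m^e mod n)
assert pow(c, d, n) == m   # decrypt with the private key (c^d mod n)
```

Anyone who can factor n recovers p and q, hence phi and d; the unproven assumption is that no efficient factoring method exists.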

As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so recommended key lengths advance accordingly.[20] The potential effects of quantum computing are already being considered by some cryptographic system designers; the announced imminence of small implementations of these machines may be making the need for this preemptive caution rather more than merely speculative.[4]

Essentially, prior to the early 20th century, cryptography was chiefly concerned with linguistic and lexicographic patterns. Since then the emphasis has shifted, and cryptography now makes extensive use of mathematics, including aspects of information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics generally. Cryptography is also a branch of engineering, but an unusual one, since it deals with active, intelligent, and malevolent opposition (see cryptographic engineering and security engineering); other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics (see quantum cryptography and quantum computer).

Modern cryptography

The modern field of cryptography can be divided into several areas of study. The chief ones are discussed here; see Topics in Cryptography for more.

Symmetric-key cryptography

Main article: Symmetric-key algorithm

Symmetric-key cryptography, where a single key is used for encryption and decryption

Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.[18]

One round (out of 8.5) of the IDEA cipher, used in some versions of PGP for high-speed encryption of, for instance, e-mail

Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext, as opposed to the individual characters used as input by a stream cipher.

The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs that have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted).[21] Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption[22] to e-mail privacy[23] and secure remote access.[24] Many other block ciphers have been designed and released, with considerable variation in quality. Many have been thoroughly broken, such as FEAL.[4][25]

Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used stream cipher; see Category:Stream ciphers.[4] Block ciphers can be used as stream ciphers; see Block cipher modes of operation.
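The stream-cipher pattern can be sketched with a hash-derived keystream XORed into the data. This is a sketch of the structure only, not a vetted cipher such as RC4 or ChaCha20; the keystream construction here is our own illustration built from SHA-256 in counter mode.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand a hidden internal state (key + nonce + counter) into key material.
    Illustrative only -- real systems use a standardized stream cipher."""
    out, counter = b'', 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]

def stream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Combine data with the keystream bit-by-bit, like a one-time pad."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
ct = stream_xor(b"secret key", b"nonce-1", msg)
assert stream_xor(b"secret key", b"nonce-1", ct) == msg  # XOR is its own inverse
```

Because encryption and decryption are the same XOR operation, reusing a key/nonce pair leaks the XOR of two plaintexts, which is why the keystream must never repeat.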

Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input, and output a short, fixed-length hash, which can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash. MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but it isn't yet widely deployed; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit."[26] Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when NIST announced that Keccak would be the new SHA-3 hash algorithm.[27]
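The fixed-length-output property is easy to observe with SHA-256 (a SHA-2 family member) from Python's standard library:

```python
import hashlib

# Any-length input, fixed-length output; changing the message by one character
# produces a completely different digest.
h1 = hashlib.sha256(b"fly at once").hexdigest()
h2 = hashlib.sha256(b"fly at once!").hexdigest()

print(len(h1))    # 64 hex characters = 256 bits, regardless of input size
print(h1 == h2)   # False: the two digests share no useful resemblance
```

Collision resistance means an attacker cannot feasibly find any two distinct messages with the same digest; for a 256-bit hash, generic attacks need on the order of 2^128 work.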

Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt;[4] this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort.
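The MAC idea (a keyed hash that only key holders can produce or verify) is available in Python's standard library as HMAC. The key and message below are illustrative placeholders:

```python
import hmac
import hashlib

key = b"shared secret"
msg = b"wire $100 to account 42"

# The sender computes a tag over the message with the shared key.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# The receiver, holding the same key, recomputes the tag and compares in
# constant time; without the key, a valid tag cannot be forged.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
print(ok)  # True
```

This is exactly the "additional complication" the text refers to: a bare digest of the message could be recomputed by anyone, but the secret key binds the tag to the communicating parties.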

Public-key cryptography

Main article: Public-key cryptography

Public-key cryptography, where different keys are used for encryption and decryption

Symmetric-key cryptosystems use the same key for encryption and decryption of a message, though a message or group of messages may have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps one for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret. The difficulty of securely establishing a secret key between two communicating parties, when a secure channel does not already exist between them, also presents a chicken-and-egg problem which is a considerable practical obstacle for cryptography users in the real world.
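The quadratic growth claim is just the count of distinct pairs, n(n-1)/2, which a one-line calculation makes vivid:

```python
def pairwise_keys(n: int) -> int:
    """Distinct symmetric keys needed if every pair of n parties shares its own key."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_keys(n))   # 45, 4950, 499500 -- roughly n^2 / 2
```

A thousand-member network already needs half a million keys, each of which must be distributed and kept secret; public-key cryptography reduces this to one key pair per member.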

Whitfield Diffie and Martin Hellman, authors of the first published paper on public-key cryptography

In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric key) cryptography in which two different but mathematically related keys are used—a public key and a private key.[28] A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair.[29] The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".[30]

In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on a shared encryption key.[18]
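The Diffie–Hellman exchange can be sketched with deliberately small parameters (the 32-bit modulus and generator below are illustrative choices only; real deployments use groups of 2048+ bits or elliptic curves):

```python
import secrets

p = 0xFFFFFFFB   # small prime modulus, for illustration only
g = 5            # generator, also illustrative

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent, never transmitted
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent, never transmitted
A = pow(g, a, p)                   # Alice sends this in the clear
B = pow(g, b, p)                   # Bob sends this in the clear

# Both sides derive the same value g^(ab) mod p from what they hold:
assert pow(B, a, p) == pow(A, b, p)
```

An eavesdropper sees p, g, A, and B, but recovering a from A = g^a mod p is the discrete logarithm problem, which is believed intractable at realistic parameter sizes.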

Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system.

This race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm.[31]

The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Others include the Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques. See Category:Asymmetric-key cryptosystems.

To the surprise of many, a document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments.[32] Reportedly, around 1970, James H. Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks invented a solution that essentially resembles the RSA algorithm.[32][33] And in 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange.[34]

Padlock icon from the Firefox Web browser, which indicates that TLS, a public-key cryptography system, is in use.

Public-key cryptography can also be used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature; both have the characteristic of being easy for a user to produce but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.).[25]
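The two-algorithm structure (sign with the secret key, verify with the public key) can be sketched as "hash then sign" using the toy textbook RSA parameters n=3233, e=17, d=2753. This omits the padding (e.g., RSASSA-PSS) that real signature schemes require, so treat it as a structural sketch only:

```python
import hashlib

n, e, d = 3233, 17, 2753   # toy public key (n, e) and private key (d)

def sign(message: bytes) -> int:
    """Hash the message, then apply the private exponent; only the key holder can."""
    h = int.from_bytes(hashlib.sha256(message).digest(), 'big') % n
    return pow(h, d, n)

def verify(message: bytes, sig: int) -> bool:
    """Anyone with the public key can check the signature against the message."""
    h = int.from_bytes(hashlib.sha256(message).digest(), 'big') % n
    return pow(sig, e, n) == h

s = sign(b"pay 10 coins to Bob")
print(verify(b"pay 10 coins to Bob", s))   # True
print(verify(b"pay 99 coins to Eve", s))   # tampering is detected (the hashes differ)
```

Because the hash is bound into the signature, the signature cannot be 'moved' to a different message: verification recomputes the hash from whatever message is presented.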

Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. More recently, elliptic curve cryptography has developed, a system in which security is based on number-theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast, high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.[4]

Cryptanalysis

Main article: Cryptanalysis

Variants of the Enigma machine, used by Germany's military and civil authorities from the late 1920s through World War II, implemented a complex electro-mechanical polyalphabetic cipher. Breaking and reading of the Enigma cipher at Poland's Cipher Bureau, for 7 years before the war, and subsequent decryption at Bletchley Park, was important to Allied victory.[8]

The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion.

It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message.[35] Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time pad remains the only theoretically unbreakable cipher.
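The exponential dependence of the work factor on key size is easy to quantify: each added key bit doubles the brute-force search. Assuming a (generous, hypothetical) rate of 10^12 trial keys per second:

```python
per_second = 10**12   # hypothetical brute-force rate: one trillion keys/second

for bits in (56, 128):
    seconds = 2**bits / per_second          # time to search the full keyspace
    years = seconds / (365 * 24 * 3600)
    print(f"{bits}-bit key: about {years:.3g} years to exhaust the keyspace")
```

At this rate a 56-bit keyspace (DES-sized) falls in under a day, while a 128-bit keyspace requires on the order of 10^19 years, which is why modern symmetric keys of 128 bits or more are considered beyond any brute-force adversary.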

There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what an attacker knows and what capabilities are available. In a ciphertext-only attack, the cryptanalyst has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, the cryptanalyst has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, the cryptanalyst may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. Finally, in a chosen-ciphertext attack, the cryptanalyst may be able to choose ciphertexts and learn their corresponding plaintexts.[4] Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved; see Cryptanalysis of the Enigma for some historical examples of this).

Poznań monument (center) to Polish cryptologists whose breaking of Germany's Enigma machine ciphers, beginning in 1932, altered the course of World War II

Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts and approximately 2^43 DES operations.[36] This is a considerable improvement over brute force attacks.

Public-key algorithms are based on the computational difficulty of various problems. The most famous of these is integer factorization (e.g., the RSA algorithm is based on a problem related to integer factoring), but the discrete logarithm problem is also important. Much public-key cryptanalysis concerns numerical algorithms for solving these computational problems, or some of them, efficiently (i.e., in a practical time). For instance, the best known algorithms for solving the elliptic curve-based version of the discrete logarithm problem are much more time-consuming than the best known algorithms for factoring, at least for problems of more or less equivalent size. Thus, other things being equal, to achieve an equivalent strength of attack resistance, factoring-based encryption techniques must use larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since the mid-1990s.

While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on the actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or to report an error in a password or PIN character, he may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis[37] and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too-short keys, will make any system vulnerable, regardless of other virtues. And, of course, social engineering and other attacks against the personnel who work with cryptosystems or the messages they handle (e.g., bribery, extortion, blackmail, espionage, torture, ...) may be the most productive attacks of all.

Cryptographic primitives

Much of the theoretical work in cryptography concerns cryptographic primitives (algorithms with basic cryptographic properties) and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.

Cryptosystems

One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. Of course, as the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols.

Some widely known cryptosystems include RSA encryption, the Schnorr signature, El-Gamal encryption, PGP, etc. More complex cryptosystems include electronic cash[38] systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems[39] (like zero-knowledge proofs)[40] and systems for secret sharing.[41][42]

Until recently, most security properties of most cryptosystems were demonstrated using empirical techniques or ad hoc reasoning. More recently, there has been considerable effort to develop formal techniques for establishing the security of cryptosystems; this has been generally called provable security. The general idea of provable security is to give arguments about the computational difficulty needed, for any adversary, to compromise some security aspect of the cryptosystem.

The study of how best to implement and integrate cryptography in software applications is itself a distinct field (see Cryptographic engineering and Security engineering).

Legal issues

See also: Cryptography laws in different nations

Prohibitions

Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.

In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography.[5] Many countries have tight restrictions on its use. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.[43]

In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List.[44] Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.

Export controls

Main article: Export of cryptography

In the 1990s, there were several challenges to US export regulation of cryptography. After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed.[45][46] Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.[47]

In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled.[48] Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000;[49] there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook e-mail client programs can similarly transmit and receive email via TLS, and can send and receive email encrypted with S/MIME. Many Internet users don't realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally don't find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.

NSA involvement

See also: Clipper chip

Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. The NSA was involved with
the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography.[50] DES was designed to be resistant to differential cryptanalysis,[51] a powerful and general cryptanalytic technique known to the NSA and IBM that became publicly known only when it was rediscovered in the late 1980s.[52] According to Steven Levy, IBM discovered differential cryptanalysis,[46] but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.

Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak in order to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement, for example in wiretaps.[46]

Digital rights management

Main article: Digital rights management

Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes.[53] This had a noticeable impact on the cryptography research community, since an argument can be made that any cryptanalytic research violated, or might violate, the DMCA. Similar statutes have
since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states.

The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA.[54] Both Alan Cox (longtime number 2 in Linux kernel development) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the MPAA sent out numerous DMCA takedown notices, and there was a massive Internet backlash[7] triggered by the perceived impact of such notices on fair use and free speech.

Forced disclosure of encryption keys

Main article: Key disclosure law

In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security.[6] Successful prosecutions have occurred under the Act; the first, in 2009,[55] resulted in a term of 13 months' imprisonment.[56] Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.

In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can
compel a person to reveal an encryption passphrase or password.[57] The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment.[58] In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court.[59]

In many jurisdictions, the legal status of forced disclosure remains unclear.

However, to avoid a public relations disaster, Microsoft re-issued its MSN Music shutdown statement on June 19 and allowed users to use their licenses until the end of 2011:

After careful consideration, Microsoft has decided to continue to support the authorization of new computers and devices and delivery of new license keys for MSN Music customers through at least the end of 2011, after which we will evaluate how much this functionality is still being used and what steps should be taken next to support our customers. This means you will continue to be able to listen to your purchased music and transfer your music to new PCs and devices beyond the previously announced August 31, 2008 date.[134]

Yahoo! Music Store

On July 23, 2008, the Yahoo! Music Store emailed its customers to tell them it would be shutting down effective September 30, 2008, and that the DRM license key servers would be taken offline.[135]

Walmart

In August 2007, Walmart's online music division started offering (DRM-free) MP3s as an option. Starting in February 2008, they made all sales DRM-free.

On September 26, 2008, the Walmart Music Team notified its customers via email that it would be shutting down its DRM servers on October 9, 2008, and that any DRM-encumbered music acquired from them would no longer be accessible unless backed up to recordable CDs (a non-DRM format) before that date.[136]

After bad press and negative reaction from customers, on October 9, 2008, Walmart decided not to take its DRM servers offline.[137]

Fictionwise / Overdrive

In January 2009, OverDrive informed Fictionwise that it would no longer be providing downloads for purchasers of e-books through Fictionwise as of 31 January 2009. No reason was provided to Fictionwise as to why it was being shut down. This prevented previous purchasers from being able to renew their books on new devices.[138] Fictionwise worked to provide replacement ebooks for its customers in alternative, non-DRM formats, but did not have the rights to provide all the books in different formats.[138]

Ads for Adobe PDF

Also in January 2009, Adobe Systems announced that as of March 2009, it would no longer operate the servers that served ads to its PDF reader. Depending on the restriction settings used when the PDF documents were created, they may no longer be readable.[139]

Adobe Content Server 3 for Adobe PDF

In April 2009, Adobe Systems announced that as of March 30, 2009, the Adobe Content Server 3 would no longer activate new installations of Adobe Reader or Adobe Acrobat. In addition, the ability to migrate content from Adobe Content Server 3 to Adobe Content Server 4 would cease from mid-December 2009. Anyone who failed to migrate their DRMed PDF files during this nine-month window lost access to their content the next time they had to re-install their copy of Adobe Reader or Adobe Acrobat.[140]

Harper Collins ebook store

In November 2010, Harper Collins announced that as of November 19, 2010, its eBook Store was discontinued, and advised all customers to download and archive their purchases before December 19, 2010, when purchased titles would no longer be accessible. Customers lost access to their Mobipocket ebooks on new devices.[141]

CyberRead ebook store

In February 2011, CyberRead announced that it was closing down, and advised all customers to download and archive their purchases. Customers lost access to their Mobipocket ebooks on new devices.[142]

Microsoft Reader and .lit ebooks

In August 2011, Microsoft announced it was discontinuing both Microsoft Reader and the use of the .lit format for ebooks at the end of August 2012. The activation servers are now offline, and it is not possible to read DRMed .lit ebooks except on an installation made before the servers were taken down.[143]

Fictionwise

In November 2012, Fictionwise announced that it was shutting down. Access to ebook downloads stopped on 31 January 2013. US and UK customers had a limited time-window (up to 26 April 2013) to opt in to a transfer of (most of) their Fictionwise library to a Barnes & Noble/NOOK UK account. Customers outside the US and UK lost access to new downloads of their books. For books in the secure Mobipocket format, this meant that customers would not be able to read the book on any new devices.[144]

JManga ebook store

In March 2013, JManga announced that they were closing down, and advised all customers that they would lose all access to their content in May 2013.[145]

Waterstones ebook store

In March 2013, Waterstones announced that it was making changes to its eBook store, and advised customers that some of their ebooks would become unavailable for download after 18 April
2013. Waterstones advised affected customers to download before then. Any customer who had not kept their own backups and missed this 31-day window lost their ebooks.[146]

Acetrax Video on Demand

In May 2013, Acetrax announced it was shutting down. Refunds were provided for purchases of HD movies, but for standard definition versions, the only option made available was a limited-time download to a Windows PC, with the movie then locked to that particular installation of Microsoft Media Player. Non-Windows users lost access to their SD movies.[147]

Sony Reader Store

In February 2014, Sony announced that its US ebook store would be closing by the end of March 2014. Accounts were transferred to Kobo, but not all books in Sony accounts could be transferred.[148] In May 2014, Sony announced that its European and Australian ebook stores would be closing on 16 June 2014, with similar arrangements for transfer to Kobo accounts.[149] Customers were advised to download all their ebooks before the closing date, because not all books could be transferred.

Environmental issues

DRM can accelerate hardware obsolescence, turning it into electronic waste sooner:

DRM-related restrictions on the capabilities of hardware can artificially reduce the range of potential uses of a device (to the point of making a device consisting of general-purpose components usable only for a purpose approved by, or with "content" provided by, the vendor), and can limit upgradeability and repairability.[150][151] Cf. proprietary abandonware, orphan works, planned obsolescence. Examples:

DVD region code (applies to discs as well as DVD players and drives);

the removal of the OtherOS feature from Sony PlayStation 3 game consoles;

Tivoization, UEFI Secure Boot and similar.

Users may be forced to buy new devices for compatibility with DRM, for instance through having to upgrade an operating system to one with different hardware requirements.[152]

Moral and legitimacy implications

According to the EFF, "in an effort to attract customers, these music services try to obscure the restrictions they impose on you with clever marketing."[153]

DRM laws are widely flouted: according to the Australia Official Music Chart Survey, copyright infringements from all causes are practised by millions of people.[154]

Relaxing some forms of DRM can be beneficial

Jeff Raikes, ex-president of the Microsoft Business Division, stated: "If they're going to pirate somebody, we want it to be us rather than somebody else".[155] An analogous argument was made in an early paper by Kathleen Conner and Richard Rummelt.[156] A subsequent study of digital rights management for ebooks by Gal Oestreicher-Singer and Arun Sundararajan showed that relaxing some forms of DRM can be beneficial to digital rights holders because thelosses from piracy are outweighed by the increases in value to legalbuyers.[157]

Also, free distribution, even if unauthorized, can be beneficial to small or new content providers by spreading and popularizing content and therefore generating a larger consumer base by sharing and word of mouth. Several musicians have grown to popularity by posting their music videos on sites like YouTube where
the content is free to listen to. This method of putting the product out in the world free of DRM not only generates a greater following but also fuels greater revenue through other merchandise (hats, T-shirts), concert tickets, and of course, more sales of the content to paying consumers.

Can increase piracy

While the main intent of DRM is to prevent unauthorized copies of a product, there are mathematical models that suggest DRM can fail to do its job on multiple levels.[158] Ironically, the biggest failure that can result from DRM is that it has the potential to increase the piracy of a product. This goes against the commonly held belief that DRM always reduces piracy. There also seems to be evidence that DRM will reduce profits.

The driving factor behind why DRM has the potential to increase piracy is the number of restrictions it imposes on a legal buyer. An ideal DRM would be one that imposes zero restrictions on legal buyers while still imposing restrictions on pirates. Even if such an ideal DRM could be created and used, in certain cases it can be shown that removing the DRM would result in less piracy. This is also true when the DRM is not ideal and does impose restrictions on legal buyers. The reason is that, once the DRM is imposed, pirates are able to lift the restrictions it sets. This leads to pirates getting more utility out of the product than legal consumers, and that is what causes the increase in piracy.[citation needed]

The important factor for companies is how all of this affects their profits. As mentioned, removing DRM will increase profits whether the DRM is ideal or not. Removing DRM can make the product cheaper. For the ideal DRM, profits can increase because demand is price-elastic: with fewer people pirating and more people legally buying, more profits are going to be made. For the non-ideal DRM this is also
true, especially when there is a high number of restrictions associated with it.[citation needed]
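The argument above can be sketched as a toy model (illustrative only, not taken from the cited literature): DRM restrictions lower a product's value to legal buyers, while pirates strip them and are unaffected, so with price-elastic demand, removing DRM raises legal sales and profit at the same price.

```python
# Toy model of DRM and profit. All numbers are made up for illustration.

def demand(price: float, value: float) -> float:
    """Linear, price-elastic demand: fraction of consumers who buy at
    `price` when the product is worth `value` to them."""
    return max(0.0, 1.0 - price / value)

def profit(price: float, value: float) -> float:
    return price * demand(price, value)

value_no_drm = 10.0  # value of the unrestricted product to buyers
value_drm = 7.0      # DRM restrictions reduce value to *legal* buyers only
price = 4.0

p_free = profit(price, value_no_drm)  # 4 * (1 - 4/10) = 2.4
p_drm = profit(price, value_drm)      # 4 * (1 - 4/7)  ~ 1.71
print(p_free > p_drm)  # True: at the same price, dropping DRM raises profit
```

The model also captures the piracy argument: since pirates face `value_no_drm` while legal buyers face `value_drm`, DRM widens the utility gap in the pirates' favor.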

The mathematical models are strictly applied to the music industry (music CDs, downloadable music). These models could be extended to other industries, such as the gaming industry, which shows similarities to the music industry model. There are real instances where DRM restrains consumers in the gaming industry. Some DRM-protected games require a connection to the internet in order to play; if one can't connect to the internet, or if the service is down, one can't play.[159] Good Old Games' head of public relations and marketing, Trevor Longino, in agreement with this, believes that using DRM is less effective than improving a game's value in reducing video game piracy.[160] However, TorrentFreak published a "Top 10 pirated games of 2008" list which shows that intrusive DRM is not the main reason why some games are pirated more heavily than others. Popular games such as BioShock, Crysis Warhead, and Mass Effect, which use intrusive DRM, are strangely absent from the list.[24]

Alternatives to DRM

Several business models have been proposed that offer an alternative to the use of DRM by content providers and rights holders.[161]

"Easy and cheap"

The first business model that dissuades illegal file sharing is to make downloading easy and cheap. Using a non-commercial site makes downloading music complex: if someone misspells the artist's name, the search will leave the consumer dissatisfied. Also, some[which?] illegal file-sharing websites expose users to viruses and malware that attach themselves to the files, somewhat in the manner of torrent poisoning.[162] Some sites limit traffic, which can make downloading a song a long and frustrating process. If the songs are all provided on one site, and reasonably
priced, consumers will purchase the music legally to avoid the frustrations that can occur when downloading illegally, such as media files appearing empty or occasionally failing to play smoothly or correctly on players and PCs, which gives a risky feel to the potential "sharer".[161]

Comedian Louis C.K. made headlines in 2011, with the release of his concert film Live at the Beacon Theater as an inexpensive (US$5), DRM-free download. The only attempt to deter piracy was a letter emphasizing the lack of corporate involvement and direct relationship between artist and viewer. The film was a commercial success, turning a profit within 12 hours of its release. Some, including the artist himself, have suggested that piracy rates were lower than normal as a result, making the release an important case study for the digital marketplace.[163][164][165]

Webcomic Diesel Sweeties released a DRM-free PDF ebook on author R Stevens's 35th birthday,[166][167][168] leading to more than 140,000 downloads in the first month, according to Stevens.[169] He followed this with a DRM-free iBook specifically for the iPad, using Apple's new software,[170] which generated more than 10,000 downloads in three days.[171] That led Stevens to launch a Kickstarter project – "ebook stravaganza 3000" – to fund the conversion of 3,000 comics, written over 12 years, into a single "humongous" ebook to be released both for free and through the iBookstore; launched February 8, 2012, with the goal of raising $3,000 in 30 days, the project met its goal in 45 minutes, and went on to be funded at more than 10 times its original goal.[172] The "payment optional" DRM-free model in this case was adopted on Stevens' view that "there is a class of webcomics reader who would prefer to read in large chunks and, even better, would be willing to spend a little money on it."[171]

Crowdfunding or pre-order model

In February 2012, Double Fine sought crowdfunding for an upcoming video game, Double Fine Adventure, on kickstarter.com and
offered the game DRM-free for backers. This project exceeded its original goal of $400,000 in 45 days, raising in excess of $2 million.[173][174] In this case DRM freedom was offered to backers as an incentive for supporting the project before release, with the consumer and community support and media attention from the highly successful Kickstarter drive counterbalancing any loss through piracy.[citation needed] Also, crowdfunding with the product itself as a benefit for the supporters can be seen as a pre-order or subscription business model, in which one motivation for DRM, the uncertainty over whether a product will have enough paying customers to outweigh the development costs, is eliminated. After the success of Double Fine Adventure, many games were crowd-funded and many of them offered a DRM-free game version for the backers.[175][176][177]

Digital content as promotion for traditional products

Many artists are using the Internet to give away music to create awareness of, and interest in, an upcoming album. The artists release a new song on the internet for free download, in the hope that listeners will then buy the new album.[161] A common practice today is releasing a song or two on the internet for consumers to sample. In 2007, Radiohead released an album named In Rainbows, for which fans could pay any amount they wanted, or download it for free.[178]

Artistic Freedom Voucher

The Artistic Freedom Voucher (AFV) introduced by Dean Baker is a way for consumers to support "creative and artistic work." In this system, each consumer would have a refundable tax credit of $100 to give to any artist of creative work. To restrict fraud, artists must register with the government. The voucher prohibits any artist who receives the benefits from copyrighting their material for a certain length of time. Consumers can easily obtain music for a certain amount of time, and the consumer decides which artists receive the $100. The money can be given either to one artist or to many; the distribution is up to the consumer.[179]

Historical note

A very early implementation of DRM was the Software Service System (SSS) devised by the Japanese engineer Ryoichi Mori in 1983[180] and subsequently refined under the name superdistribution. The SSS was based on encryption, with specialized hardware that controlled decryption and also enabled payments to be sent to the copyright holder. The underlying principle of the SSS, and subsequently of superdistribution, was that the distribution of encrypted digital products should be completely unrestricted and that users of those products would not just be permitted to redistribute them but would actually be encouraged to do so.
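The SSS principle, in which ciphertext circulates freely while access is gated by a purchased key, can be sketched in a few lines. This toy uses a hash-based XOR keystream purely for illustration (it is NOT a secure cipher, and stands in for Mori's specialized decryption hardware):

```python
import hashlib
from itertools import count

# Toy sketch of superdistribution: the encrypted blob may be copied and
# redistributed by anyone; only a paying customer holding the key can
# decode it. Names and values here are purely illustrative.

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (toy construction)."""
    out = b""
    for i in count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR stream ciphers are their own inverse

song = b"freely redistributable ciphertext"
key = b"sold-to-paying-customer"
blob = encrypt(key, song)          # this copy may be shared without limit
print(decrypt(key, blob) == song)  # True: the key gates access, not the copy
```

The design choice mirrors the SSS: restricting copying is abandoned entirely, and the revenue mechanism moves into the decryption step.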