
Installation (computer programs)

Installation (or setup) of a computer program (including device drivers and plugins) is the act of making the program ready for execution. Because the process varies for each program and each computer, programs (including operating systems) often come with an installer, a specialized program responsible for doing whatever is needed for their installation. Installation typically involves code being copied or generated from the installation files to new files on the local computer for easier access by the operating system. Because code is generally copied or generated in multiple locations, uninstallation usually involves more than just erasing the program folder. For example, registry entries and other data elsewhere in the system may need to be modified or deleted for a complete uninstallation.

Overview

Some computer programs can be executed by simply copying them into a folder stored on a computer and executing them. Other programs are supplied in a form unsuitable for immediate execution and therefore need an installation procedure. Once installed, the program can be executed again and again, without the need to reinstall before each execution.

Common operations performed during software installations include:
- Making sure that necessary system requirements are met
- Checking for existing versions of the software
- Creating or updating program files and folders
- Adding configuration data such as configuration files, Windows registry entries or environment variables
- Making the software accessible to the user, for instance by creating links, shortcuts or bookmarks
- Configuring components that run automatically, such as daemons or Windows services
- Performing product activation
- Updating the software version

Necessity

As mentioned earlier, some computer programs need no installation. This was once usual for many programs which run on DOS, Mac OS, Atari TOS and AmigaOS. As computing environments grew more complex and fixed hard drives replaced floppy disks, the need for dedicated installation procedures presented itself. Nowadays, a class of modern applications that do not need installation are known as portable applications, as they may be moved to different computers and run there. Similarly, there are live operating systems, which do not need installation and can be run directly from a bootable CD, DVD, or USB flash drive. Examples are AmigaOS 4.0, various Linux distributions, MorphOS or Mac OS versions 1.0 through 9.0. (See live CD and live USB.) Finally, web applications, which run inside a web browser, do not need installation.

Types

Attended installation

On Windows systems, this is the most common form of installation. An installation process usually needs a user who attends it to make choices, such as accepting or declining an end-user license agreement (EULA), specifying preferences such as the installation location, supplying passwords or assisting in product activation. In graphical environments, installers that offer a wizard-based interface are common. Attended installers may ask users to help mitigate errors. For instance, if the disk to which the program is being installed is full, the installer may ask the user to specify another target path or clear enough space on the disk.

Silent installation

An installation that does not display messages or windows during its progress. "Silent installation" is not the same as "unattended installation" (see below): all silent installations are unattended, but not all unattended installations are silent. The reason behind a silent installation may be convenience or subterfuge. Malware is almost always installed silently.[citation needed]

Unattended installation

An installation that is performed without user interaction during its progress, or with no user present at all. One of the reasons to use this approach is to automate the installation of a large number of systems. An unattended installation either does not require the user to supply anything or has received all necessary input prior to the start of installation. Such input may be in the form of command-line switches or an answer file, a file that contains all the necessary parameters. Windows XP and most Linux distributions are examples of operating systems that can be installed with an answer file. In an unattended installation, it is assumed that there is no user to help mitigate errors. For instance, if the installation medium is faulty, the installer should fail the installation, as there is no user to fix the fault or replace the medium. Unattended installers may record errors in a computer log for later review.

Headless installation

An installation performed without a computer monitor connected. In attended forms of headless installation, another machine connects to the target machine (for instance, via a local area network) and takes over the display output. Since a headless installation does not need a user at the location of the target computer, unattended headless installers may be used to install a program on multiple machines at the same time.

Scheduled or automated installation

An installation process that runs at a preset time or when a predefined condition transpires, as opposed to an installation process that starts explicitly on a user's command. For instance, a system administrator who wants to install a later version of a program that is in use can schedule that installation to occur when the program is not running. An operating system may automatically install a device driver for a device that the user connects. (See plug and play.) Malware may also be installed automatically. For example, the infamous Conficker worm was installed when the user plugged an infected device into their computer.

Clean installation

A clean installation is one that is done in the absence of any interfering elements such as old versions of the computer program being installed or leftovers from a previous installation. In particular, the clean installation of an operating system is an installation in which the target disk partition is erased before installation. Since the interfering elements are absent, a clean installation may succeed where an unclean installation may fail or may take significantly longer.

Network installation

Not to be confused with network booting.

Network installation, shortened to netinstall, is an installation of a program from a shared network resource that may be done by installing a minimal system before proceeding to download further packages over the network. This may simply be a copy of the original media, but software publishers which offer site licenses for institutional customers may provide a version intended for installation over a network.

Installer

An installation program or installer is a computer program that installs files, such as applications, drivers, or other software, onto a computer. Some installers are specifically made to install the files they contain; other installers are general-purpose and work by reading the contents of the software package to be installed. The differences between a package management system and an installer are:

Package management system: Usually part of an operating system.
Installer: Each product comes bundled with its own installer.

Package management system: Uses one installation database.
Installer: Performs its own installation, sometimes recording information about that installation in a registry.

Package management system: Can verify and manage all packages on the system.
Installer: Works only with its bundled product.

Package management system: One package management system vendor.
Installer: Multiple installer vendors.

Package management system: One package format.
Installer: Multiple installation formats.
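For illustration only, the following Python sketch mimics the common operations listed in the Overview above: copying program files, writing configuration data, and recording what was installed so that a later uninstallation can undo it. Every path, file name, and the manifest format here are invented for the example; real installers and package managers are far more involved.

```python
import json
import shutil
from pathlib import Path

def install(source_dir: Path, target_dir: Path, config: dict) -> None:
    """Copy program files, write configuration, and record a manifest for uninstallation."""
    target_dir.mkdir(parents=True, exist_ok=True)
    installed = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            dest = target_dir / src.relative_to(source_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)                      # copy files onto the local computer
            installed.append(str(dest))
    config_file = target_dir / "app.conf"
    config_file.write_text(json.dumps(config))           # configuration data added during installation
    installed.append(str(config_file))
    # Record everything that was created, so uninstallation can remove more than just the folder.
    (target_dir / "install-manifest.json").write_text(json.dumps(installed))

def uninstall(target_dir: Path) -> None:
    """Remove every file recorded in the manifest, then the directory itself."""
    manifest = target_dir / "install-manifest.json"
    for name in json.loads(manifest.read_text()):
        Path(name).unlink(missing_ok=True)
    shutil.rmtree(target_dir, ignore_errors=True)

# Example (hypothetical paths):
# install(Path("./build"), Path("/opt/exampleapp"), {"language": "en"})
```

The manifest here plays, in miniature, the role that an installer's registry entries or a package manager's installation database play in practice: it records what was created so that removal is not limited to erasing the program folder.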

Bootstrapper

During the installation of computer programs it is sometimes necessary to update the installer or package manager itself. To make this possible, a technique called bootstrapping is used. The common pattern is to use a small executable file which updates the installer and then starts the real installation after the update. This small executable is called a bootstrapper. Sometimes the bootstrapper also installs other prerequisites for the software during the bootstrapping process (a rough sketch appears at the end of this article).

Common types

Cross-platform installer builders that produce installers for Windows, Mac OS X and Linux include InstallAnywhere (Flexera Software), JExpress (DeNova),[1] InstallBuilder (BitRock Inc.) and Install4J (ej-technologies).[2]

Installers for Microsoft Windows include Windows Installer, a software installation component. Additional third-party commercial tools for creating installers for Windows include InstallShield (Flexera Software), Advanced Installer (Caphyon Ltd),[3] InstallAware (InstallAware Software),[4] Wise Installation Studio (Wise Solutions, Inc.), SetupBuilder (Lindersoft, Inc.),[5] Installer VISE (MindVision Software), MSI Studio (ScriptLogic Corporation), Actual Installer (Softeza Development),[6] Smart Install Maker (InstallBuilders Company),[7] MSI Factory and Setup Factory (Indigo Rose Software), Visual Installer (SamLogic), Centurion Setup (Gammadyne Corporation),[8] Paquet Builder (G.D.G. Software)[9] and Xeam Visual Installer (Xeam).[10] Free installer-authoring tools include NSIS, IzPack, Clickteam, InnoSetup, InstallSimple and WiX.

Mac OS X includes Installer, a native package-management application. Mac OS X also includes a separate software updating application, Software Update, but it only supports Apple and system software. Included in the Dock as of 10.6.6, the Mac App Store shares many attributes with the successful App Store for iOS devices, such as a similar app approval process, the use of an Apple ID for purchases, and automatic installation and updating. Although this is Apple's preferred delivery method for Mac OS X,[11] previously purchased licenses cannot be transferred to the Mac App Store for downloading or automatic updating. Commercial applications for Mac OS X may also use a third-party installer, such as the Mac version of Installer VISE (MindVision Software) or InstallerMaker (StuffIt).

See also: Application virtualization, List of installation software, Package management system, Portable application, Pre-installed software, Software distribution, Uninstaller
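The bootstrapper pattern described at the start of this article is small enough to sketch. The following Python fragment is purely illustrative: the version numbers are invented, and the update and installation steps are simulated stand-ins rather than calls to any real installer product.

```python
# A minimal, purely illustrative bootstrapper: the small executable first updates
# the installer engine, then hands control to it to run the real installation.

installed_engine_version = (1, 4)   # hypothetical: engine version shipped in the package
latest_engine_version = (2, 0)      # hypothetical: engine version available from the vendor

def update_installer_engine() -> None:
    """Stand-in for downloading and unpacking a newer installer engine."""
    global installed_engine_version
    print(f"updating installer engine {installed_engine_version} -> {latest_engine_version}")
    installed_engine_version = latest_engine_version

def run_real_installation() -> None:
    """Stand-in for starting the actual product installation."""
    print(f"running product installation with engine {installed_engine_version}")

def bootstrap() -> None:
    # The defining step: update the installer itself before the real installation starts.
    if installed_engine_version < latest_engine_version:
        update_installer_engine()
    run_real_installation()

if __name__ == "__main__":
    bootstrap()
```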

Computer configuration

In communications or computer systems, a configuration is an arrangement of functional units according to their nature, number, and chief characteristics. Often, configuration pertains to the choice of hardware, software, firmware, and documentation. The configuration affects system function and performance.

See also: Auto-configuration; Configuration file (in software, a data resource used for program initialization); Configuration management (in multiple disciplines, a practice for managing change); Configure script (computing)
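As a small illustration of a configuration file used for program initialization, Python's standard library configparser module reads INI-style files. The file name, sections, and keys below are invented for the example; they do not reflect any particular program's configuration format.

```python
import configparser
from pathlib import Path

# A hypothetical configuration file controlling how a program initializes itself.
Path("example.ini").write_text(
    "[server]\n"
    "host = 127.0.0.1\n"
    "port = 8080\n"
    "\n"
    "[logging]\n"
    "level = INFO\n"
)

config = configparser.ConfigParser()
config.read("example.ini")

host = config.get("server", "host")
port = config.getint("server", "port")                     # values are parsed from text at startup
log_level = config.get("logging", "level", fallback="WARNING")

print(f"starting on {host}:{port} with log level {log_level}")
```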

System monitoring

In systems engineering, a system monitor (SM) is a process within a distributed system for collecting and storing state data. This is a fundamental principle supporting application performance management.

Overview

The argument that system monitoring is just a nice-to-have, and not really a core requirement for operational readiness, dissipates quickly when a critical application goes down with no warning.[1]

The configuration for the system monitor takes two forms:
1. configuration data for the monitor application itself, and
2. configuration data for the system being monitored (see: system configuration).

The monitoring application needs information such as the log file path and the number of threads to run with. Once the application is running, it needs to know what to monitor and deduce how to monitor it. Because the configuration data for what to monitor is needed in other areas of the system, such as deployment, the configuration data should not be tailored specifically for use by the system monitor, but should be a generalized system configuration model.

The performance of the monitoring system has two aspects:
- Impact on the system domain or on domain functionality: any element of the monitoring system that prevents the main domain functionality from working is inappropriate. Ideally the monitoring is a tiny fraction of each application's footprint, which requires simplicity. The monitoring function must be highly tunable to allow for such issues as network performance, improvements to applications during the development life cycle, appropriate levels of detail, and so on. Impact on the system's primary function must be considered.
- Efficient monitoring, or the ability to monitor efficiently: monitoring must be efficient, able to handle all monitoring goals in a timely manner, within the desired period. This is most related to scalability. Various monitoring modes are discussed below.

There are many issues involved with designing and implementing a system monitor; a few to be dealt with are configuration, protocol, performance, and data access.

System monitor basics

Protocol

There are many tools for collecting system data from hosts and devices using SNMP (Simple Network Management Protocol).[2] Most computers and networked devices have some form of SNMP access. Interpreting the SNMP data from a host or device requires either a specialized tool (typically extra software[3] from the vendor) or a Management Information Base (MIB), a mapping of commands and data references to the various data elements the host or device provides. The advantage of SNMP for monitoring is its low bandwidth requirement and near-universal use in the industry. Unless an application itself provides a MIB and output via SNMP, SNMP is not suitable for collecting application data. Other protocols are suitable for monitoring applications, such as CORBA (language/OS-independent), JMX (a Java-specific management and monitoring protocol), or proprietary TCP/IP or UDP protocols (language/OS-independent for the most part).

Data access

Data access refers to the interface by which the monitor data can be used by other processes. For example, if the system monitor is a CORBA server, clients can connect and make calls on the monitor for the current state of an element, or historical states of an element over some time period. The system monitor may be writing data directly into a database, allowing other processes to access the database outside the context of the system monitor. This is dangerous, however, as the table design for the database will dictate the potential for data sharing. Ideally the system monitor is a wrapper for whatever persistence mechanism is used, providing a consistent and safe access interface for others to access the data.

Mode

The data collection mode of the system monitor is critical. The modes are monitor poll, agent push, and a hybrid scheme.

Monitor poll: In this mode, one or more processes in the monitoring system poll the system elements in some thread. During the loop, devices are polled via SNMP calls, hosts can be accessed via Telnet/SSH to execute scripts, dump files or run other OS-specific commands, and applications can be polled for state data or have their state output files dumped. The advantage of this mode is that there is little impact on the host or device being polled: the host's CPU is loaded only during the poll, and the rest of the time the monitoring function plays no part in CPU loading. The main disadvantage is that the monitoring process can only do so much in its time; if polling takes too long, the intended poll period gets elongated.

Agent push: In agent-push mode, the monitored host pushes data from itself to the system monitoring application. This can be done periodically, or asynchronously on request from the system monitor. The advantage of this mode is that the monitoring system's load can be reduced to simply accepting and storing data; it does not have to worry about timeouts for SSH calls, parsing OS-specific call results, and so on. The disadvantage is that the logic for the polling cycle and its options is not centralized at the system monitor but distributed to each remote node, so changes to the monitoring logic must be pushed out to each node. Also, in agent-based monitoring, a host cannot report that it is completely down or powered off, or that an intermediary system (such as a router) is preventing access to it.

Hybrid mode: The middle ground between monitor poll and agent push is a hybrid approach, where the system configuration determines where monitoring occurs, either in the system monitor or in an agent. When applications come up, they can determine for themselves which system elements they are responsible for polling; everything, however, must ultimately post its monitored data to the system monitor process. This is especially useful when setting up a monitoring infrastructure for the first time, when not all monitoring mechanisms have been implemented: the system monitor can do all the polling by whatever simple means are available, and as the agents become smarter, they can take on more of the load.
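The monitor-poll mode described above can be sketched as a simple loop that queries each configured element on a fixed period and stores the resulting state. Everything in the following Python sketch (element names, the check itself, in-memory storage) is a stand-in; a real monitor would use SNMP, SSH, or application protocols and a proper persistence mechanism, as discussed above.

```python
import time
from datetime import datetime, timezone

# Stand-in for a generalized system configuration model: what to monitor.
elements = ["db-host", "web-host", "router-1"]

# Stand-in for the monitor's persistence mechanism: collected state, keyed by element.
state_history = {name: [] for name in elements}

def poll(element: str) -> str:
    """Stand-in for an SNMP query, SSH command, or application status call."""
    return "up"   # a real check would return measured state

def poll_cycle(period_seconds: float = 30.0, cycles: int = 2) -> None:
    for _ in range(cycles):
        started = time.monotonic()
        for element in elements:
            status = poll(element)
            state_history[element].append((datetime.now(timezone.utc), status))
        # If polling takes longer than the period, the intended poll period gets elongated.
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, period_seconds - elapsed))

poll_cycle(period_seconds=1.0)
print(state_history["db-host"])
```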

Upgrade

For the facility that upgrades bitumen (extra heavy oil) into synthetic crude oil, see upgrader. For academic upgrading, see remedial education.

Upgrading is the process of replacing a product with a newer version of the same product. In computing and consumer electronics an upgrade is generally a replacement of hardware, software or firmware with a newer or better version, in order to bring the system up to date or to improve its characteristics.

Computing and consumer electronics

Examples of common hardware upgrades include installing additional memory (RAM), adding larger hard disks, replacing microprocessor cards or graphics cards, and installing new versions of software. Many other upgrades are possible as well. Common software upgrades include changing the version of an operating system, an office suite, an anti-virus program, or various other tools. Common firmware upgrades include updating the iPod control menus, the Xbox 360 dashboard, or the non-volatile flash memory that contains the embedded operating system for a consumer electronics device.

Users can often download software and firmware upgrades from the Internet. Often the download is a patch: it does not contain the new version of the software in its entirety, just the changes that need to be made. Software patches usually aim to improve functionality or solve problems with security. Rushed patches can cause more harm than good and are therefore sometimes regarded with scepticism for a short time after release (see "Risks").[1] Patches are generally free.

A software or firmware upgrade can be major or minor, and the release version code number increases accordingly. A major upgrade changes the main version number, whereas a minor update often appends ".01", ".02", ".03", and so on. For example, "version 10.03" might designate the third minor upgrade of version 10. In commercial software, minor upgrades (or updates) are generally free, but major versions must be purchased. See also: sidegrade.

Risks

Although developers usually produce upgrades in order to improve a product, there are risks involved, including the possibility that the upgrade will worsen the product.

Upgrades of hardware involve a risk that the new hardware will not be compatible with other pieces of hardware in a system. For example, an upgrade of RAM may not be compatible with the existing RAM in a computer. Other hardware components may not be compatible after either an upgrade or a downgrade, due to the non-availability of compatible drivers for the hardware with a specific operating system. Conversely, upgrading or downgrading software carries the same risk of incompatibility, causing previously functioning hardware to no longer work.

Upgrades of software introduce the risk that the new version (or patch) will contain a bug, causing the program to malfunction in some way or not to function at all. For example, in October 2005, a glitch in a software upgrade caused trading on the Tokyo Stock Exchange to shut down for most of the day.[2] Similar gaffes have occurred in systems ranging from important government systems[3] to freeware on the internet.

Upgrades can also worsen a product subjectively. A user may prefer an older version even if a newer version functions perfectly as designed. A software update can also be a downgrade from the point of view of the user, for example by removing features for marketing or copyright reasons; see OtherOS.

See also: Adaptation kit upgrade; Advanced Packaging Tool; Macintosh Processor Upgrade Card; Source upgrade; Windows Anytime Upgrade; Yellow dog Updater, Modified
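As a small worked example of the version numbering described above, a program can compare release version strings to decide whether an available upgrade is major or minor. The major.minor scheme used here is one common convention rather than a universal rule, and the function names are invented for the example.

```python
def parse_version(text: str) -> tuple[int, int]:
    """Parse a 'major.minor' release string such as '10.03' into integers."""
    major, _, minor = text.partition(".")
    return int(major), int(minor or 0)

def upgrade_kind(installed: str, available: str) -> str:
    old, new = parse_version(installed), parse_version(available)
    if new[0] > old[0]:
        return "major upgrade"   # e.g. 10.03 -> 11.0, often a paid upgrade in commercial software
    if new > old:
        return "minor update"    # e.g. 10.02 -> 10.03, generally free
    return "no upgrade"

print(upgrade_kind("10.02", "10.03"))   # minor update
print(upgrade_kind("10.03", "11.0"))    # major upgrade
```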

System administrator

For the privileged user account, see superuser.


Image: A professional system administrator works at a server rack in a datacenter.

Names: System administrator, systems administrator, sysadmin, IT professional
Occupation type: Profession
Activity sectors: Information technology
Competencies: System administration, network management, analytical skills, critical thinking
Education required: Varies from apprenticeship to master's degree

A system administrator, or sysadmin, is a person who is responsible for the upkeep, configuration, and reliable operation of computer systems, especially multi-user computers such as servers. The system administrator seeks to ensure that the uptime, performance, resources, and security of the computers he or she manages meet the needs of the users, without exceeding the budget.

To meet these needs, a system administrator may acquire, install, or upgrade computer components and software; automate routine tasks; write computer programs; troubleshoot; train and/or supervise staff; and provide technical support. The duties of a system administrator are wide-ranging and vary widely from one organization to another. Sysadmins are usually charged with installing, supporting, and maintaining servers or other computer systems, and planning for and responding to service outages and other problems. Other duties may include scripting or light programming and project management for systems-related projects.

Related fields

Many organizations staff other jobs related to system administration. In a larger company, these may all be separate positions within a computer support or Information Services (IS) department. In a smaller group they may be shared by a few sysadmins, or even a single person.
- A database administrator (DBA) maintains a database system, and is responsible for the integrity of the data and the efficiency and performance of the system.
- A network administrator maintains network infrastructure such as switches and routers, and diagnoses problems with these or with the behavior of network-attached computers.
- A security administrator is a specialist in computer and network security, including the administration of security devices such as firewalls, as well as consulting on general security measures.
- A web administrator maintains web server services (such as Apache or IIS) that allow for internal or external access to web sites. Tasks include managing multiple sites, administering security, and configuring necessary components and software. Responsibilities may also include software change management.
- A computer operator performs routine maintenance and upkeep, such as changing backup tapes or replacing failed drives in a RAID. Such tasks usually require physical presence in the room with the computer, and while they are less skilled than sysadmin tasks, they require a similar level of trust, since the operator has access to possibly sensitive data.
- A postmaster administers a mail server.
- A storage (SAN) administrator creates, provisions, adds or removes storage to and from computer systems. Storage can be attached locally to the system or come from a Storage Area Network (SAN) or Network Attached Storage (NAS). The storage administrator also creates file systems from newly added storage.

In some organizations, a person may begin as a member of technical support staff or a computer operator, then gain experience on the job to be promoted to a sysadmin position.

Training

Image: System Administration Conference training.

Unlike many other professions, there is no single path to becoming a system administrator. Many system administrators have a degree in a related field: computer science, information technology, computer engineering, information systems, or even a trade school program. On top of this, some companies now require an IT certification. Other schools have offshoots of their computer science program specifically for system administration.

Some schools have started offering undergraduate degrees in system administration. The first, at Rochester Institute of Technology,[1] started in 1992. Others, such as Rensselaer Polytechnic Institute, University of New Hampshire,[2] Marist College, and Drexel University, have more recently offered degrees in information technology. Symbiosis Institute of Computer Studies and Research (SICSR) in Pune, India offers a master's degree in Computer Applications with a specialization in system administration. The University of South Carolina[2] offers an Integrated Information Technology B.S. degree specializing in Microsoft product support.

As of 2011, only five U.S. universities, including Rochester Institute of Technology,[3] Tufts,[4] Michigan Tech, and Florida State University,[5] have graduate programs in system administration.[citation needed] In Norway, there is a special English-taught MSc program organized by Oslo University College[6] in cooperation with Oslo University, named "Masters programme in Network and System Administration." There is also a "BSc in Network and System Administration"[7] offered by Gjøvik University College. University of Amsterdam (UvA) offers a similar program in cooperation with Hogeschool van Amsterdam (HvA), named "Master System and Network Engineering". In Israel, the IDF's ntmm course is considered a prominent way to train system administrators.[8] Many other schools offer related graduate degrees in fields such as network systems and computer security.

One of the primary difficulties with teaching system administration as a formal university discipline is that the industry and technology change much faster than the typical textbook and coursework certification process. By the time a new textbook has spent years working through approvals and committees, the specific technology for which it is written may have changed significantly or become obsolete. In addition, because of the practical nature of system administration and the easy availability of open-source server software, many system administrators enter the field self-taught. Some learning institutions are reluctant to teach what is, in effect, hacking to undergraduate students.[citation needed]

Generally, a prospective system administrator will be required to have some experience with the computer system he or she is expected to manage. In some cases, candidates are expected to possess industry certifications such as the Microsoft MCSA, MCSE, MCITP, Red Hat RHCE, Novell CNA, CNE, Cisco CCNA, CompTIA's A+ or Network+, Sun Certified SCNA, or Linux Professional Institute certifications, among others. Sometimes, almost exclusively at smaller sites, the role of system administrator may be given to a skilled user in addition to, or in replacement of, his or her other duties. For instance, it is not unusual for a mathematics or computing teacher to serve as the system administrator of a secondary school.[citation needed]

Skills

The most important skill for a system administrator is problem solving, which can involve all sorts of constraints and stress. When a workstation or server goes down, the sysadmin is called to solve the problem; they must be able to quickly and correctly diagnose it, figuring out what is wrong and how best it can be fixed in a short time.
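As an illustration of the kind of quick, scripted diagnosis described above, a sysadmin might check the most common failure points first: free disk space and whether a service is still accepting connections. The thresholds, host name, and port below are invented for the example and would differ in any real environment.

```python
import shutil
import socket

def disk_usage_percent(path: str = "/") -> float:
    """Return how full a filesystem is, as a percentage."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def service_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the service can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    used = disk_usage_percent("/")
    print(f"root filesystem {used:.1f}% used" + (" (investigate)" if used > 90 else ""))
    if not service_reachable("app-server.example.com", 443):
        print("application server is not accepting connections")
```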

Image: Microsoft System Administrator badge.

Some of this section is from the Occupational Outlook Handbook, 2010-11 Edition, which is in the public domain as a work of the United States Government.

The subject matter of system administration includes computer systems and the ways people use them in an organization. This entails knowledge of operating systems and applications, as well as hardware and software troubleshooting, but also knowledge of the purposes for which people in the organization use the computers. Perhaps the most important skill for a system administrator is problem solving, frequently under various sorts of constraints and stress. The sysadmin is on call when a computer system goes down or malfunctions, and must be able to quickly and correctly diagnose what is wrong and how best to fix it. They may also need teamwork and communication skills, as well as the ability to install and configure hardware and software.

System administrators are not software engineers or developers. It is not usually within their duties to design or write new application software. However, sysadmins must understand the behavior of software in order to deploy it and to troubleshoot problems, and they generally know several programming languages used for scripting or automation of routine tasks. Particularly when dealing with Internet-facing or business-critical systems, a sysadmin must have a strong grasp of computer security. This includes not merely deploying software patches, but also preventing break-ins and other security problems with preventive measures. In some organizations, computer security administration is a separate role responsible for overall security and the upkeep of firewalls and intrusion detection systems, but all sysadmins are generally responsible for the security of the computer systems they manage.

Duties

A system administrator's responsibilities might include:
- Analyzing system logs and identifying potential issues with computer systems
- Introducing and integrating new technologies into existing data center environments
- Performing routine audits of systems and software
- Applying operating system updates, patches, and configuration changes
- Installing and configuring new hardware and software
- Adding, removing, or updating user account information and resetting passwords
- Answering technical queries and assisting users
- Responsibility for security
- Responsibility for documenting the configuration of the system
- Troubleshooting any reported problems
- System performance tuning
- Ensuring that the network infrastructure is up and running
- Configuring, adding, and deleting file systems; knowledge of volume management tools such as Veritas (now Symantec), Solaris ZFS, and LVM
- User administration (setting up and maintaining accounts)
- Maintaining systems
- Verifying that peripherals are working properly
- Quickly arranging repair of hardware in case of hardware failure
- Monitoring system performance
- Creating file systems
- Installing software
- Creating a backup and recovery policy
- Monitoring network communication
- Implementing the policies for the use of the computer system and network
- Setting up security policies for users; a sysadmin must have a strong grasp of computer security (e.g. firewalls and intrusion detection systems)
- Password and identity management
- Sometimes maintaining website SSL certificates, working with a certificate authority
- Incident management using ticketing system software

In larger organizations, some of the tasks above may be divided among different system administrators or members of different organizational groups. For example, one or more dedicated individuals may apply all system upgrades, a Quality Assurance (QA) team may perform testing and validation, and one or more technical writers may be responsible for all technical documentation written for a company. System administrators in larger organizations tend not to be systems architects, system engineers, or system designers. In smaller organizations, the system administrator might also act as technical support, database administrator, network administrator, storage (SAN) administrator or application analyst.

See also

Information technology portal

Computer Science portal

Application service management, Bastard Operator From Hell (BOFH), DevOps, Forum administrator, Information technology operations, Large Installation System Administration Conference, League of Professional System Administrators, LISA (organization), Professional certification (computer technology), Superuser, Sysop, System Administrator Appreciation Day

Software maintenance


Software maintenance in software engineering is the modification of a software product after delivery to correct faults, to improve performance or other attributes.[1]

A common perception of maintenance is that it merely involves fixing defects. However, one study indicated that the majority, over 80%, of the maintenance effort is used for non-corrective actions.[2] This perception is perpetuated by users submitting problem reports that in reality are functionality enhancements to the system. More recent studies put the bug-fixing proportion closer to 21%.[3]

Software maintenance and the evolution of systems was first addressed by Meir M. Lehman in 1969. Over a period of twenty years, his research led to the formulation of Lehman's Laws (Lehman 1997). Key findings of his research include that maintenance is really evolutionary development and that maintenance decisions are aided by understanding what happens to systems (and software) over time. Lehman demonstrated that systems continue to evolve over time, and as they evolve, they grow more complex unless some action, such as code refactoring, is taken to reduce the complexity.

The key software maintenance issues are both managerial and technical. Key management issues are alignment with customer priorities, staffing, which organization does maintenance, and estimating costs. Key technical issues are limited understanding, impact analysis, testing, and maintainability measurement.

Software maintenance is a very broad activity that includes error correction, enhancement of capabilities, deletion of obsolete capabilities, and optimization. Because change is inevitable, mechanisms must be developed for evaluating, controlling and making modifications. Any work done to change the software after it is in operation is considered to be maintenance work. The purpose is to preserve the value of the software over time. The value can be enhanced by expanding the customer base, meeting additional requirements, becoming easier to use, becoming more efficient and employing newer technology. Maintenance may span 20 years, whereas development may take 1-2 years.

Importance of software maintenance

In the late 1970s, a famous and widely cited survey study by Lientz and Swanson exposed the very high fraction of life-cycle costs that were being expended on maintenance. They categorized maintenance activities into four classes:
- Adaptive: modifying the system to cope with changes in the software environment (DBMS, OS)[4]
- Perfective: implementing new or changed user requirements which concern functional enhancements to the software
- Corrective: diagnosing and fixing errors, possibly ones found by users[4]
- Preventive: increasing software maintainability or reliability to prevent problems in the future[4]

The survey showed that around 75% of the maintenance effort was spent on the first two types, and that error correction consumed about 21%. Many subsequent studies suggest a similar magnitude of the problem. Studies also show that the contribution of end users is crucial during the gathering and analysis of new requirements, and that problems here are a main cause of difficulties during software evolution and maintenance. Software maintenance is important because it consumes a large part of the overall lifecycle costs, and because the inability to change software quickly and reliably means that business opportunities are lost.[5][6][7]

Impact of key adjustment factors on maintenance, sorted in order of maximum positive impact:

Maintenance specialists: +35%
High staff experience: +34%
Table-driven variables and data: +33%
Low complexity of base code: +32%
Y2K and special search engines: +30%
Code restructuring tools: +29%
Re-engineering tools: +27%
High level programming languages: +25%
Reverse engineering tools: +23%
Complexity analysis tools: +20%
Defect tracking tools: +20%
Y2K mass update specialists: +20%
Automated change control tools: +18%
Unpaid overtime: +18%
Quality measurements: +16%
Formal base code inspections: +15%
Regression test libraries: +15%
Excellent response time: +12%
Annual training of > 10 days: +12%
High management experience: +12%
HELP desk automation: +12%
No error prone modules: +10%
On-line defect reporting: +10%
Productivity measurements: +8%
Excellent ease of use: +7%
User satisfaction measurements: +5%
High team morale: +5%
Sum: +503%

Not only are error-prone modules troublesome; many other factors can degrade maintenance performance too. For example, very complex spaghetti code is quite difficult to maintain safely. A very common situation which often degrades performance is lack of suitable maintenance tools, such as defect tracking software, change management software, and test library software. The table below describes some of these factors and their range of impact on software maintenance.

Impact of key adjustment factors on maintenance, sorted in order of maximum negative impact:

Error-prone modules: -50%
Embedded variables and data: -45%
Staff inexperience: -40%
High code complexity: -30%
No Y2K or special search engines: -28%
Manual change control methods: -27%
Low level programming languages: -25%
No defect tracking tools: -24%
No Y2K mass update specialists: -22%
Poor ease of use: -18%
No quality measurements: -18%
No maintenance specialists: -18%
Poor response time: -16%
No code inspections: -15%
No regression test libraries: -15%
No help desk automation: -15%
No on-line defect reporting: -12%
Management inexperience: -15%
No code restructuring tools: -10%
No annual training: -10%
No reengineering tools: -10%
No reverse-engineering tools: -10%
No complexity analysis tools: -10%
No productivity measurements: -7%
Poor team morale: -6%
No user satisfaction measurements: -4%
No unpaid overtime: 0%
Sum: -500%

[8]

Software maintenance planning

Maintenance is an integral part of software, and it requires an accurate maintenance plan to be prepared during software development. The plan should specify how users will request modifications or report problems. The budget should include resource and cost estimates, and a decision should be made for the development of every new system feature and its quality objectives. Software maintenance, which can last for 5-6 years (or even decades) after the development process, calls for an effective plan which can address the scope of software maintenance, the tailoring of the post-delivery/deployment process, the designation of who will provide maintenance, and an estimate of the life-cycle costs. Selecting and enforcing appropriate standards is a challenging task from the early stages of software engineering, and one that has often not been given adequate importance by the stakeholders concerned.

Software maintenance processes

This section describes the six software maintenance processes:
1. The implementation process contains software preparation and transition activities, such as the conception and creation of the maintenance plan, the preparation for handling problems identified during development, and the follow-up on product configuration management.
2. The problem and modification analysis process is executed once the application has become the responsibility of the maintenance group. The maintenance programmer must analyze each request, confirm it (by reproducing the situation) and check its validity, investigate it and propose a solution, document the request and the solution proposal, and finally obtain all the required authorizations to apply the modifications.
3. The process considering the implementation of the modification itself.
4. The process of acceptance of the modification, by confirming the modified work with the individual who submitted the request in order to make sure the modification provided a solution.
5. The migration process (platform migration, for example) is exceptional and is not part of daily maintenance tasks. If the software must be ported to another platform without any change in functionality, this process will be used, and a maintenance project team is likely to be assigned to the task.
6. Finally, the last maintenance process, also an event which does not occur on a daily basis, is the retirement of a piece of software.

There are a number of processes, activities and practices that are unique to maintainers, for example:
- Transition: a controlled and coordinated sequence of activities during which a system is transferred progressively from the developer to the maintainer
- Service level agreements (SLAs) and specialized (domain-specific) maintenance contracts negotiated by maintainers
- Modification request and problem report help desk: a problem-handling process used by maintainers to prioritize, document and route the requests they receive

Categories of maintenance in ISO/IEC 14764

E.B. Swanson initially identified three categories of maintenance: corrective, adaptive, and perfective.[9] These have since been updated, and ISO/IEC 14764 presents:
- Corrective maintenance: reactive modification of a software product performed after delivery to correct discovered problems
- Adaptive maintenance: modification of a software product performed after delivery to keep a software product usable in a changed or changing environment
- Perfective maintenance: modification of a software product after delivery to improve performance or maintainability
- Preventive maintenance: modification of a software product after delivery to detect and correct latent faults in the software product before they become effective faults

There is also a notion of pre-delivery/pre-release maintenance, which covers everything done before release to lower the total cost of ownership of the software, such as compliance with coding standards that include software maintainability goals, the management of coupling and cohesion of the software, and the attainment of software supportability goals (SAE JA1004, JA1005 and JA1006, for example). Note also that some academic institutions are carrying out research to quantify the cost of ongoing software maintenance due to the lack of resources such as design documents and system/software comprehension training and resources (multiply costs by approximately 1.5-2.0 where there is no design data available).

See also: Application retirement, Journal of Software Maintenance and Evolution: Research and Practice, Long-term support, Search-based software engineering, Software archaeology, Software maintainer, Software development

Computer security

See also: Cyber security and countermeasure



Computer security (also known as cybersecurity or IT security) is information security as applied to computing devices such as computers and smartphones, as well as computer networks such as private and public networks, including the Internet as a whole. The field covers all the processes and mechanisms by which computer-based equipment, information and services are protected from unintended or unauthorized access, change or destruction. Computer security also includes protection from unplanned events and natural disasters. The worldwide security technology and services market is forecast to reach $67.2 billion in 2013, up 8.7 percent from $61.8 billion in 2012, according to Gartner, Inc.[1]

Vulnerabilities

Main article: Vulnerability (computing)

To understand the techniques for securing a computer system, it is important to first understand the various types of "attacks" that can be made against it. These threats can typically be classified into one of the categories below.

Backdoors

A backdoor in a computer system (or cryptosystem or algorithm) is a method of bypassing normal authentication, securing remote access to a computer, obtaining access to plaintext, and so on, while attempting to remain undetected. The backdoor may take the form of an installed program (e.g., Back Orifice), or could be a modification to an existing program or hardware device. A specific form of backdoor is a rootkit, which replaces system binaries and/or hooks into the function calls of an operating system to hide the presence of other programs, users, services and open ports. It may also fake information about disk and memory usage.

Denial-of-service attack

Main article: Denial-of-service attack

Unlike other exploits, denial-of-service attacks are not used to gain unauthorized access to or control of a system. They are instead designed to render it unusable. Attackers can deny service to individual victims, such as by deliberately entering a wrong password three consecutive times and thus causing the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. These types of attack are, in practice, very hard to prevent, because the behavior of whole networks needs to be analyzed, not only the behaviour of small pieces of code. Distributed denial-of-service (DDoS) attacks are common, where a large number of compromised hosts (commonly referred to as "zombie computers", used as part of a botnet with, for example, a worm, trojan horse, or backdoor exploit to control them) are used to flood a target system with network requests, thus attempting to render it unusable through resource exhaustion. Another technique to exhaust victim resources is the use of an attack amplifier, where the attacker takes advantage of poorly designed protocols on third-party machines, such as FTP or DNS, in order to instruct these hosts to launch the flood. There are also commonly found vulnerabilities in applications that cannot be used to take control over a computer, but merely make the target application malfunction or crash; this is known as a denial-of-service exploit.

Direct access attacks

Image: Common consumer devices that can be used to transfer data surreptitiously.

Someone who has gained access to a computer can install different types of devices or software to compromise security, including operating system modifications, software worms, key loggers, and covert listening devices. The attacker can also easily download large quantities of data onto backup media, for instance CD-R/DVD-R or tape, or onto portable devices such as keydrives, digital cameras or digital audio players. Another common technique is to boot an operating system contained on a CD-ROM or other bootable media and read the data from the hard drive(s) this way. The only way to defeat this is to encrypt the storage media and store the key separately from the system.

Eavesdropping

Eavesdropping is the act of surreptitiously listening to a private conversation, typically between hosts on a network. For instance, programs such as Carnivore and NarusInsight have been used by the FBI and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faint electromagnetic transmissions generated by the hardware, as in TEMPEST.

Exploits

Main article: Exploit (computer security)

An exploit (from the same word in the French language, meaning "achievement" or "accomplishment") is a piece of software, a chunk of data, or a sequence of commands that takes advantage of a software "bug" or "glitch" in order to cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic (usually computerized). This frequently includes such things as gaining control of a computer system or allowing privilege escalation or a denial-of-service attack. Many development methodologies rely on testing to ensure the quality of any code released; this process often fails to discover unusual potential exploits. The term "exploit" generally refers to small programs designed to take advantage of a software flaw that has been discovered, either remote or local. The code from the exploit program is frequently reused in trojan horses and computer viruses. In some cases, a vulnerability can lie in certain programs' processing of a specific file type, such as a non-executable media file. Some security web sites maintain lists of currently known unpatched vulnerabilities found in common programs (see "External links" below).

Indirect attacks

An indirect attack is an attack launched by a third-party computer. By using someone else's computer to launch an attack, it becomes far more difficult to track down the actual attacker. There have also been cases where attackers took advantage of public anonymizing systems, such as the Tor onion router system.

Social engineering and human error

Main article: Social engineering (security)

A computer system is no more secure than the human systems responsible for its operation. Malicious individuals have regularly penetrated well-designed, secure computer systems by taking advantage of the carelessness of trusted individuals, or by deliberately deceiving them, for example by sending messages claiming to be from the system administrator and asking for passwords. This deception is known as social engineering. In the world of information technology there are many types of cyber attack, such as code injection into a website or the use of malware (malicious software) such as viruses, trojans, or similar.
Attacks of these kinds are counteracted by managing or improving the damaged product. But there is one last type, social engineering, which does not directly affect the computers but instead their users, who are often called "the weakest link". This type of attack is capable of achieving results similar to other classes of cyber attack by going around the infrastructure established to resist malicious software, and since it is more difficult to predict or prevent, it is often a more efficient attack vector. The main goal is to use psychological means to convince the user to disclose personal information such as passwords or card numbers, for example by impersonating a service company or a bank.[2]

Vulnerable areas

Computer security is critical in almost any technology-driven industry which operates on computer systems. The issues of computer-based systems and addressing their countless vulnerabilities are an integral part of maintaining an operational industry.[3]

Cloud computing

Security in the cloud is challenging,[citation needed] due to the varied degrees of security features and management schemes within cloud entities. In this connection, one logical protocol base needs to evolve so that the entire gamut of components operates synchronously and securely.[original research?]

Aviation

The aviation industry is especially important when analyzing computer security because the risks involved include human life, expensive equipment, cargo, and transportation infrastructure. Security can be compromised by hardware and software malpractice, human error, and faulty operating environments. Threats that exploit computer vulnerabilities can stem from sabotage, espionage, industrial competition, terrorist attack, mechanical malfunction, and human error.[4] The consequences of a successful deliberate or inadvertent misuse of a computer system in the aviation industry range from loss of confidentiality to loss of system integrity, which may lead to more serious concerns such as exfiltration (data theft or loss) and network and air traffic control outages, which in turn can lead to airport closures, loss of aircraft, and loss of passenger life. Military systems that control munitions can pose an even greater risk.

An attack does not need to be very high tech or well funded: a power outage at an airport alone can cause repercussions worldwide.[5] One of the easiest to carry out, and arguably most difficult to trace, security attacks is transmitting unauthorized communications over specific radio frequencies. These transmissions may spoof air traffic controllers or simply disrupt communications altogether. Such incidents are very common, having altered flight courses of commercial aircraft and caused panic and confusion in the past.[citation needed] Controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore; beyond the radar's sight, controllers must rely on periodic radio communications with a third party. Lightning, power fluctuations, surges, brownouts, blown fuses, and various other power outages instantly disable all computer systems, since they depend on an electrical source.
Other accidental and intentional faults have caused significant disruption of safety-critical systems throughout the last few decades, and dependence on reliable communication and electrical power only adds to the risk to computer safety.[citation needed]

Financial cost of security breaches

Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal.[6]

Insecurities in operating systems have led to a massive black market[citation needed] for rogue software. An attacker can use a security hole to install software that tricks the user into buying a product. At that point, an affiliate program pays the affiliate responsible for generating that installation about $30. The software is sold for between $50 and $75 per license.[7]

Reasons

There are many similarities (yet many fundamental differences) between computer and physical security. Just like real-world security, the motivations for breaches of computer security vary between attackers, sometimes called hackers or crackers. Some are thrill-seekers or vandals (the kind often responsible for defacing web sites); similarly, some web site defacements are done to make political statements. However, some attackers are highly skilled and motivated by the goal of compromising computers for financial gain or espionage.[citation needed] An example of the latter is Markus Hess (more diligent than skilled), who spied for the KGB and was ultimately caught because of the efforts of Clifford Stoll, who wrote a memoir, The Cuckoo's Egg, about his experiences.

For those seeking to prevent security breaches, the first step is usually to attempt to identify what might motivate an attack on the system, how much the continued operation and information security of the system are worth, and who might be motivated to breach it. The precautions required for a home personal computer are very different from those for banks' Internet banking systems, and different again for a classified military network. Other computer security writers suggest that, since an attacker using a network need know nothing about you or what you have on your computer, attacker motivation is inherently impossible to determine beyond guessing. If true, blocking all possible attacks is the only plausible action to take.

Computer protection

There are numerous ways to protect computers, including using security-aware design techniques, building on secure operating systems, and installing hardware devices designed to protect the computer systems.

Security and systems design

Although there are many aspects to take into consideration when designing a computer system, security can prove to be very important. According to Symantec, in 2010, 94 percent of organizations polled expected to implement security improvements to their computer systems, with 42 percent claiming cyber security as their top risk.[8] At the same time, many organizations are improving security, and many types of cyber criminals are finding ways to continue their activities.
Almost every type of cyber attack is on the rise. In 2009, respondents to the CSI Computer Crime and Security Survey admitted that malware infections, denial-of-service attacks, password sniffing, and web site defacements were significantly higher than in the previous two years.[9]

Security measures

A state of computer "security" is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following:
- User account access controls and cryptography can protect system files and data, respectively.
- Firewalls are by far the most common prevention systems from a network security perspective, as they can (if properly configured) shield access to internal network services and block certain kinds of attacks through packet filtering. Firewalls can be either hardware- or software-based.
- Intrusion detection systems (IDSs) are designed to detect network attacks in progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems.
- "Response" is necessarily defined by the assessed security requirements of an individual system and may cover the range from a simple upgrade of protections to notification of legal authorities, counter-attacks, and the like. In some special cases, complete destruction of the compromised system is favored, as it may happen that not all of the compromised resources are detected.

Today, computer security comprises mainly "preventive" measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet, and can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking. Another implementation is a so-called physical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet.

However, relatively few organisations maintain computer systems with effective detection systems, and fewer still have organised response mechanisms in place. As a result, as Reuters points out: "Companies for the first time report they are losing more through electronic theft of data than physical stealing of assets."[10] The primary obstacle to effective eradication of cyber crime could be traced to excessive reliance on firewalls and other automated "detection" systems. Yet it is basic evidence gathering by using packet capture appliances that puts criminals behind bars.
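The packet-filtering idea described above can be sketched in a few lines. The following is only a toy illustration, not a real firewall: the rule format, field names, and sample rules are invented for the example, and a production filter would run inside the network stack rather than over Python objects.

```python
# Toy packet filter: an ordered, default-deny rule list, loosely illustrating
# how a firewall matches packets against rules. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

# Each rule is (predicate, action). The first matching rule wins.
RULES = [
    (lambda p: p.protocol == "tcp" and p.dst_port == 22
               and p.src_ip.startswith("10.0."), "accept"),      # SSH from LAN only
    (lambda p: p.protocol == "tcp" and p.dst_port in (80, 443), "accept"),  # web traffic
    (lambda p: p.dst_port == 23, "drop"),                        # block telnet
]

def filter_packet(packet: Packet) -> str:
    """Return 'accept' or 'drop' for a packet; unmatched packets are dropped."""
    for predicate, action in RULES:
        if predicate(packet):
            return action
    return "drop"  # default deny

if __name__ == "__main__":
    print(filter_packet(Packet("10.0.0.5", 22, "tcp")))      # accept (LAN SSH)
    print(filter_packet(Packet("198.51.100.7", 22, "tcp")))  # drop (SSH from outside)
```

The default-deny rule at the end reflects the "shield access to internal services" behaviour mentioned above: anything not explicitly permitted is blocked.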
Difficulty with response

Responding forcefully to attempted security breaches (in the manner that one would for attempted physical security breaches) is often very difficult for a variety of reasons:
- Identifying attackers is difficult, as they are often in a different jurisdiction to the systems they attempt to breach and operate through proxies, temporary anonymous dial-up accounts, wireless connections, and other anonymising procedures which make backtracing difficult, and they are often located in yet another jurisdiction. If they successfully breach security, they are often able to delete logs to cover their tracks.
- The sheer number of attempted attacks is so large that organisations cannot spend time pursuing each attacker (a typical home user with a permanent (e.g., cable modem) connection will be attacked at least several times per day, so more attractive targets can be presumed to see many more). Note, however, that most of the sheer bulk of these attacks are made by automated vulnerability scanners and computer worms.
- Law enforcement officers are often unfamiliar with information technology, and so lack the skills and interest in pursuing attackers. There are also budgetary constraints. It has been argued that the high cost of technology, such as DNA testing and improved forensics, means less money for other kinds of law enforcement, so the overall rate of criminals not being dealt with goes up as the cost of the technology increases. In addition, identifying attackers across a network may require logs from various points in the network and in many countries the release of these records to law enforcement (except when voluntarily surrendered by a network administrator or a system administrator) requires a search warrant and, depending on the circumstances, the legal proceedings required can be drawn out to the point where the records are either regularly destroyed or the information is no longer relevant.

Reducing vulnerabilities

Computer code is regarded by some as a form of mathematics. It is theoretically possible to prove the correctness of certain classes of computer programs, though the feasibility of actually achieving this in large-scale practical systems is regarded as small by some with practical experience in the industry; see Bruce Schneier et al.

It is also possible to protect messages in transit (i.e., communications) by means of cryptography. One method of encryption, the one-time pad, is unbreakable when correctly used. This method was used by the Soviet Union during the Cold War, though flaws in their implementation allowed some cryptanalysis; see the Venona project. The method uses a matching pair of key-codes, securely distributed, which are used once and only once to encode and decode a single message. For transmitted computer encryption this method is difficult to use properly (securely), and highly inconvenient as well. Other methods of encryption, while breakable in theory, are often virtually impossible to break directly by any means publicly known today. Breaking them requires some non-cryptographic input, such as a stolen key, stolen plaintext (at either end of the transmission), or some other extra cryptanalytic information.

Social engineering and direct (physical) computer access attacks can only be prevented by non-computer means, which can be difficult to enforce relative to the sensitivity of the information. Even in a highly disciplined environment, such as in military organizations, social engineering attacks can still be difficult to foresee and prevent.

In practice, only a small fraction of computer program code is mathematically proven, or even goes through comprehensive information technology audits or inexpensive but extremely valuable computer security audits, so it is usually possible for a determined hacker to read, copy, alter or destroy data in well-secured computers, albeit at the cost of great time and resources. Few attackers would audit applications for vulnerabilities just to attack a single specific system.
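The one-time pad described above is simple enough to show directly. This is a minimal sketch using only Python's standard library; it assumes, as the scheme requires, that the key is truly random, exactly as long as the message, and never reused.

```python
# Minimal one-time pad: XOR each message byte with a key byte of the same
# length. Because XOR is its own inverse, decryption is the same operation.
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    if len(key) != len(message):
        raise ValueError("key must be exactly as long as the message")
    return bytes(m ^ k for m, k in zip(message, key))

otp_decrypt = otp_encrypt  # applying the same key again recovers the plaintext

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # used once and only once

ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

Reusing the key, or generating it with a predictable random source, is exactly the kind of implementation flaw that made the Venona cryptanalysis possible.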
It is possible to reduce an attacker's chances by keeping systems up to date, using a security scanner and/or hiring competent people responsible for security. The effects of data loss or damage can be reduced by careful backing up and insurance.

Security by design

Main article: Secure by design

Security by design, or alternatively secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature. Some of the techniques in this approach include:
- The principle of least privilege, where each part of the system has only the privileges that are needed for its function. That way, even if an attacker gains access to that part, they have only limited access to the whole system.
- Automated theorem proving to prove the correctness of crucial software subsystems.
- Code reviews and unit testing, approaches to make modules more secure where formal correctness proofs are not possible.
- Defense in depth, where the design is such that more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds.
- Default secure settings, and design to "fail secure" rather than "fail insecure" (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
- Audit trails tracking system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks.
- Full disclosure of all vulnerabilities, to ensure that the "window of vulnerability" is kept as short as possible when bugs are discovered.

Security architecture

The Open Security Architecture organization defines IT security architecture as "the design artifacts that describe how the security controls (security countermeasures) are positioned, and how they relate to the overall information technology architecture. These controls serve the purpose to maintain the system's quality attributes: confidentiality, integrity, availability, accountability and assurance services".[11]

Hardware protection mechanisms

See also: Computer security compromised by hardware failure

While hardware may be a source of insecurity, such as with microchip vulnerabilities maliciously introduced during the manufacturing process,[12][13] hardware-based or hardware-assisted computer security also offers an alternative to software-only computer security. Using devices and methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling of USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below.

USB dongles are typically used in software licensing schemes to unlock software capabilities,[14] but they can also be seen as a way to prevent unauthorized access to a computer or other device's software. The dongle, or key, essentially creates a secure encrypted tunnel between the software application and the key. The principle is that an encryption scheme on the dongle, such as the Advanced Encryption Standard (AES), provides a stronger measure of security, since it is harder to hack and replicate the dongle than to simply copy the native software to another machine and use it.
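Real dongle protocols are vendor-specific, but the underlying primitive, symmetric authenticated encryption under a secret key, can be illustrated. This sketch assumes the third-party Python "cryptography" package (not mentioned in the article) and treats the key as if it lived only on the dongle; it is an illustration of AES usage, not of any particular licensing product.

```python
# AES-GCM illustration of the dongle principle: data protected under a key
# that (in the analogy) never leaves the hardware token.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # imagine this stays on the dongle
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per message
secret_config = b"feature_flags=pro_edition"

ciphertext = aesgcm.encrypt(nonce, secret_config, None)
# Without the key, the ciphertext is useless; with it, the data is recovered
# and its integrity is verified at the same time.
assert aesgcm.decrypt(nonce, ciphertext, None) == secret_config
```

Copying the protected software to another machine without the key yields only ciphertext, which is why the scheme is harder to defeat than simple copy protection.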
Another security application for dongles is to use them for accessing web-based content such as cloud software or Virtual Private Networks (VPNs).[15] In addition, a USB dongle can be configured to lock or unlock a computer.[16]

Trusted platform modules (TPMs) secure devices by integrating cryptographic capabilities onto access devices, through the use of microprocessors, or so-called computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to detect and authenticate hardware devices, preventing unauthorized network and data access.[17]

Computer case intrusion detection refers to a push-button switch which is triggered when a computer case is opened. The firmware or BIOS is programmed to show an alert to the operator when the computer is booted up the next time.

Drive locks are essentially software tools to encrypt hard drives, making them inaccessible to thieves.[18] Tools exist specifically for encrypting external drives as well.[19]

Disabling USB ports is a security option for preventing unauthorized and malicious access to an otherwise secure computer. Infected USB dongles connected to a network from a computer inside the firewall are considered by Network World as the most common hardware threat facing computer networks.[20]

Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of cell phones. Built-in capabilities such as Bluetooth, the newer Bluetooth Low Energy (LE), Near Field Communication (NFC) on non-iOS devices, and biometric validation such as thumb print readers, as well as QR code reader software designed for mobile devices, offer new, secure ways for mobile phones to connect to access control systems. These control systems provide computer security and can also be used for controlling access to secure buildings.[21]

Secure operating systems

Main article: Security-focused operating system

One use of the term "computer security" refers to technology that is used to implement secure operating systems. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Though still valid, the technology is in limited use today, primarily because it imposes some changes to system management and also because it is not widely understood. Such ultra-strong secure operating systems are based on operating system kernel technology that can guarantee that certain security policies are absolutely enforced in an operating environment. An example of such a computer security policy is the Bell-LaPadula model. The strategy is based on a coupling of special microprocessor hardware features, often involving the memory management unit, to a special, correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure the absolute impossibility of penetration by hostile elements. This capability is enabled because the configuration not only imposes a security policy, but in theory completely protects itself from corruption. Ordinary operating systems, on the other hand, lack the features that assure this maximal level of security. The design methodology to produce such secure systems is precise, deterministic and logical.

Systems designed with such methodology represent the state of the art[clarification needed] of computer security, although products using such security are not widely known.
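The Bell-LaPadula policy mentioned above can be stated compactly: a subject may not read objects above its clearance ("no read up", the simple security property) and may not write to objects below it ("no write down", the *-property). The sketch below encodes only those two rules; the level names and helper function are invented for illustration and are not drawn from any particular implementation.

```python
# Minimal Bell-LaPadula access check: "no read up, no write down".
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def allowed(subject_level: str, object_level: str, operation: str) -> bool:
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if operation == "read":
        return s >= o   # simple security property: no read up
    if operation == "write":
        return s <= o   # *-property: no write down
    return False

assert allowed("secret", "confidential", "read")       # reading down is permitted
assert not allowed("confidential", "secret", "read")   # no read up
assert not allowed("top_secret", "secret", "write")    # no write down
```

In a real secure operating system this check is enforced by the kernel and the hardware's memory management unit on every access, rather than being a function an application chooses to call.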
In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information, military secrets, and the data of international financial institutions. These are very powerful security tools, and very few secure operating systems have been certified at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to "unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies the security strength of products in terms of two components, security functionality and assurance level (such as EAL levels), and these are specified in a Protection Profile for requirements and a Security Target for product descriptions. None of these ultra-high-assurance secure general-purpose operating systems have been produced for decades or certified under Common Criteria.

In US parlance, the term High Assurance usually suggests the system has the right security functions that are implemented robustly enough to protect DoD and DoE classified information. Medium assurance suggests it can protect less valuable information, such as income tax information. Secure operating systems designed to meet medium robustness levels of security functionality and assurance have seen wider use within both government and commercial markets. Medium-robust systems may provide the same security functions as high-assurance secure operating systems but do so at a lower assurance level (such as Common Criteria levels EAL4 or EAL5). Lower levels mean we can be less certain that the security functions are implemented flawlessly, and therefore they are less dependable. These systems are found in use on web servers, guards, database servers, and management hosts and are used not only to protect the data stored on these systems but also to provide a high level of protection for network connections and routing services.

Secure coding

Main article: Secure coding

If the operating environment is not based on a secure operating system capable of maintaining a domain for its own execution, capable of protecting application code from malicious subversion, and capable of protecting the system from subverted code, then high degrees of security are understandably not possible. While such secure operating systems are possible and have been implemented, most commercial systems fall in a 'low security' category because they rely on features not supported by secure operating systems (like portability, and others). In low-security operating environments, applications must be relied on to participate in their own protection. There are 'best effort' secure coding practices that can be followed to make an application more resistant to malicious subversion.

In commercial environments, the majority of software subversion vulnerabilities result from a few known kinds of coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow, and code/command injection. These defects can be used to cause the target system to execute putative data; when that "data" turns out to contain executable instructions, the attacker gains control of the processor.
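Command injection, one of the defect classes listed above, is easy to demonstrate. The sketch below contrasts an unsafe pattern with a safer one; the ping example and the hostname parameter are invented for illustration, and the point is only that untrusted input must never be handed to a shell for interpretation.

```python
# Command injection: untrusted input reaches a shell, so "data" becomes code.
import subprocess

def ping_unsafe(host: str) -> int:
    # BAD: the whole string is interpreted by the shell, so an input such as
    # "example.com; rm -rf ~" runs a second, attacker-chosen command.
    return subprocess.call("ping -c 1 " + host, shell=True)

def ping_safe(host: str) -> int:
    # Better: pass arguments as a list with no shell involved; the untrusted
    # value remains a single argument and is never reinterpreted as a command.
    return subprocess.call(["ping", "-c", "1", host])
```

The same principle, keeping data and code strictly separated, is what parameterized SQL queries and output encoding provide against other injection defects.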
Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++").[22] Other languages, such as Java, are more resistant to some of these defects, but are still prone to code/command injection and other software defects which facilitate subversion.

Another bad coding practice occurs when an object is deleted during normal operation yet the program neglects to update any of the associated memory pointers, potentially causing system instability when that location is referenced again. This is called a dangling pointer, and the first known exploit for this particular problem was presented in July 2007. Before this publication the problem was known but considered to be academic and not practically exploitable.[23]

Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically achievable, insofar as the code (ideally, read-only) and data (generally read/write) tend to have some form of defect.

Capabilities and access control lists

Main articles: Access control list and Capability (computers)

Within computer systems, two security models capable of enforcing privilege separation are access control lists (ACLs) and capability-based security. Using ACLs to confine programs has been proven to be insecure in many situations, such as if the host computer can be tricked into indirectly allowing restricted file access, an issue known as the confused deputy problem. It has also been shown that the promise of ACLs of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.[citation needed]

Capabilities have mostly been restricted to research operating systems, while commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language.

The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the most secure systems are operating systems where security is not an add-on.

Hacking back

There has been a significant debate regarding the legality of hacking back against digital attackers (who attempt to or successfully breach an individual's, entity's, or nation's computer). The arguments for such counter-attacks are based on notions of equity, active defense, vigilantism, and the Computer Fraud and Abuse Act (CFAA). The arguments against the practice are primarily based on the legal definitions of "intrusion" and "unauthorized access", as defined by the CFAA. As of October 2012, the debate is ongoing.[24]

Notable computer breaches

Several notable computer breaches are discussed below.

Rome Laboratory

In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities.
The intruders were able to obtain classified files, such as air tasking order systems data, and were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[25]

Robert Morris and the first computer worm

One event shows what mainstream generative technology leads to in terms of online security breaches, and is also the story of the Internet's first worm. In 1988, 60,000 computers were connected to the Internet, but not all of them were PCs. Most were mainframes, minicomputers and professional workstations. On November 2, 1988, many of these computers began acting strangely. They started to slow down, because they were running malicious code that demanded processor time and spread itself to other computers. The purpose of such software was to transmit a copy of itself to other machines, run in parallel with the existing software, and repeat the process all over again. It exploited a flaw in a common e-mail transmission program running on a computer by rewriting it to facilitate its entrance, or it guessed users' passwords, because at that time passwords were simple (e.g. username 'harry' with a password '...harry') or appeared on a list of 432 common passwords tested at each computer.[26] The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, Jr. When questioned about the motive for his actions, Morris said 'he wanted to count how many machines were connected to the Internet'.[26] His explanation was consistent with his code, which nevertheless turned out to be buggy.

Legal issues and global regulation

Some of the main challenges and complaints about the antivirus industry are the lack of global web regulations and of a global base of common rules by which to judge, and eventually punish, cyber crimes and cyber criminals. In fact, even if an antivirus firm locates the cyber criminal behind the creation of a particular virus or piece of malware or another form of cyber attack, often the local authorities cannot take action due to a lack of laws under which to prosecute.[27][28] This is mainly caused by the fact that many countries have their own regulations regarding cyber crimes. "[Computer viruses] switch from one country to another, from one jurisdiction to another, moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[27] (Mikko Hyppönen)

Businesses are eager to expand to less developed countries due to the low cost of labor, says White et al. (2012). However, these countries have the fewest Internet safety measures, and their Internet service providers are not as focused on implementing those safeguards (2010). Instead, their main focus is on expanding their business, which exposes them to an increase in criminal activity.[29]

In response to the growing problem of cyber crime, the European Commission established the European Cybercrime Centre (EC3).[30] The EC3 effectively opened on 1 January 2013 and will be the focal point in the EU's fight against cyber crime, contributing to faster reaction to online crimes.
It will support member states and the EU's institutions in building operational and analytical capacity for investigations, as well as cooperation with international partners.[31]

Computer security policies

Country-specific computer security policies are discussed below.

United States

See also: Cyber security standards

Cybersecurity Act of 2010

On July 1, 2009, Senator Jay Rockefeller (D-WV) introduced the "Cybersecurity Act of 2009 - S. 773"[32] in the Senate; the bill, co-written with Senators Evan Bayh (D-IN), Barbara Mikulski (D-MD), Bill Nelson (D-FL), and Olympia Snowe (R-ME), was referred to the Committee on Commerce, Science, and Transportation, which approved a revised version of the same bill (the "Cybersecurity Act of 2010") on March 24, 2010.[33] The bill seeks to increase collaboration between the public and the private sector on cybersecurity issues, especially those private entities that own infrastructures that are critical to national security interests (the bill quotes John Brennan, the Assistant to the President for Homeland Security and Counterterrorism: "our nation's security and economic prosperity depend on the security, stability, and integrity of communications and information infrastructure that are largely privately owned and globally operated" and talks about the country's response to a "cyber-Katrina"),[34] increase public awareness of cybersecurity issues, and foster and fund cybersecurity research. Some of the most controversial parts of the bill include Paragraph 315, which grants the President the right to "order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network."[34] The Electronic Frontier Foundation, an international non-profit digital rights advocacy and legal organization based in the United States, characterized the bill as promoting a "potentially dangerous approach that favors the dramatic over the sober response".[35]

International Cybercrime Reporting and Cooperation Act

On March 25, 2010, Representative Yvette Clarke (D-NY) introduced the "International Cybercrime Reporting and Cooperation Act - H.R.4962"[36] in the House of Representatives; the bill, co-sponsored by seven other representatives (only one of whom was a Republican), was referred to three House committees.[37] The bill seeks to make sure that the administration keeps Congress informed on information infrastructure, cybercrime, and end-user protection worldwide. It also "directs the President to give priority for assistance to improve legal, judicial, and enforcement capabilities with respect to cybercrime to countries with low information and communications technology levels of development or utilization in their critical infrastructure, telecommunications systems, and financial industries"[37] as well as to develop an action plan and an annual compliance assessment for countries of "cyber concern".[37]

Protecting Cyberspace as a National Asset Act of 2010

On June 19, 2010, United States Senator Joe Lieberman (I-CT) introduced a bill called the "Protecting Cyberspace as a National Asset Act of 2010 - S.3480",[38] which he co-wrote with Senator Susan Collins (R-ME) and Senator Thomas Carper (D-DE). If signed into law, this controversial bill, which the American media dubbed the "Kill switch bill", would grant the President emergency powers over the Internet.
However, all three co-authors of the bill issued a statement claiming that instead, the bill "[narrowed] existing broad Presidential authority to take over telecommunications networks".[39]

White House proposes cybersecurity legislation

On May 12, 2011, the White House sent Congress a proposed cybersecurity law designed to force companies to do more to fend off cyberattacks, a threat that has been reinforced by recent reports about vulnerabilities in systems used in power and water utilities.[40] Executive Order 13636, Improving Critical Infrastructure Cybersecurity, was signed on February 12, 2013.

Germany

Berlin starts National Cyber Defense Initiative

On June 16, 2011, the German Minister for Home Affairs officially opened the new German NCAZ (National Center for Cyber Defense, Nationales Cyber-Abwehrzentrum), which is located in Bonn. The NCAZ closely cooperates with the BSI (Federal Office for Information Security, Bundesamt für Sicherheit in der Informationstechnik), the BKA (Federal Police Organisation, Bundeskriminalamt), the BND (Federal Intelligence Service, Bundesnachrichtendienst), the MAD (Military Intelligence Service, Amt für den Militärischen Abschirmdienst) and other national organisations in Germany taking care of national security aspects. According to the Minister, the primary task of the new organisation, founded on February 23, 2011, is to detect and prevent attacks against the national infrastructure; he mentioned incidents like Stuxnet.

South Korea

Following cyberattacks in the first half of 2013, in which government, news media, television station, and bank websites were compromised, the national government committed to training 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as for incidents that occurred in 2009, 2011, and 2012, but Pyongyang denies the accusations.[41]

Seoul, March 7, 2011 - South Korean police have contacted 35 countries to ask for cooperation in tracing the origin of a massive cyber attack on the Web sites of key government and financial institutions, amid a nationwide cyber security alert issued against further threats. The Web sites of about 30 key South Korean government agencies and financial institutions came under a so-called distributed denial-of-service (DDoS) attack for two days from Friday, with about 50,000 "zombie" computers infected with a virus seeking simultaneous access to sele