The federation of networks that became the Internet consisted of a relatively small community of users by the 1980s, primarily in the research and academic communities. Because it was rather difficult to get access to these systems and the user communities were rather closely knit, security was not much of a concern.
The main objective of connecting these various networks together was to share information, not keep it locked away. Technologies such as the UNIX operating system and the TCP/IP networking protocols that were designed for this environment reflected this lack of security concern; security was simply viewed as unnecessary. By the early 1990s, however, commercial interest in the Internet grew.
These commercial interests had very different perspectives on security, often in opposition to those of academia. Commercial information had value, and access to it had to be limited to specifically authorized people. UNIX, TCP/IP, and connections to the Internet became avenues of attack, and they offered little capability to implement and enforce confidentiality, integrity, and availability.
As the Internet grew in commercial importance, with numerous companies connecting to it and even building entire business models around it, the need for increased security became acute. Connected organizations now faced threats they never had to consider before. When the corporate computing environment was a closed and limited-access system, threats mostly came from inside the organizations.
These internal threats came from disgruntled employees with privileged access who could cause a lot of damage. Attacks from the outside were not much of an issue, since there were typically only a few, if any, private connections to trusted entities. Potential attackers were few in number, since the combination of necessary skills and malicious intent was not widespread. With the growth of the Internet, external threats grew as well.
There are now millions of hosts on the Internet as potential attack targets, which entice the now large numbers of attackers. This group has grown in size and skill over the years as its members share information on how to break into systems for both fun and profit. Geography no longer serves as an obstacle, either. You can be attacked from another continent thousands of miles away just as easily as from your own town.
Threats can be classified as structured or unstructured. Unstructured threats come from people with little skill or perseverance. These usually come from people called script kiddies—attackers who have little to no programming skill and very little system knowledge. Script kiddies tend to conduct attacks just for bragging rights among their groups, which are often linked only by an Internet Relay Chat (IRC) channel.
They obtain attack tools that have been built by others with more skill, and use them, often indiscriminately, to attempt to exploit vulnerabilities in their targets. If their attack fails, they will likely go elsewhere and keep trying. Additional risk comes from the fact that they often use these tools with little to no knowledge of the target environment, so attacks can wind up causing unintended results.
Unstructured threats can cause significant damage or disruption, despite the attacker’s lack of sophistication. These attacks are usually detectable with current security tools. Structured attacks are more worrisome because they are conducted by hackers with significant skill. If the existing tools do not work for them, they are likely to modify them or write their own.
They are able to discover new vulnerabilities in systems by executing complex actions the system designers did not protect against. Structured attackers often use so-called zero-day exploits, which target vulnerabilities for which the vendor has not yet issued a patch, or of which the vendor is not yet aware. Structured attacks often have stronger motivations behind them than simple mischief.
These can include theft of source code, theft of credit card numbers for resale or fraud, retribution, or destruction or disruption of a competitor. A structured attack might not be blocked by traditional methods such as firewall rules or detected by an IDS. It could even use noncomputer methods such as social engineering.
Another key task in securing your systems is closing vulnerabilities by turning off unneeded services and bringing them up to date on patches. Services that have no defined business need present an additional possible avenue of attack and are just another component that needs patch attention.
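Finding out which services a host actually exposes is the first step in turning off the unneeded ones. The following is a minimal sketch, in Python, of probing a host for listening TCP ports; the host address and port range are illustrative assumptions, and a real audit would also inspect the service inventory on the host itself rather than rely on a network probe alone.

```python
# Hypothetical sketch: probe a host for listening TCP ports so that
# unneeded services can be identified and disabled. Host and port range
# are illustrative assumptions, not a complete audit.
import socket


def find_listening_ports(host="127.0.0.1", ports=range(1, 1025)):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)  # keep the probe quick; tune for real use
            # connect_ex returns 0 on a successful connection
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports


if __name__ == "__main__":
    for port in find_listening_ports():
        print(f"Port {port} is listening -- is this service needed?")
```

Each port this reports should map to a service with a defined business need; anything that does not should be disabled or removed.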
Keeping patches current is one of the most important activities you can perform to protect yourself, yet one that many organizations neglect. The Code Red and Nimda worms of 2001 were successful primarily because so many systems had not been patched for the vulnerabilities they exploited, including multiple Microsoft Internet Information Server (IIS) and Microsoft Outlook vulnerabilities.
Patching, especially when you have hundreds or even thousands of systems, can be a monumental task. However, by defining and documenting processes, using tools to assist in configuration management, subscribing to multiple vulnerability alert mailing lists, and prioritizing patches according to criticality, you can get a better handle on the job.
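Prioritizing patches according to criticality can be as simple as an ordered worklist. The sketch below, a hypothetical illustration only, ranks a patch backlog by severity and by whether the affected system is Internet-facing; the record fields and patch identifiers are invented for the example, and a real program would draw severity from vendor advisories.

```python
# Hypothetical sketch: order a patch backlog so the most critical,
# most exposed systems are patched first. Field names, identifiers,
# and the scoring rule are illustrative assumptions.

def prioritize_patches(patches):
    """Sort patch records: highest severity first, Internet-facing first."""
    severity_rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    return sorted(
        patches,
        key=lambda p: (severity_rank[p["severity"]],
                       not p["internet_facing"]),  # exposed hosts first
    )


backlog = [
    {"id": "WEB-01", "severity": "critical", "internet_facing": True},
    {"id": "APP-07", "severity": "low",      "internet_facing": False},
    {"id": "OS-42",  "severity": "critical", "internet_facing": False},
]
for p in prioritize_patches(backlog):
    print(p["id"], p["severity"])
```

Even this crude two-factor ranking keeps attention on the patches that matter most when hundreds of systems are in scope.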
One useful document to assist in this process has been published by the U.S. National Institute of Standards and Technology (NIST), which can be found at csrc.nist.gov/publications/nistpubs/800-40/sp800-40.pdf (800-40 is the document number).
Also important is having a complete understanding of your network topology and some of the key information flows within it, and in and out of it. This understanding helps you define different zones of trust and highlights where re-architecting the network in places might improve security—for example, by deploying additional firewalls internally or on your network perimeter.
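Zones of trust can be captured as an explicit table of permitted flows, which then drives firewall placement and rules. The sketch below is a minimal illustration; the zone names and the rule table are assumptions for the example, and a real design would derive them from the documented topology.

```python
# Hypothetical sketch: model zones of trust as a table of permitted
# flows between zones. Zone names and rules are illustrative assumptions.

ALLOWED_FLOWS = {
    ("internet", "dmz"),       # inbound web traffic terminates in the DMZ
    ("dmz", "internal"),       # DMZ servers may query internal databases
    ("internal", "internet"),  # outbound traffic via the perimeter
}


def flow_permitted(src_zone, dst_zone):
    """Return True if traffic from src_zone to dst_zone is allowed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS


print(flow_permitted("internet", "dmz"))       # permitted
print(flow_permitted("internet", "internal"))  # blocked: must pass the DMZ
```

Writing the flows down this way makes it obvious where a firewall must sit: on every boundary between two zones, enforcing exactly the permitted pairs.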