Chapter 7 – Security in Networks

 

Networks are becoming vital to modern life as we know it, having become critical to both computing and commerce.  There has even been a movement to view most computers as Internet appliances – the network being the real tool and computers just appendages.  This may be a bit of a stretch, but we must acknowledge our reliance on networks.  Every time I visit my daughter in Minneapolis, MN and charge dinner on my credit card, I make use of networks and rely on their correctly transmitting the details of my charge and the subsequent approval of that charge.  I also assume that only persons and processes authorized to access the credit card data actually do access those data.  In other words, I assume a lot.

 

In worrying about the number of possible attacks on networks, one should not overlook natural problems, such as solar storms that have been quite common in the fall preceding the writing of these notes.  Most natural occurrences can be planned for, so we focus on malicious activities in our study of network security.

 

Networks are studied extensively in a course that is prerequisite to this course, so we shall not spend much time in discussing them per se, but focus on their security problems.  There are several features of the Internet that lead to security problems.

 

Anonymity

Here is the famous New Yorker cartoon referenced in the book – the one captioned "On the Internet, nobody knows you're a dog."  A network removes many of the clues we normally use to assess a person's age and other personal characteristics.  If the author of these notes claims to be a 20-year old hacker, how would you tell, via the web, that he is not?


Automation

When malicious hackers were limited to manual input of each attack, things were a lot safer.  Now we have many tools that will automate attacks, such as port scans.  I can set my computer to scan every port on every computer in a specific Internet address range and then go to the bar and have a drink while my little robot does my dirty work.
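
To make the point concrete, here is a minimal sketch, in Python, of the sort of automated probe described above – a simple TCP "connect" scan.  The host and port range are only illustrative, and one should scan only machines one is authorized to test.

    import socket

    def scan(host, ports, timeout=0.5):
        """Return the list of ports on which a TCP connection succeeded."""
        open_ports = []
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            # connect_ex() returns 0 when the TCP handshake succeeds (port open)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
            s.close()
        return open_ports

    # Scan the low-numbered ports on the local machine (a harmless example).
    print(scan("127.0.0.1", range(1, 1025)))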

 

Opaqueness

Due to the structure of the Internet, one cannot tell if another user is local, in the same city, or even in the same country.  Occasionally, a sophisticated timing test can be performed, but usually the only way to tell for sure is to do a back-trace, which requires special equipment.  Similarly, one cannot tell the nature of the source – is it a laptop in an airport, a terminal in a school lunch room, or a PC in some malicious hacker's basement?

 

Boundary Issues

In physical security, the boundaries of an entity are often clear – anything within the walls of the company building, on the company property, etc.  The Internet has no fixed boundaries; moreover, a specific network may not have the actual boundaries it appears to have.  Consider a very common problem that occurs when a worker attaches a telephone modem to a company computer in order to be able to access that computer from home.

 

 

Network Transmission Media

There are a number of ways to transmit information over a computer network.  We cover each briefly, only in the context of access by unintended recipients.

 

Copper Wire

There are a number of technologies that use copper wire, including twisted pair and coaxial cable.  In many network set-ups, the maximum distance over which these technologies are used is rather small, thus limiting the vulnerability to unauthorized access.  One should note that methods to “tap” these technologies are well known and hard to detect if done well.

 

Optical Fiber

Optical fiber cables have become more popular due to the very high bandwidth that they support, recently up to 1 Gbps (10^9 bits per second) for a reasonably sized cable.  Optical fibers are also considerably harder to "tap" – that is, to gain unauthorized access to the signal content without disrupting the signal as received by the authorized recipient.

 

Wireless: Microwave and Infrared

These media use electromagnetic waves to carry information.  Depending on the wavelength of the radiation, we call these waves “radio”, “microwave”, “infrared”, or “optical”.  The general rule of thumb is that the shorter the wavelength, the shorter the transmission distance.  The real difference is whether or not the transmitter and receiver must share a common “line of sight”; i.e., be visible to each other.  Longer-wave radiations, such as AM broadcast radio in the United States, can “bend” around corners and travel for hundreds of miles.

 


One should note that the term “wireless”, when applied in the context of networks, generally refers to specific technologies, such as cell phones with a normal maximum range of a few kilometers or an 802.11 device with a normal range of a few hundred meters.

 

It can be shown that the maximum distance between two microwave towers, each of height h, for transmission by line of sight is approximately D = 7.1·√h, where h is given in meters and D is the distance in kilometers.  The book's example of a 30 mile (or 50 kilometer) range seems to correspond to a tower height of about 50 meters.

 

It should be noted that the interception of any electromagnetic waves is quite simple.  For longer waves, such as AM radio transmission, the broadcast is omni-directional, so that all one has to do is place an antenna somewhere.  Microwaves, infrared, and shorter-wavelength broadcasts tend to be directional, so that one has to be in the line of transmission in order to have access to the broadcast.  However, this is also easy to do.

 

The figure below illustrates the basic geometry for interception of a directional signal.  We postulate that the transmitter has the beam focused towards the receiver and note that the interceptor can be anywhere in the beam, either closer or more distant from the transmitter than the intended receiver.  No microwave beam can be focused only on the receiver.

 

T (Transmitter), R (Intended Receiver) and I (two Interceptors)

 

The textbook states that the beam is not focused on a single site, but allowed to "spread out".  In fact the fundamental laws of physics dictate that the beam will be spread, with a specific angular width.  The beam will be most intense along its central axis and become less intense as one moves off the axis.  One measure of beam width is the angular width for half power.  The width of the beam can be considered to be 2·Φ, where Φ is given by the formula

 

Φ ≈ 1.22·λ / D

where   λ is the wavelength of the radiation
            D is the diameter of the antenna (same units as the wavelength)
            Φ is the half-angle in radians.

 

Microwave frequencies lie in the 2 to 40 GHz range, corresponding to a wavelength of 15 to 0.75 centimeters.  For a 0.75 centimeter wavelength and 1 meter (100 cm) diameter antenna, the half angle would be Φ ≈ 1.22·0.75 / 100 = 0.915 / 100 = 9.15·10^-3 radian, or about 0.5 degree for a full width of about a degree.  At 10 kilometers, the width of the beam would be about 2·10^4·tan(9.15·10^-3 radian) = 2·10^4·9.15·10^-3 = 183 meters.  At 100 kilometers from the antenna, the full beam would span a lateral distance of 1.83 kilometers.  Any antenna within that width would receive a high-quality signal.
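
The arithmetic above is easy to reproduce; here is a short Python sketch using the same 0.75 cm wavelength and 1 meter antenna assumed in the example.

    import math

    wavelength_cm = 0.75      # roughly a 40 GHz microwave signal
    antenna_cm    = 100.0     # a 1 meter diameter dish

    half_angle = 1.22 * wavelength_cm / antenna_cm        # about 9.15e-3 radian
    for distance_km in (10, 100):
        # full beam width = 2 * distance * tan(half-angle)
        width_m = 2 * distance_km * 1000 * math.tan(half_angle)
        print(f"{distance_km:4d} km: beam width is about {width_m:7.0f} meters")
    # prints about 183 meters at 10 km and about 1830 meters at 100 km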

 


The ISO Open Systems Interconnection Model

At this level, the student is expected to be familiar with the ISO 7-level model for network communications.  As a practical matter, the more important model is TCP/IP (which stands for Transmission Control Protocol / Internet Protocol).  The TCP/IP layers roughly match those used in the ISO model, but the mismatch is not important.

 

In each model, an application is viewed as communicating directly with another application at the same level, despite the fact that the communication is indirect via lower levels of the protocol (excepting the physical layer).  The ISO model is a good approach for creating network services and conceptualizing their interaction.  The TCP/IP model is the more important for actual implementation of a network.

 

Why Do People Attack Networks?

Malicious hackers attack networks for a number of reasons, including the challenge of the "sport", fame, money, revenge, and espionage.  For practical purposes, though, there are only two kinds of attack: a targeted attack (the attacker wants to get into this specific network) and a random attack (the network is just a convenient target of opportunity).

 

Targeted attacks are usually carried out by rather sophisticated hackers who have a specific reason to attack the targeted network and no other.  Espionage attacks definitely fall in this category.  If I want to steal some government secrets, I am less likely to hack into a network owned by a fast-food restaurant.  On the other hand, if I carry a grudge against fast-food restaurants for allowing me to eat all that fattening food and consequently to get fat, I might target these web sites and networks.

 

Untargeted (random) attacks seem to be more common.  These attacks are often carried out by unskilled attackers, sometimes called “script kiddies” because all they can do is to copy and slightly modify attack scripts written by skilled hackers.

 

 

Social Engineering

One of the most potent attacks against a network takes advantage of the well-known weakest point of any computer network: the humans who interact with that network.  As commonly defined, social engineering involves use of social skills in order to persuade a person to reveal information that should remain secret.

 

One of the more interesting applications of social engineering comes from the days in which bomb threats seemed to be telephoned to a different company every day.  The instruction sheet for answering the telephone and talking to the person making the threat included instructions on getting him to describe the bomb, its location, and its timing mechanism.  The person answering the telephone was instructed to be polite and respectful when speaking to this criminal in order to obtain the maximum information before he discontinued the call.  The last two questions in the list for those answering such a call were "Who are you?" and "Where do you live?"  A surprising number of callers actually answered these questions.  One of the best references on social engineering is the book The Art of Deception, written by Kevin Mitnick and published by Wiley Publishing, Inc. in 2002 (ISBN 0-471-23712-4).


Other Attacks

The book discusses a number of other attacks, including impersonation, spoofing, and session hijacking.  One famous example of session hijacking occurred when then-President Bill Clinton appeared to admit to a fondness for Internet pornography.  What actually happened is that the president was being interviewed over the Internet and the session was hijacked by a malicious hacker who inserted the reference to pornography.

 

The book describes a number of denial of service or DOS attacks, including the ping of death and smurf attacks (I am certain that Papa Smurf would disapprove).  For those of you who are culturally impoverished, I have included a picture of Papa Smurf, taken from the web site www.smurf.com.  The Smurfs is an animated cartoon show adapted from a comic strip that first appeared in Belgium in 1958.  It is still running.

 

 

DDOS (Distributed Denial of Service) attacks are among the more malicious attacks.  The basic idea of a DOS attack is to send a target computer a stream of traffic too large for it to handle, thus shutting it down.  The one problem for the hacker is the relative speeds of the attacking computer and the target computer; if the target is faster, then the attack will fail.

 

The result of this last observation is the DDOS attack, in which a malicious hacker infects a number of intermediate machines (called "zombies") with code to attack the target machine.  These all attack at once, possibly on a signal from the attacker, and suddenly the target machine has to defend against a large number of attackers.

 

 

Network Security Controls

Right up front, we should mention the fact that network security controls, like all security precautions, should irritate everybody, but not excessively.  If the controls do not bother anybody, they are probably not sufficient.  If they bother everybody, they will be ignored or circumvented.  Passwords are a good example – if you make them too easy they will fall to a password cracker (such as a dictionary attack), and if you make them too hard to remember, such as "z79*Wq423Jftp$99", they will be written down and exposed.

 

As an aside, everybody thinks he or she has a clever way to disguise passwords, such as writing a combination “32 – 47 – 15” as a telephone number “832-4715”, but all malicious hackers know these tricks.  Suspecting that the above “telephone number” hides a six digit combination, a hacker would try the obvious 14 options.

 

The first step in devising security controls is a risk assessment, which is discussed in the next chapter.  For now, we merely claim that knowledge of what we have to protect goes a long way towards deciding how we should protect it.  There is a corollary here – some controls are so simple that they should be applied in any case.  Reasonable passwords and locks on office doors are examples of such simple controls.

 

A vulnerability in a network is a weakness that might be attacked; it is a potential avenue of attack – a way by which the system might fail.  In this it is differentiated from a threat, which is an action or event that might break the security of a system.  One can classify either vulnerabilities or threats by the targets of the attack.  The text presents a table of common network vulnerabilities on page 426. 

 

Encryption is probably the best protection against network vulnerabilities.  It is amazingly easy for a practiced malicious hacker to break into a network, either by guessing a username and password pair or by use of social engineering to convince a user to give up a password.  The next step is to make the files on a system hard to use except by those authorized to have access to them.  Encryption is the key.

 

Encryption is also applied to data in transit.  Using the OSI model, we can name two layers at which the encryption might be applied – the Data Link Layer and the Presentation Layer (I know that the book says Application Layer for this; it is a small matter of semantics).  Of course, the data could be encrypted at the Presentation Layer and again at the Data Link Layer.

 

Link encryption offers many advantages.  The data are encrypted just prior to being presented to the physical layer for transmission and are decrypted just after receipt.  There are other advantages that will be presented below as disadvantages of end-to-end encryption.  The disadvantage of link encryption is that the data exist in the computer in the “plaintext” or unencrypted form and can be stolen there.

 

End-to-end encryption offers the advantage that data exist in the computer system only very briefly in plaintext form and are mostly handled in the encrypted form.  The difficulty here is that the message may contain certain clues, such as a priority level, that would help in setting up the routing.  If the priority level is in the part that is encrypted by the end-to-end method, then it is unavailable.  This actually appeared in a military system which followed a common security model called “red-black”.  In the “red state” the data are in plaintext form.  Data in this form are encrypted and passed as being in the “black state” or acceptable for handling by anybody – it is just a collection of bytes with no obvious structure or meaning.  Then the requirement was levied that the messages in the “black state” be accorded priority routing.  The problem is that, in this “black state”, the messages had no indication of priority, as that was considered sensitive (consider a FLASH message from the Pentagon to the U.S. missile submarine fleet – it is not likely to concern payroll data) and thus unavailable for use in the routing decisions.  This author was not directly involved in this project and does not know how this conundrum was solved.

 

The textbook discusses a number of applications of encryption to network security.  One of the more common today is a VPN (Virtual Private Network), in which access to a network resource is through an encrypted link, thus mimicking a true private network, which is implemented on a dedicated (and costly) private point-to-point physical data line.

 


PKI (Public Key Infrastructure) is an evolving technique that may enhance network security.  Two other protocols are SSH (Secure Shell) and SSL (Secure Socket Layer).  The security architecture to watch is the one associated with the new IPv6 protocol (version 6 of the IP Protocol Suite).  The transition to IPv6 was motivated by the inadequacy of the existing 32-bit address structure for the ever-expanding Internet.  As the change to a larger address space (128 bits, allowing for more than 3·10^38 distinct addresses – is that enough?) required a major overhaul of the protocol, it was decided to address other concerns, such as security.

 

This author has been informed that one of the goals of the security redesign was to hinder spoofing, in which the sender of a message can alter the source IP address so that the message appears to come from another source.  We can hope that this nuisance goes away.

 

One of the primary services of network security is to guarantee content integrity; that is, to ensure that the message has not been altered in transit.  Here is an example taken from one of this author's favorite space-fantasy novels by David Weber.  The message sent concerned the territorial interests of one of two antagonistic nations over a piece of disputed territory.

      Original message:                “We are not intending to seize the planet by force”.
      Altered message:                “We are intending to seize the planet by force”.

 

In this novel, the omission of one word leads to war – a not unrealistic scenario.

 

Encryption is one guarantee of message integrity, but only if the original message can be verified by sight.  If I send you a message written in standard English and then encrypted, any alteration of the encrypted message would almost certainly cause the message to be decrypted as gibberish.  But suppose that the message consists of a series of 32-bit integers, sent as four-byte entries.  A corrupted message might not be so easily detectable.
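
A toy example may make this clearer.  The Python sketch below uses a simple XOR keystream as a stand-in for a real stream cipher (it is not any particular product); flipping one bit of the ciphertext flips exactly one bit of the recovered plaintext.  English text corrupted in this way decrypts to obvious gibberish, but a block of 32-bit integers simply decrypts to different, plausible-looking integers.

    import random
    import struct

    random.seed(1)
    keystream = bytes(random.randrange(256) for _ in range(16))

    def xor_bytes(data, ks):          # "encryption" and "decryption" are the same
        return bytes(b ^ k for b, k in zip(data, ks))

    payload    = struct.pack(">4I", 1000, 2000, 3000, 4000)   # four 32-bit integers
    ciphertext = bytearray(xor_bytes(payload, keystream))
    ciphertext[5] ^= 0x80             # an attacker flips a single bit in transit

    tampered = xor_bytes(bytes(ciphertext), keystream)
    print(struct.unpack(">4I", tampered))
    # prints (1000, 8390608, 3000, 4000) -- corrupted, but not obviously so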

 

Error correcting codes provide simple guards against accidental message corruption, but are not really effective against an intentional attack.  The reason for this lack of security is that the codes are so easy to compute.  If you give me a message with a specified error correcting code, I can forge another message with the same error correcting code – this is only a bit more difficult for some of the cyclic redundancy check codes.  For this reason, we now have what are called message digests or cryptographic checksums.  There are two characteristics of a cryptographic checksum that the book forgets to mention (a short example follows the two characteristics listed below).

 

      1)   It must be impossible to retrieve the entire message, given only the checksum.  This
            requirement is met by any checksum that distills an entire message into 160 bits or
            less.  No many-to-one function is invertible.

 

      2)   Given a message and a checksum, it must be computationally infeasible to produce
            another message with the same checksum.  Within this context, computational
            infeasibility implies that it will take hundreds of years to produce the desired result.
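
As a small illustration of these two properties, the sketch below computes SHA-256 digests (one common cryptographic checksum, available in Python's standard hashlib module) for the two Weber messages quoted earlier.

    import hashlib

    original = b"We are not intending to seize the planet by force"
    altered  = b"We are intending to seize the planet by force"

    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(altered).hexdigest())
    # Each digest is only 256 bits, so the message cannot be recovered from it,
    # and the two digests bear no visible relationship to one another; finding
    # a second message with the same digest as the original is computationally
    # infeasible as far as anyone knows.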

 


At this point in the discussion, we should mention that many of these security features are based on problems that belong to the mathematical class NP-Hard.  While the precise definition of this class of problems is tedious, there is a practical difference that is important.

 

      Intractable    a problem is classified as intractable if it can be proven that no efficient
                            solution to the problem exists or can exist.

 

      NP-Hard       one of the characteristics of problems in this class is that there are
                            no known efficient algorithms that solve the problem, but no proof
                            that efficient solutions cannot exist.  When you base security on one of
                            these, you are betting that nobody can solve a problem that has resisted
                            solution by the best mathematical minds for over 50 years.  A good bet.

 

 

Authentication in Distributed Networks

What we are discussing is how to authenticate a user in a network of anonymous computers where the network links are not to be trusted.  Passwords provide one mechanism for user authentication, but one wants to avoid sending a password in clear text over the network.  The Kerberos protocol, developed at MIT, provides an interesting solution to the password problem.  There is a ticket-granting server that knows each user's password.  When a user logs on to the network, the user's workstation sends the user ID only to the ticket-granting server, which then responds with a ticket encrypted by the user's password.  If the user's workstation can decrypt the ticket using the password just typed in, the user is OK.  Note that the password is not stored on the workstation and never was transmitted on the network.
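
The sketch below is a toy version of that exchange, written in Python with only the standard library.  It is emphatically not the real Kerberos protocol (Kerberos adds tickets for other servers, session keys, timestamps, and a proper cipher), and the names and key-derivation parameters here are illustrative; it only shows how a server can hand over something that is readable only to someone who knows the password, without the password itself ever crossing the network.

    import hashlib
    import os

    def derive_key(password, salt):
        # Stretch the password into a 32-byte key (parameters are illustrative).
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def xor_with_key(data, key):
        # Toy cipher: XOR the data with the key, repeated as needed.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    # Server side: it knows the user's password and builds an encrypted ticket.
    stored_password = "correct horse battery staple"
    salt   = os.urandom(16)
    ticket = b"TGT:" + os.urandom(16)                    # recognizable tag + nonce
    wire   = (salt, xor_with_key(ticket, derive_key(stored_password, salt)))

    # Workstation side: only the typed password plus what came over the wire.
    typed_password = "correct horse battery staple"
    salt, blob = wire
    recovered = xor_with_key(blob, derive_key(typed_password, salt))
    print("login accepted" if recovered.startswith(b"TGT:") else "bad password")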

 

Any serious student of network security should undertake a study of the Kerberos protocol, especially focusing on how the protocol evolved in response to new attacks as they were detected and analyzed.  No product can be considered secure if it has not been under continuous attack by a “red team” for some time.  Even then it may not be secure.  How nasty is your red team and how dedicated are its members to detecting flaws?

 

Kerberos is a complete solution, which means that every part of the network must use the protocol or it cannot be used.  However, one can use the above insight on passwords to design a simpler system.  In this version, a server would send a one-time password to the user, with this one-time password encrypted with the user’s password.  The user could then use the decrypted one-time password for the specific session only.  What are the weaknesses of such an approach?  One that comes to mind is that the session might be hijacked.  There may be other problems with this proposed protocol.

 

The book then discusses routers and firewalls.  Routers can be used as a part of a security solution by placing access control lists on the routers.  This solution is of limited utility, mostly due to the design goals for routers – to facilitate traffic movement.
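
For concreteness, a router access control list is essentially an ordered list of permit/deny rules, and the first rule that matches a packet decides its fate.  The Python sketch below illustrates the idea; the addresses and rules are made-up examples, not any vendor's syntax.

    from ipaddress import ip_address, ip_network

    ACL = [                          # (action, source network, destination port)
        ("deny",   ip_network("203.0.113.0/24"), None),  # block one outside network
        ("permit", ip_network("0.0.0.0/0"),      80),    # allow web traffic from anywhere
        ("deny",   ip_network("0.0.0.0/0"),      None),  # the usual implicit "deny all"
    ]

    def filter_packet(source, dest_port):
        for action, network, port in ACL:
            if ip_address(source) in network and port in (None, dest_port):
                return action
        return "deny"

    print(filter_packet("198.51.100.7", 80))   # permit
    print(filter_packet("203.0.113.9", 80))    # deny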

 

The more practical approach involves firewalls, either as stand-alone computers or as software packages placed on a personal computer.  Here is a general rule that is always true: No computer should be attached to the Internet unless it has a working firewall installed.

 

For a company network, the preferred approach is to have a single computer designated as a firewall for the company’s interior network and all the assets associated with that interior network.  A computer designated to be a firewall should be stripped of all software and data not directly related to its function as a firewall, such as editors, programming tools, password files, etc.  The only user interaction with the firewall should be to scan its audit logs.

 

The book then discusses intrusion detection systems (IDS's), which monitor a network to identify activity that is malicious or suspicious.  When the IDS operates as a separate device, it often operates in stealth mode, with a network interface card that listens to the network but never places any packets onto that network; the device has no published network address and cannot be detected by an outside device – hence stealth mode.

 

The chapter closes with a discussion of secure e-mail.  The student is reminded never to trust any e-mail, especially e-mail from one's friends, as such messages could have been initiated by a virus without the friend's knowledge.  This author's wife is a frequent computer user who accesses the Internet frequently as a part of her job; hence her vulnerability to attack by viruses is somewhat higher than normal.  Imagine her surprise when her friends noted that she was sending out e-mail claiming to have, as attachments, pictures of her naked.

 

This author has received e-mails entitled “I Love You”, but quickly discarded them as they were from people he had never heard of.  You guessed it – it was the Love Bug virus.

 


Appendix: Maximum Distance for Line-Of-Sight for a Given Tower

 

The task here is to compute the maximum distance over which two towers can communicate if the towers may communicate only via line-of-sight.  This means that the towers cannot communicate if they are not visible to each other.  There are many reasons that the towers might not be mutually visible; intervening mountains and large buildings can certainly obscure the line-of-sight between them.  Here we consider a theoretical upper limit to the line-of-sight due to the curvature of the earth.  We shall assume a perfectly spherical earth and ignore terrain variations, such as mountains, and atmospheric effects.  It is for that reason that the distance obtained will be an upper limit that is not often realized.

 

Consider a tower transmitting to a receiver that is on the surface of the earth.  The maximum distance will be obtained when the beam barely grazes the surface; i.e. is tangent to the great circle drawn through the transmitter and receiver.  This situation is illustrated in the figure.  As the beam continues to propagate, we are faced with a similar problem – how high must an antenna be to be in the path of the beam as it radiates further and further from the earth’s surface and finally into space.

 

The key to solving this problem is to obtain the distance from the transmitting tower to the point on the great circle at which the beam is tangent to the earth.  We do a little geometry here.  The first step is to recall the definition of angular measurement in radians.  If an angle projected from the center of a circle of radius R onto its circumference spans a distance of D, then the angular measure in radians is Θ = D / R.  Note that if it spans 2·π radians, then the total distance is D = Θ·R = 2·π·R; thus 2·π radians = 360 degrees.

 

 

Two Towers of Height h Communicating by Line-of-Sight


Inspection of the figure shows that the distance from the tower of height h to the farthest point that can see the top of the tower is given by D = R·Θ, where R is the radius of the earth and Θ is determined by cos(Θ) = R / (R + h) ≈ 1 – h/R.  Before using the approximation in our derivations, let's justify it.  The radius of the earth is approximately 6.3784·10^6 meters.  Suppose that h/R = 2.0·10^-4, corresponding to a tower height of 1.276 kilometers or 4186 feet.  Then, we have the following.

              1 + h/R       = 1.0002

              1 / (1 + h/R) = 0.999800039992

              1 – h/R       = 0.999800000000, for an error of 4·10^-6 percent.

 

This establishes the value 1 – h/R as an acceptable estimate of cos(Θ) for our purposes.

So we are using the equality cos(Θ) = 1 – h/R to get a value of the angle Θ.  To avoid taking the inverse cosine of a number, we resort to another approximation.

 

We use the series expansion for cos(Θ), which begins cos(Θ) = 1 – Θ^2/2 + Θ^4/24 – …, to conclude that for |Θ| very small we can say cos(Θ) ≈ 1 – Θ^2/2.  Hence we have

 

Θ^2/2 = h/R, or

 

Θ = √(2·h/R), and

 

D = R·Θ = R·√(2·h/R) = √(2·h·R).

 

Suppose that h = 1 kilometer = 10^3 meters, a fairly tall tower.

Then 2·h·R = 2·10^3 meters · 6.3784·10^6 meters = 1.27568·10^10 (meters)^2 and
D = √(1.27568·10^10) = 1.1295·10^5 meters, or approximately 113 kilometers.

 

One can make a general formula by noting that √(2·R) = √(2 · 6.3784·10^6) = 3.572·10^3, so that the distance in meters is given by D = 3.572·10^3·√h, where h is given in meters.  For the same tower, h = 10^3 meters and √h = 31.623, so that D = 1.1295·10^5 meters, as above.

 

For two towers, each of height 1 kilometer, trying to transmit by line of sight, the maximum separation is approximately twice the above number, or 226 kilometers.
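
The derivation is easy to check numerically; here is a short Python sketch that reproduces the figures above and the 7.1·√h rule of thumb quoted earlier in the chapter.

    import math

    R = 6.3784e6                         # radius of the earth, in meters

    def horizon_distance(h):
        """Distance in meters from a tower of height h (meters) to its radio horizon."""
        return math.sqrt(2 * h * R)

    d = horizon_distance(1000.0)         # a 1 kilometer tower
    print(f"one tower : {d / 1000:.1f} km")        # about 112.9 km
    print(f"two towers: {2 * d / 1000:.1f} km")    # about 225.9 km

    # The rule of thumb from earlier in the chapter: D = 7.1 * sqrt(h) km, h in meters.
    print(f"rule of thumb for h = 50 m: {7.1 * math.sqrt(50):.0f} km")   # about 50 km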

 

Just to be complete, let's estimate the error in using the partial series 1 – Θ^2/2 as the value of cos(Θ).  In the first example, with the monstrous tower of height given by h/R = 2.0·10^-4, we would say that cos(Θ) = 0.999800039992, without the first approximation used for the reciprocal of (1 + h/R).  Using the approximation of 1 – h/R as the reciprocal of (1 + h/R) and using the approximation of 1 – Θ^2/2 for cos(Θ), we arrived at Θ = √(2·h/R), or Θ = √(4.0·10^-4) = 0.02 radians.  An exact calculation gives cos(0.02) = 0.9998000066666, for an error of 3.33·10^-6 percent.  Thus, we conclude that for very small numbers we can use some of these approximations, and specifically that for any reasonable tower height the formula derived above for maximum range is sufficiently accurate.