IP traceback
IP traceback is the name given to any method for reliably determining the origin of a packet on the Internet. The datagram nature of the Internet makes it difficult to determine the originating host of a packet – the source address supplied in an IP packet can be falsified (IP spoofing), allowing for denial-of-service (DoS) attacks or one-way attacks (where the response from the victim host is so well known that return packets need not be received to continue the attack). The problem of finding the source of a packet is called the IP traceback problem. IP traceback is a critical ability for identifying sources of attacks and instituting protection measures for the Internet. Most existing approaches to this problem have been tailored toward DoS attack detection. Such solutions require high numbers of packets (tens of thousands) to converge on the attack path(s). By nature, a solution requiring a large packet volume is specifically targeted toward DoS attacks and tends to be probabilistic.
A brute force solution to traceback can be obtained by having every router mark every packet as it passes through it. An alternative brute force solution requires every router to keep a record of every packet that passes through it. Such solutions are infeasible because: (1) they require a large and unbounded space in each packet (or router); (2) they require a large overhead at every router. Thus, most existing approaches to traceback attempt to reduce the above two effects.
IP traceback was first suggested by Savage et al. and later extended by many others.
Of the approaches described, some are probabilistic and a few are deterministic. Some try to store information about packets in routers along the way, and some try to store information in the packet itself, in either the fragment ID field or the options field.
IP address retracing
Retracing an IP address is a process by which a company may discover who was assigned a particular IP address at a particular time. The company must have firm grounds (or legal authority) to do so. Uses of this process include enforcing standards of conduct for an Internet forum or tracing illegal activity.
If an individual has violated the standards of conduct for a web forum (e.g., by threatening another person), and the terms of conduct state that violators will pay an $80 fine, then the company that owns the forum may initiate IP address retracing to discover the identity of the violator. The process typically works as follows:
- The person who is threatened reports the post to the forum moderator.
- The moderator removes the post and checks the forum's Web server logs. There, the moderator discovers that someone using the IP address 203.0.113.45 posted the threat at 12:00 AM.
- The forum company then performs a WHOIS search to find out which Internet service provider (ISP) was allotted the IP address 203.0.113.45 at 12:00 AM (as the IP address may change periodically). The search results show that abcdef ISP owned the IP address at that time.
- The forum company contacts abcdef ISP and asks which of its customers was allotted 203.0.113.45 at 12:00 AM.
- abcdef ISP is now obliged to respond, giving the name and address of the person who was assigned the IP address. Most likely, this will be the person who posted the threat on the forum, and he or she can then be fined as stated in the forum rules.
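The moderator's log lookup in the second step can be sketched as a simple scan of the Web server's access log. The log format, timestamp, IP address and helper names below are all illustrative, not taken from any real forum:

```python
import re

# Matches the common (Apache-style) access-log layout: client IP,
# identity fields, bracketed timestamp, then the quoted request line.
LOG_LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)"')

def find_poster_ip(log_lines, timestamp):
    """Return the client IP of the request logged at `timestamp`, if any."""
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group("time").startswith(timestamp):
            return m.group("ip")
    return None

sample = [
    '203.0.113.45 - - [01/Jan/2024:00:00:00 +0000] "POST /forum/reply HTTP/1.1" 200 512',
]
print(find_poster_ip(sample, "01/Jan/2024:00:00:00"))  # → 203.0.113.45
```

In practice the moderator would of course use the log-search tooling of the forum software rather than a hand-rolled regex, but the principle – mapping a timestamp to a client IP – is the same.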
Packet marking
Probabilistic packet marking
Savage et al. suggest probabilistically marking packets as they traverse routers in the Internet. More specifically, they propose that each router mark the packet, with low probability (say, 1/20,000), with either the router's IP address or the edges of the path that the packet traversed to reach the router.
For the first alternative, analysis shows that in order to reconstruct the correct attack path with 95% accuracy as many as 294,000 packets are required [1]. The second approach, edge marking, requires that the two nodes that make up an edge mark the packet with their IP addresses along with the distance between them (the latter requires 8 bits to represent the maximum hop count allowable in IP). This approach would require more state information in each packet than simple node marking but would converge much faster. They suggest three ways to reduce the state information of these approaches to something more manageable.
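The convergence behaviour of node marking can be illustrated with the bound commonly attributed to Savage et al., in which the number of packets the victim needs grows with the path length d and shrinks with a well-chosen marking probability p. The parameter values below are illustrative:

```python
import math

def expected_packets(d, p):
    """Bound on the number of marked packets the victim needs to
    reconstruct a path of d routers when each router marks with
    probability p (the ln(d) / (p * (1 - p)**(d - 1)) bound)."""
    return math.log(d) / (p * (1 - p) ** (d - 1))

# A 25-hop path with marking probability 1/25 needs on the order of
# a couple of hundred marked packets:
print(round(expected_packets(25, 1 / 25)))
```

The 294,000-packet figure quoted above corresponds to the much slower convergence of single-node marking at very low marking probabilities; edge marking with a probability tuned to the path length converges orders of magnitude faster.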
The first approach is to XOR each node forming an edge in the path with the other. Node a inserts its IP address into the packet and sends it to b. Upon detecting the mark (by finding a 0 in the distance field), b XORs its address with the address of a. This new data entity is called an edge ID and reduces the required state for edge sampling by half. Their next approach is to further take this edge ID and fragment it into k smaller fragments. A fragment is then randomly selected and encoded, along with its offset, so that the correct corresponding fragment is selected from a downstream router for processing.
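The edge-ID trick relies on XOR being its own inverse: once the victim knows one endpoint of an edge, XORing it back out recovers the other. A minimal sketch, with addresses modelled as 32-bit integers:

```python
def mark_edge(addr_a, addr_b):
    """Edge ID stored in the packet: the XOR of the two routers'
    addresses, halving the state needed versus storing both."""
    return addr_a ^ addr_b

def recover_far_end(edge_id, addr_known):
    # XOR is self-inverse: (a ^ b) ^ b == a, so the victim can peel
    # edges off one at a time starting from the last known router.
    return edge_id ^ addr_known

a, b = 0x0A000001, 0x0A000002      # 10.0.0.1 and 10.0.0.2 as integers
eid = mark_edge(a, b)
assert recover_far_end(eid, b) == a
```

Reconstruction therefore proceeds hop by hop outward from the victim, which is why all edges (and, with fragmentation, all fragments) must be collected before the full path emerges.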
When enough packets are received, the victim will receive all edges and all fragments so that an attack path can be reconstructed (even in the presence of multiple attackers). The low probability of marking reduces the associated overheads. Moreover, only a fixed space is needed in each packet.
Due to the high number of combinations required to rebuild a fragmented edge ID, the reconstruction of such an attack graph is computationally intensive according to research by Song and Perrig. Furthermore, the approach results in a large number of false positives. As an example, with only 25 attacking hosts in a DDoS attack the reconstruction process takes days and results in thousands of false positives [2].
Accordingly, Song and Perrig propose the following traceback scheme: instead of encoding the IP address interleaved with a hash, they suggest encoding the IP address into an 11-bit hash and maintaining a 5-bit hop count, both stored in the 16-bit fragment ID field. This is based on the observation that a 5-bit hop count (32 maximum hops) is sufficient for almost all Internet routes. Further, they suggest that two different hashing functions be used so that the order of the routers in the markings can be determined. Next, if any given hop decides to mark, it first checks the distance field for a 0, which implies that a previous router has already marked it. If this is the case, it generates an 11-bit hash of its own IP address and then XORs it with the previous hop. If it finds a non-zero hop count, it inserts its IP hash, sets the hop count to zero and forwards the packet on. If a router decides not to mark the packet, it merely increments the hop count in the overloaded fragment ID field.
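The per-router marking decision described above can be sketched as follows. The hash function, marking probability and field packing here are illustrative stand-ins, not the authors' exact construction:

```python
import random

HASH_BITS, HOP_BITS = 11, 5

def ip_hash(addr):
    # Stand-in for the 11-bit hash of a router's address (illustrative).
    return hash(addr) & ((1 << HASH_BITS) - 1)

def process(field, addr, mark_prob=1 / 25):
    """One router's handling of the overloaded 16-bit fragment-ID field,
    packed as (11-bit IP hash << 5) | 5-bit hop count."""
    iphash, hops = field >> HOP_BITS, field & ((1 << HOP_BITS) - 1)
    if random.random() < mark_prob:
        if hops == 0:
            iphash ^= ip_hash(addr)   # previous hop marked: fold our hash in
        else:
            iphash = ip_hash(addr)    # start a fresh mark
        hops = 0
    else:
        hops = (hops + 1) & ((1 << HOP_BITS) - 1)
    return (iphash << HOP_BITS) | hops
```

Because marked fields carry only an XOR of consecutive router hashes plus a distance, the victim reconstructs the path much as in the edge-sampling scheme, but against a table of precomputed router hashes rather than raw addresses.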
Song and Perrig [2] identify that this is not robust enough against collisions and thus suggest using a set of independent hash functions, randomly selecting one, hashing the IP along with a FID (function ID), and then encoding this. They state that this approach essentially reduces the probability of collision to (1/2^11)^m. For further details see Song and Perrig [2].
Deterministic packet marking
Belenky and Ansari initially outlined a deterministic packet marking approach. In their letter they describe a more realistic topology for the Internet – composed of LANs and ASs with a connective boundary – and attempt to put a single mark in inbound packets at the point of network ingress. Their idea is to put, with probability 0.5, the upper or lower half of the IP address of the ingress interface into the fragment ID field of the packet, and then set a reserved bit indicating which portion of the address is contained in the fragment field. By using this approach they claim to be able to obtain zero false positives with 0.99 probability after only seven packets.
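The half-address marking idea can be sketched as follows; the packing and helper names are illustrative, not Belenky and Ansari's notation:

```python
import random

def mark(ingress_addr):
    """One mark (sketch): a random half of the 32-bit ingress-interface
    address plus a flag bit saying which half it is."""
    if random.random() < 0.5:
        return ingress_addr >> 16, 1         # upper half, flag set
    return ingress_addr & 0xFFFF, 0          # lower half, flag clear

def reassemble(marks):
    """Recover the ingress address once both halves have been seen."""
    upper = next(frag for frag, flag in marks if flag == 1)
    lower = next(frag for frag, flag in marks if flag == 0)
    return (upper << 16) | lower

addr = 0xC0A80101                            # 192.168.1.1 as an integer
marks = []
while len({flag for _, flag in marks}) < 2:  # collect until both halves seen
    marks.append(mark(addr))
assert reassemble(marks) == addr
```

Because each packet carries a coin-flip choice of half, only a handful of packets is needed before both halves have arrived, which is consistent with the seven-packet convergence claim.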
Unfortunately, their approach is extremely light on details and fails to address how upstream routers are to account for their marking if the packet has already been marked. They also fail to discuss trans-leaf segments, shared segments and transit segments. Also, their approach does not take into account the non-unique status of an IP address that NAT confers on network topologies. No follow-up work to this letter has been discovered.
Rayanchu and Barua provide another spin on this approach (called DERM). Their approach is similar in that they wish to use an encoded IP address of the input interface in the fragment ID field of the packet. Where they differ from Belenky and Ansari is that they wish to encode the IP address as a 16-bit hash of that IP address. Initially they choose a known hashing function. They state that there would be some collisions if there were greater than 2^16 edge routers doing the marking. In truth, it need not require so many to achieve a collision (by the nature of probabilities); this collision space is further increased by the fact that inbound interface addresses are used, and any "router" will very likely have multiple interfaces, all with differing IP addresses.
They attempt to mitigate the collision problem by introducing a randomly distributed selection of a hash function from a universal set, and then applying it to the IP address (while a hash identifier is discussed, how it is communicated to the receiver is unclear). In either hashing scenario, the source address and the hash are mapped together in a table for later lookup, along with a bit indicating which portion of the address has been received. Through a complicated procedure and a random hash selection, they are capable of reducing address collisions.
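The victim-side lookup that DERM relies on can be sketched as follows. The hash function (a truncated SHA-256) and the router addresses are stand-ins, not the authors' actual hash family:

```python
import hashlib

def derm_mark(ingress_ip: str) -> int:
    """16-bit mark written into the fragment-ID field (sketch)."""
    digest = hashlib.sha256(ingress_ip.encode()).digest()
    return int.from_bytes(digest[:2], "big")

# Victim-side table precomputed from the set of known edge routers.
# With more than 2^16 marking interfaces, collisions are guaranteed;
# with fewer, they are merely probable as the table fills.
edge_routers = ["203.0.113.1", "198.51.100.7", "192.0.2.9"]
table = {derm_mark(ip): ip for ip in edge_routers}

mark = derm_mark("198.51.100.7")
assert table[mark] == "198.51.100.7"   # lookup recovers the ingress point
```

A collision in this table is exactly the false-positive case criticised above: two ingress interfaces sharing a 16-bit hash become indistinguishable to the victim.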
By using a deterministic approach they reduce the time for the reconstruction procedure for their mark (the 16-bit hash). However, by encoding that mark through hashing they introduce the probability of collisions, and thus false positives. Like Belenky and Ansari's scheme, their approach does not address NAT – a significant shortcoming.
Router based approach
In this type of solution, the router is charged with maintaining information regarding packets that pass through it. For example, Sager proposes to log packets and then data-mine them later. This has the benefit of being out of band and thus not hindering the fast path. However, given link speeds on core routers, this approach simply will not scale: terabytes of storage would be required for logging, and even if that much space were available, the sheer I/O demand on the disks would slow everything down. Of course, probabilistic sampling techniques can reduce the data demands of this scheme.
Snoeren et al. propose marking within the router. The idea proposed in their paper is to generate a fingerprint of the packet, based upon the invariant portions of the packet (source, destination, etc.) and the first 8 bytes of payload (which is unique enough to have a low probability of collision). More specifically, m independent simple hash functions each generate an output in the range [0, 2^n − 1]. A bit is then set at each generated index, and the combined output of all the hash functions forms the fingerprint. All fingerprints are stored in a 2^n-bit table for later retrieval. The paper shows a simple family of hash functions suitable for this purpose and presents a hardware implementation of it.
The space needed at each router is limited and controllable (2^n bits). A small n makes the probability of collision of packet hashes (and false identification) higher. When a packet is to be traced back, it is forwarded to originating routers where fingerprint matches are checked. As time passes, the fingerprint information is "clobbered" by hashes generated by other packets. Thus, the selectivity of this approach degrades with the time that has passed between the passage of the packet and the traceback interrogation. A large drawback of this approach is that it requires significant changes to router infrastructure – true for any packet-marking scheme.
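The digest table described above is essentially a Bloom filter: membership tests never produce false negatives, but false positives grow as more fingerprints are folded in. A minimal sketch, with illustrative table size and hash count:

```python
import hashlib

N_BITS = 2 ** 16      # size of the digest table (2^n bits), illustrative
M_HASHES = 3          # number of independent hash functions, illustrative

def indices(packet: bytes):
    """Yield M_HASHES bit positions derived from the packet's invariant
    bytes, using salted SHA-256 as a stand-in for the paper's hashes."""
    for i in range(M_HASHES):
        h = hashlib.sha256(bytes([i]) + packet).digest()
        yield int.from_bytes(h[:4], "big") % N_BITS

def record(table: set, packet: bytes):
    table.update(indices(packet))               # set the fingerprint bits

def seen(table: set, packet: bytes):
    # May yield false positives as the table fills; never false negatives.
    return all(i in table for i in indices(packet))

table = set()
record(table, b"invariant header fields + first 8 payload bytes")
assert seen(table, b"invariant header fields + first 8 payload bytes")
```

The "clobbering" effect in the text corresponds to the table accumulating bits from later traffic until almost any queried packet appears to match, which is why the digest tables are only valid for a short window.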
The final known take on the router-based schemes comes from Hiroaki Hazeyama et al. In their approach, they wish to integrate the SPIE approach as outlined by Snoeren [6] (above) with their approach of recording the layer-2 link ID along with the network ID (VLAN or true ID), the MAC address of the layer-2 switch that received the packet, and the link ID it came in on. This information is then put into two lookup tables – both containing the switch (layer-2 router) MAC ID for lookup. They rely on the MAC:port tuple as a method of tracing a packet back (even if the MAC address has been spoofed).
This mechanism certainly could work, but there is a finite space on any given switch to store such information. To help mitigate this problem, they use Snoeren's hashing approach and implementation (SPIE), modifying it to accept their information for hashing. By their own admission, their algorithm is slow (O(N^2)), and with only 3.3 million packet hashes being stored, the approximate time before the digest tables are invalid is one minute. This dictates that any attack response must be real-time – a possibility only on single-administrative LAN domains. To their credit, though, they are using real hardware implementations of their idea, which alleviates some arguments about plausibility.
Out-of-band approaches
In the ICMP traceback scheme, Steven M. Bellovin proposes probabilistically sending an ICMP traceback packet forward to the destination host of an IP packet with some low probability. Thus, the need to maintain state in either the packet or the router is obviated. Furthermore, the low probability keeps the processing overhead as well as the bandwidth requirement low. Bellovin suggests that the selection also be based on pseudo-random numbers to help block attempts to time attack bursts. The problem with this approach is that routers commonly block ICMP messages because of security issues associated with them.
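The per-router emission decision can be sketched as follows; the message contents, helper names and probability value are hypothetical, not the actual ICMP traceback message format:

```python
import random

TRACE_PROB = 1 / 20000   # illustrative low emission probability

def maybe_emit_itrace(router_addr, dst_addr, emit, rng=random.random):
    """With low probability, send an out-of-band traceback message
    toward the destination, carrying this router's identity (sketch)."""
    if rng() < TRACE_PROB:
        emit({"type": "itrace", "router": router_addr, "dst": dst_addr})

sent = []
# Force emission for the example by stubbing the random source:
maybe_emit_itrace("203.0.113.1", "198.51.100.9", sent.append, rng=lambda: 0.0)
assert sent and sent[0]["router"] == "203.0.113.1"
```

Under attack volumes, even this tiny probability means the victim steadily accumulates traceback messages naming routers on the attack path, without any extra state in the data packets themselves.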
Trace-back of active attack flows
In this type of solution, an observer tracks an existing attack flow by examining incoming and outgoing ports on routers starting from the host under attack. Thus, such a solution requires having privileged access to routers along the attack path.
To bypass this restriction and automate this process, Stone proposes routing suspicious packets on an overlay network using ISP edge routers. By simplifying the topology, suspicious packets can easily be re-routed to a specialized network for further analysis.
This is an interesting approach. By the nature of DoS, any such attack will be sufficiently long-lived for tracking in such a fashion to be possible. Layer-three topology changes, while hard to mask from a determined attacker, have the possibility of alleviating the DoS until the routing change is discovered and subsequently adapted to. Once the attacker has adapted, the re-routing scheme can once again adapt and re-route, causing an oscillation in the DoS attack and granting some ability to absorb the impact of such an attack.
Other approaches
Burch and Cheswick propose a controlled flooding of links to determine how this flooding affects the attack stream. Flooding a link will cause all packets, including packets from the attacker, to be dropped with the same probability. If a given link is flooded and packets from the attacker slow, then that link must be part of the attack path. Upstream routers are then recursively "coerced" into performing this test until the attack path is discovered. The most important problem with this approach is that it is resource-intensive and highly intrusive; in fact, it may itself be viewed as a DoS attack.
The traceback problem is complicated by spoofed packets. Thus, a related effort, known as ingress filtering, is targeted at preventing spoofed packets: it restricts spoofed packets at ingress points to the network by tracking the set of legitimate source networks that can use a given router.
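A minimal sketch of the ingress-filtering check, with illustrative customer prefixes:

```python
import ipaddress

# The edge router drops packets whose source address falls outside the
# prefixes legitimately reachable through this ingress interface.
# Prefixes below are from the documentation ranges, purely illustrative.
LEGITIMATE_SOURCES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def ingress_allows(src: str) -> bool:
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in LEGITIMATE_SOURCES)

assert ingress_allows("203.0.113.9")       # valid customer source
assert not ingress_allows("192.0.2.1")     # spoofed: not a known prefix
```

As the text notes, the check is only as tight as the prefix set: an attacker inside a large transit network can still spoof any address the filter considers legitimate.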
Park and Lee present an extension of ingress filtering at layer 3. They present a means of detecting false packets, at least to the subnet, by essentially making use of existing OSPF routing state to have routers make intelligent decisions about whether or not a packet should be routed. Their approach makes sense and can be implemented inside routers with little overhead.
Ingress filtering is recommended IETF practice and is widely deployed. However, a sophisticated attacker can still spoof packets from the set of networks that its Ingress Filter allows. This is especially true for networks that are used for transit, where the set of legal sources may be large. Ingress filtering extensions down to the leaf networks have been proposed to further limit spoofing. Regardless, by blocking packets with known illegitimate addresses at routing devices, the space of the attack origin is substantially reduced.
Ultimately, unless something on the order of a legal mandate emerges, IP traceback may never be widely deployed.