===================================================
From twngan@cs.rice.edu Wed Feb 18 15:53:04 2004
===================================================

This paper suggests a new approach to making nodes more robust against flooding. The idea is capabilities: nodes must obtain permission to send ahead of time, or their packets will be dropped by any "verification point" in the network. This proposal seems effective in the sense that the destination can limit the arrival of unwanted traffic.

However, I seriously doubt that the approach would be generally useful. There is nothing wrong with the capability approach itself, or with the suggested mechanism to implement it. The bigger question is how one could use capabilities to prevent DoS. For example, if someone mounts a DDoS attack on a web server, how could the web server react using capabilities? If it simply refuses to grant permission, then the DoS is still successful, as real users are refused service, too. It seems that the only type of DoS it can effectively prevent is flooding the victim's network with junk, which is a relatively rare, less destructive attack.

===================================================
From gulati@cs.rice.edu Wed Feb 18 16:15:22 2004
===================================================

--------------------------------------------------------------
Preventing Internet Denial of Service with Capabilities

This paper presents an architecture in which a source needs to obtain permission to send from the destination, giving the destination node control over its own resources. The destination node sends out tokens, or capabilities, to the source via Request-To-Send (RTS) servers, and data packets must carry these tokens, which are verified by verification points (VPs) along the route. Decoupling the path used for receiving tokens from the path of the actual data packets helps protect existing data flows from attack. The approach is incrementally deployable and scalable. The paper floats the idea of colocating RTS servers with BGP speakers, with VPs closely coupled to the RTS servers. Many details are missing from the paper, and the authors defer some of them to future work.

I feel that adding RTS servers and verification points along the route makes the already complicated Internet architecture even more complex. This adds to the burden on network administrators and makes the network more difficult to understand and maintain. The authors don't provide any numbers to support their claims about the work required per packet, packet delay, scalability, or ease of deployment. A better solution would aim to use the current infrastructure and leverage optional fields in existing protocols. I am also not sure about the claim that this solution helps in adding new applications: since full control over resource distribution is given to the destination, there is no way for an application to specify its requirements to the destination, which in my view might prevent some applications from running.

---------------------------------------------------------------------------
-Ajay

===================================================
From santa@rice.edu Wed Feb 18 21:12:56 2004
===================================================

Preventing Internet Denial-of-Service with Capabilities

Review: This paper outlines a novel approach to preventing denial-of-service attacks. The secondary goal is to keep the Internet open to new applications, and not to trade openness for security.
The approach the authors take is based on the well-founded notion of capabilities, used in systems for many decades. The sender node must obtain permission from the destination before sending traffic; only verified traffic is allowed through the Internet. The approach is refreshingly different from most previous approaches to Internet security.

There are a few concerns, though. The scalability of the RTS servers is a cause for concern, as state for a very large number of flows needs to be remembered. True, a distributed attack has no effect on already established connections, but no new connections can be created. The assumption that attackers cannot snoop links is absurd, as anyone can sniff traffic on a LAN and replay the capability.

===================================================
From ahae@cs.rice.edu Wed Feb 18 23:15:25 2004
===================================================

The authors propose a method to prevent and constrain denial-of-service attacks. Before transmitting any data, the sender must first obtain permission from the receiver. Each packet is then marked with a token of authority that can be verified by the network. Unauthorized packets can thus be discarded before they actually reach the victim.

The proposed approach has several significant weaknesses. First of all, it has significant overhead, because it involves not only a complex signaling protocol, which adds an entire round-trip delay to most connections, but also an entirely new infrastructure of RTS servers and validation points. Moreover, the issues of backwards compatibility and incremental deployment, which are crucial to the deployment of new services in the Internet, are not sufficiently addressed. Finally, the design is conceptually incompatible with a packet-oriented network like the Internet, where per-connection state has always been kept out of the network itself.

===================================================
From anupamc@cs.rice.edu Thu Feb 19 01:36:10 2004
===================================================

"Preventing Internet Denial-of-Service with Capabilities"

This paper outlines a mechanism to prevent DoS attacks based on capabilities. The mechanism requires a source to ask permission from its destination before sending packets. This permission is granted in the form of capabilities. RTS servers on the route between the source and the destination relay the RTS packets, while verification points (VPs) check the capabilities to weed out unauthorized packets. The authors claim the mechanism is suitable for incremental deployment.

This mechanism requires state to be maintained at each VP for every source-destination pair, which could require a significant amount of memory at some VPs. Thus the VPs themselves are open to DoS attacks if attackers discover a VP that is a potential bottleneck. Failure at any VP on the source-destination path would require the connection to be set up again, which is a big overhead; consider such a scenario in an ad hoc network where all nodes use this mechanism.

===================================================
From dushu@cs.rice.edu Thu Feb 19 13:07:42 2004
===================================================

The authors propose a virtual-channel-like mechanism to prevent denial-of-service attacks. The mechanism requires the source to acquire tokens, which represent the permitted bandwidth, from the destination before sending out any packets.
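To make the token idea concrete, here is a minimal sketch of the per-flow check a verification point might perform. This is an illustration only, not code from the paper; the Packet fields, the table layout, and all the names are my assumptions:

    # Illustrative sketch, not the paper's code.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str        # source address
        dst: str        # destination address
        token: bytes    # the capability carried in every data packet
        payload: bytes = b""

    class VerificationPoint:
        """Drops any packet that does not carry the token granted for its flow."""

        def __init__(self):
            self.flow_table = {}   # (src, dst) -> expected token

        def install(self, src, dst, token):
            # Called when the destination grants a capability for this flow.
            self.flow_table[(src, dst)] = token

        def forward(self, pkt: Packet) -> bool:
            expected = self.flow_table.get((pkt.src, pkt.dst))
            if expected is None or pkt.token != expected:
                return False    # unauthorized: drop before it reaches the victim
            return True         # authorized: forward toward the destination

The point of the per-flow table is that junk traffic is discarded deep inside the network, before it can converge on the destination's access link.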
All the intermediate routers along the path from the source to the destination also store the token information and can verify it as they forward packets. The authors also use a one-way hash chain to give the destination continuous control over the virtual channel: the destination can hang up at any time, which further strengthens the walls against DoS attacks.

I agree with the authors that the DoS problem originates in the openness of the Internet: anyone can send packets to anybody at any time. I am not convinced, however, that the proposed mechanism can prevent DoS. The current trend in DoS attacks is for the attacker to compromise a huge number of machines, using a virus or some other means, and then have them attack the target simultaneously. And although they are called attacks, if you look at the requests one by one, they are really just normal service requests. Maybe I did not fully understand what the authors want to do, but I cannot see how the RTS servers, which are in charge of distributing the tokens, can be protected from huge volumes of malicious requests, since these requests look just like normal requests; if the RTS server blocks them, normal requests will be blocked as well. Looking at network behavior alone, it is really hard to tell the difference between a DoS attack caused by a new virus and the high volume of browsing requests that follows a hot event, which we consider normal.

Moreover, this virtual-channel-like mechanism has to deal with many problems. For example, it requires the intermediate routers to keep soft state for the ongoing flows, which is resource-consuming and hard to manage, and I believe it makes the intermediate routers themselves potentially vulnerable to DoS attacks. In general, I don't think the virtual channel is a new idea, although the authors put it into a new context here.

===================================================
From muhammed@ece.rice.edu Thu Feb 19 13:47:33 2004
===================================================

The authors propose a scheme that can be used to prevent DoS attacks. Their scheme involves the source obtaining and using capabilities (implemented with one-way hash chains) from the destination before sending any data packets to it. The destination application may refuse to issue "capabilities" to the source if it is resource-constrained. The intermediaries, the RTS servers, are responsible for indirectly contacting the destination and issuing a capability to the source.

But I feel their scheme will not work until RTS servers and VPs (verification points) are widely deployed. Without their wide presence in the Internet, the routing infrastructure will lose its ability to react to route changes, since data paths are constrained to flow through particular VPs. There will also be enormous storage overhead at the VPs, which must maintain state for each end-to-end flow (on the order of N^2 flows at the core of the network), and the VPs will have to do additional processing to validate and forward packets.

===================================================
From amsaha@cs.rice.edu Thu Feb 19 16:34:53 2004
===================================================

This is a good paper, which presents the use of capabilities to prevent denial-of-service (DoS) attacks in the Internet. When a source wants to send packets to a destination, the destination grants permission to the source in the form of a capability.
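These capabilities are implemented with one-way hash chains, as the reviews above also note. A minimal sketch of that construction may help; the names and parameters below are invented, not taken from the paper:

    # Illustrative sketch, not the paper's code. The destination builds a
    # chain v_0 <- H(v_1) <- ... <- H(v_n) and releases values in order
    # v_1, v_2, ..., so each one can be checked against its predecessor.
    import hashlib

    def make_chain(seed: bytes, n: int) -> list:
        chain = [seed]
        for _ in range(n):
            chain.append(hashlib.sha256(chain[-1]).digest())
        chain.reverse()        # chain[0] is the anchor the verifiers hold
        return chain           # chain[i] == H(chain[i+1])

    def verify(prev: bytes, released: bytes) -> bool:
        # A verifier holding v_i accepts v_{i+1} iff H(v_{i+1}) == v_i.
        return hashlib.sha256(released).digest() == prev

    chain = make_chain(b"destination-secret", 1000)
    assert verify(chain[0], chain[1])        # next item in the chain: accepted
    assert not verify(chain[0], chain[2])    # skipping ahead: rejected here

Because the hash is one-way, nobody can compute the next value from the current one, so the destination can hang up at any time simply by withholding the next item.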
Some special intermediate nodes in the path from the source to the destination also remember these capabilities, so that if a packet without a valid capability is observed at any of these nodes, the packet can be dropped immediately, preventing a DoS attack on the destination.

The major advantage is that the proposed architecture allows for incremental deployment, i.e., unlike several previous approaches to preventing DoS, this approach does not require widespread deployment to start off. The all-or-nothing behaviour of various previous protocols has rendered them useless, since the Internet is too large and managed by too diverse a community to expect any coordinated attempt to solve any of its problems.

The major disadvantages are as follows:

A distributed attack on the RTS channel is still a problem, and no new connections can be opened in that scenario. The authors claim that, unlike in today's Internet, existing connections will keep working. But existing connections are not going to last long, and when they have to renew their capabilities, they will fail. Besides, a distributed attack requires fewer compromised hosts under this approach, since the attacker only needs to choke the RTS channel's bandwidth, not the entire bandwidth. As a result, even though the data channel is free, no connections will be allowed, because the RTS channel is choked.

Where are the policies maintained? It seems most logical for the policies to be maintained at the RTS servers. However, a single RTS server may manage a large number of servers, since RTS servers are colocated with BGP speakers (which exist at the network boundaries of ASes), and hence policy management is going to be a nightmare. Of course, the applications cannot maintain the policies themselves, because that would mean modifying the server (as well as the client) applications.

Moderate problem: each packet from the source to the destination must carry a 64-bit capability. This is an overhead and could probably be reduced; the other paper says that even 16 bits is a problem to implement in an IP-compatible way.

Minor problem: a loose form of clock synchronization is required, since if the clocks of the verification points (VPs) run faster than the source's clock, packets might be dropped at the VPs. This, however, might not be much of an issue.

Philosophical point: what makes the authors believe that administrators will be willing to implement their radically new architecture when, to date, even simple approaches such as ingress/egress filtering have not been widely implemented?

Thanks,
Amit

+----------------------------------------------------------------------+
| Amit Kumar Saha, amsaha@rice.edu, http://www.cs.rice.edu/~amsaha     |
| Rice University, 6100 Main St, MS-132, Houston, Texas 77005, USA.    |
+----------------------------------------------------------------------+

===================================================
From anwis@cs.rice.edu Thu Feb 19 18:45:51 2004
===================================================

This paper discusses how the fundamental architecture of the Internet can be redesigned to account for DoS attacks. It first gives a brief overview of the different types of techniques that have been used to combat DoS: source address filtering, traceback and pushback, overlay filtering, and anomaly detection. The paper then proposes a new way to handle DoS attacks: through RTS (Request-to-Send) servers and VPs (verification points).
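As a rough illustration of that request/grant exchange: the destination's own policy decides whether to grant, and on a grant every VP on the path learns the token. Everything here (the refusal rule, the names, the 8-byte token) is my assumption about details the paper leaves abstract:

    # Illustrative sketch, not the paper's protocol.
    import secrets

    class Destination:
        """Owns its resources: it alone decides whether a sender may transmit."""

        def __init__(self, max_flows: int):
            self.max_flows = max_flows
            self.granted = {}                  # sender -> token

        def grant(self, sender: str):
            if len(self.granted) >= self.max_flows:
                return None                    # resource-constrained: refuse
            token = secrets.token_bytes(8)     # a 64-bit capability per flow
            self.granted[sender] = token
            return token

    def request_to_send(sender: str, dst_addr: str, dst: Destination, vps: list):
        # RTS servers relay the request toward the destination; on a grant,
        # each verification point's flow table learns the expected token.
        token = dst.grant(sender)
        if token is None:
            return None                        # refused: sender may not send
        for flow_table in vps:                 # one per-flow dict per VP
            flow_table[(sender, dst_addr)] = token
        return token

    dst = Destination(max_flows=100)
    vps = [dict(), dict()]                     # two VPs on the path
    token = request_to_send("alice", "server", dst, vps)

Note that even this toy version exposes the worries raised above: the grant path itself is unprotected, and every VP must hold per-flow state.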
This idea seems feasible because of its incremental deployability. If the destination you wish to send to employs this scheme, the VP can perform the request to the RTS server transparently; otherwise, the client can simply send to the destination directly. In other words, the philosophy employed is pull rather than push: traditional networks push data onto the recipients, whereas the proposed scheme has the recipient pulling data from the sender. A similar scheme has been proposed to solve the problem of email spamming.

The other cool thing about this idea is its ability to incorporate QoS into the scheme: recipients can selectively hand out items in the hash chain to senders, depending on what level of service should be provided.
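A minimal sketch of how that might work, assuming the same hash-chain construction as in the earlier sketch; the tier sizes and names are invented:

    # Illustrative sketch, not the paper's code.
    import hashlib

    def make_chain(seed: bytes, n: int) -> list:
        # Same construction as the earlier hash-chain sketch.
        chain = [seed]
        for _ in range(n):
            chain.append(hashlib.sha256(chain[-1]).digest())
        chain.reverse()
        return chain

    def release_items(chain: list, cursor: int, k: int):
        # Hand the sender the next k tokens; a preferred sender gets a larger
        # k, and withholding further items hangs up the channel entirely.
        items = chain[cursor + 1 : cursor + 1 + k]
        return items, cursor + len(items)

    chain = make_chain(b"destination-secret", 1000)
    basic, cursor = release_items(chain, 0, 10)          # basic tier: 10 units
    premium, cursor = release_items(chain, cursor, 50)   # better tier: 50 units

The receiver's only lever is how many chain items it releases and when, which is what lets QoS ride on the same mechanism that cuts off misbehaving senders.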