Understanding and Mitigating the Impact of Web Robot and IoT Traffic on Web Systems

== Introduction & Motivation ==

The kind of information shared on the Web has shifted dramatically over the past decade and a half. Compared to Web pages that mainly hosted static material during the early 2000s, modern Web sites are full of dynamic content in the form of in-the-moment news articles, opinions, and social information. Consequently, Web robots or crawlers, software agents that automatically submit HTTP requests for Web content without any human intervention, have been steadily rising in both sophistication and volume. The latest industry reports suggest that six out of every ten Web requests originate from bots, and this proportion is expected to rise even higher with the advent of the Internet of Things. The figure below illustrates the rapid increase in the volume of robot traffic reported by scientific and commercial studies over the past decade:

[[File:trend.png|500px|center]]

Present efforts have identified how the behavioral and statistical characteristics of Web robot traffic stand in contrast to traffic generated by humans. Unfortunately, current methods for optimizing the response rate, power consumption, and other performance aspects of Web systems rely on traffic exhibiting human-like characteristics. For example, caches are a critical tool for improving system response times and minimizing energy usage, but they require traffic to display human behavioral patterns that robots do not exhibit. Because robots are now the main form of Web traffic and are likely to rise to even higher levels in the future, they threaten the performance, efficiency, and scalability of all kinds of Web systems, from single servers to server farms and large-scale clouds.

This NSF-funded project will devise essential methods for understanding the complexity of Web robot traffic and its impact on Web systems. The research activities will operationalize our existing knowledge about the features and behaviors of robot requests to devise novel classification methods and to build a robot traffic generator and a robot-resilient caching system. The results of the project stand to transform how Web systems of all kinds are designed and optimized so that the performance and energy costs associated with servicing robot requests are mitigated. The results also lay a foundation for building analytic models of the demand robots impose based on the specific architecture of a system, and for developing open-source plugins that control robot activity and their access to information in unprecedented ways.

== People ==

* Graduate Students: Nathan Rude, Ning Xie
* Undergraduate Students: Logan Rickert
* Faculty: Derek Doran (PI)

== Thrusts ==

=== Robot detection ===

Contemporary offline and real-time robot detection techniques suffer from several limitations. Offline detection approaches based on: (i) simple textual analysis can be easily obfuscated by Web robots; (ii) identifying common characteristics of robots (such as the percentage of image requests and response codes) may not generalize across server domains; and (iii) training machine learning models such as a naive Bayes net or a hidden Markov model over traffic statistics may see degraded performance as the statistical profiles of robots and humans evolve continuously. Contemporary real-time detection approaches subject active sessions to tests such as checking for the execution of embedded code or requesting hidden resources on an HTML page. These tests, however, are designed to identify specific ill-behaviors, such as robots participating in a distributed denial-of-service attack. They also require extensive modifications to Web servers and lack the ability to identify more general classes of ill-behaving robots. Furthermore, although offline and real-time approaches are synergistic and offer complementary benefits, they have been developed independently, with disparate underlying philosophies. Finally, real-time approaches are often carefully engineered systems that require costly Web server modifications that organizations may not wish to invest in. These drawbacks call for a simple, integrated approach to online and offline detection that can reliably identify Web robots despite the continuous evolution of their traffic.

We propose a novel approach to detect Web robots both offline and in real time. The approach relies on training analytical models over the request patterns of robots and humans for different types of resources, encoded as discrete-time Markov chain (DTMC) models. The approach works well in offline settings where, in a post-mortem analysis, users are interested in capturing realistic and complete samples of Web robot traffic. The figure below illustrates its superior F<sub>1</sub> measure (the harmonic mean of precision and recall) against a number of baseline multinomial predictors used as the core learning algorithm in other methods.

[[File:cmp.png|300px|center]]
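The exact model structure and feature set are described in the cited papers; purely as a rough illustration of the idea (not the project's implementation), the sketch below trains one first-order DTMC over resource-type sequences for robot sessions and one for human sessions, then labels a completed session by whichever chain assigns it the higher likelihood. The function names, the additive smoothing, and the handling of the type alphabet are assumptions of this sketch.

<syntaxhighlight lang="python">
from collections import defaultdict
import math

# Resource-type alphabet; mirrors the classes in Table 1 below.
TYPES = ["text", "web", "img", "doc", "av", "prog",
         "compressed", "malformed", "noExtention"]

def train_dtmc(sessions, alpha=1.0):
    """Estimate a first-order DTMC over resource types.

    sessions: iterable of lists of resource-type labels (one list per session).
    alpha: additive smoothing so unseen transitions keep nonzero probability.
    """
    counts = {a: defaultdict(float) for a in TYPES}
    for seq in sessions:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1.0
    P = {}
    for prev in TYPES:
        total = sum(counts[prev].values()) + alpha * len(TYPES)
        P[prev] = {nxt: (counts[prev][nxt] + alpha) / total for nxt in TYPES}
    return P

def log_likelihood(P, seq):
    """Log-probability of a session's type sequence under a trained chain."""
    return sum(math.log(P[prev][nxt]) for prev, nxt in zip(seq, seq[1:]))

def classify_session(seq, P_robot, P_human):
    """Offline: label a completed session by whichever chain explains it better."""
    return "robot" if log_likelihood(P_robot, seq) >= log_likelihood(P_human, seq) else "human"
</syntaxhighlight>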

Our proposed detection approach is adaptable to real-time settings, where a system wishes to identify whether an active session is a Web robot. This classification must be performed as early as possible so that the system can choose to take alternative action on the session (e.g., admit its requests with low priority, block access to specific files, or block the robot). Our adaptation algorithm performs well in a number of settings, and we are able to identify robot sessions within a small number (k < 20) of requests.

[[File:rtp.png|300px|center]]
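The adaptation algorithm itself is documented in the project's publications; the sketch below only illustrates one simple way such an early decision could be made with the chains trained above, by accumulating a running log-likelihood ratio and committing to a label after at most k requests. The class name, threshold, and decision rule are assumptions of the sketch, not the project's algorithm.

<syntaxhighlight lang="python">
import math

class OnlineRobotDetector:
    """Scores an active session incrementally, one request at a time."""

    def __init__(self, P_robot, P_human, k=20, threshold=0.0):
        self.P_robot, self.P_human = P_robot, P_human
        self.k, self.threshold = k, threshold
        self.prev = None
        self.llr = 0.0   # running log-likelihood ratio (robot vs. human)
        self.n = 0       # requests observed so far

    def observe(self, resource_type):
        """Feed the type of the newest request; return a label, or None if undecided."""
        if self.prev is not None:
            self.llr += (math.log(self.P_robot[self.prev][resource_type])
                         - math.log(self.P_human[self.prev][resource_type]))
        self.prev = resource_type
        self.n += 1
        if self.n >= self.k:
            return "robot" if self.llr >= self.threshold else "human"
        return None   # not enough evidence yet; keep watching the session
</syntaxhighlight>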

Ongoing work seeks to develop algorithms for identifying ''malicious'' Web robots. One promising approach, pursued with international collaborators, integrates fuzzy logic with Markov clustering algorithms to separate robots from humans and, among the robots found, to identify those that are malicious.

[[File:malflow.png|600px|center]]

Initial results are promising, but our work is ongoing:

[[File:mal.png|600px|center]]
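The fuzzy-logic component and the features fed into the clustering are part of this ongoing work and are not documented here; purely as a reference point for the clustering step, the following is a minimal sketch of plain Markov clustering (MCL) over an assumed session-similarity matrix. The similarity construction, parameter values, and cluster read-out are all assumptions.

<syntaxhighlight lang="python">
import numpy as np

def markov_cluster(similarity, expansion=2, inflation=2.0, iters=100, tol=1e-6):
    """Plain MCL over an assumed symmetric session-similarity matrix (n x n)."""
    M = np.array(similarity, dtype=float)
    np.fill_diagonal(M, 1.0)                  # self-loops stabilise the iteration
    M = M / M.sum(axis=0, keepdims=True)      # make the matrix column-stochastic
    for _ in range(iters):
        prev = M.copy()
        M = np.linalg.matrix_power(M, expansion)  # expansion: flow spreads out
        M = M ** inflation                        # inflation: strong flows get stronger
        M = M / M.sum(axis=0, keepdims=True)
        if np.abs(M - prev).max() < tol:
            break
    # Read clusters off the converged matrix: each column (session) attaches to
    # the rows (attractors) that still receive noticeable flow from it.
    return [tuple(np.nonzero(M[:, j] > 1e-3)[0]) for j in range(M.shape[1])]
</syntaxhighlight>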

=== Traffic Generation ===

=== Robot-resilient Web caching ===


Because robots and humans exhibit different traffic characteristics and behaviors, caching policies that use heuristic rules to admit and evict resources based on human behaviors are unlikely to yield high hit ratios over robot requests. Furthermore, predictive policies that learn behavioral patterns may perform poorly over present-day mixtures of robot and human traffic because they are unable to learn patterns within pure streams of requests from either type of traffic. To overcome these obstacles, an innovative dual-cache architecture is used that incorporates independent caches for robot and human traffic. Dual caches enable the use of separate caching policies compatible with human and robot traffic.
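As a structural sketch only (the project's actual architecture and policies are described in its publications), the snippet below shows how a front end might route each request to a per-traffic-class cache. The names <code>robot_cache</code>, <code>human_cache</code>, and the classifier callback are placeholders for whatever replacement policies and detector are plugged in.

<syntaxhighlight lang="python">
class DualCache:
    """Front end for the dual-cache idea: one independent cache per traffic class."""

    def __init__(self, robot_cache, human_cache, classifier):
        # robot_cache / human_cache: any objects exposing get(key) and put(key, value);
        # the replacement policies behind them may differ per traffic class.
        self.caches = {"robot": robot_cache, "human": human_cache}
        self.classifier = classifier   # maps a session id to "robot", "human", or None

    def fetch(self, session_id, resource, load_from_origin):
        label = self.classifier(session_id) or "human"   # default until the session is classified
        cache = self.caches[label]
        value = cache.get(resource)
        if value is None:
            value = load_from_origin(resource)   # cache miss: ask the origin server
            cache.put(resource, value)
        return value
</syntaxhighlight>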

Ideally, Web caches should be equipped to predict the exact resource that will be requested next by a Web robot session. This is not feasible due to the large set of resources available on a Web server. Even predicting the extension of the next resource may require a model to predict one type out of hundreds, a task that is challenging for a lightweight classifier to perform in real time. Instead, we follow previous work <ref name="robotAnalysis" /> and cluster resources into types. Predicting the next type of resource may provide a smarter alternative since the popularity of robot requests exhibits a power-law tail <ref name="detectingRobots" />, and as such the most popular resources of a predicted type are the ones likely to be requested next. The resource types used are listed in Table 1.

{| class="wikitable"
|+ Table 1: Breakdown of Resource Types
! Class !! Extensions
|-
| text || txt, xml, sty, tex, cpp, java
|-
| web || asp, jsp, cgi, php, html, htm, css, js
|-
| img || tiff, ico, raw, pgm, gif, bmp, png, jpeg, jpg
|-
| doc || xls, xlsx, doc, docx, ppt, pptx, pdf, ps, dvi
|-
| av || avi, mp3, wvm, mpg, wmv, wav
|-
| prog || exe, dll, dat, msi, jar
|-
| compressed || zip, rar, gzip, tar, gz, 7z
|-
| malformed || request strings that are not well-formed
|-
| noExtention || request for directory contents
|}
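To make the grouping concrete, here is a small helper that maps a requested path to one of the classes in Table 1. The well-formedness check and the handling of extensions not listed in the table are assumptions of this sketch, not part of the published scheme.

<syntaxhighlight lang="python">
import os

# Extension-to-class map transcribed from Table 1.
TYPE_EXTENSIONS = {
    "text": ["txt", "xml", "sty", "tex", "cpp", "java"],
    "web": ["asp", "jsp", "cgi", "php", "html", "htm", "css", "js"],
    "img": ["tiff", "ico", "raw", "pgm", "gif", "bmp", "png", "jpeg", "jpg"],
    "doc": ["xls", "xlsx", "doc", "docx", "ppt", "pptx", "pdf", "ps", "dvi"],
    "av": ["avi", "mp3", "wvm", "mpg", "wmv", "wav"],
    "prog": ["exe", "dll", "dat", "msi", "jar"],
    "compressed": ["zip", "rar", "gzip", "tar", "gz", "7z"],
}
EXT_LOOKUP = {ext: cls for cls, exts in TYPE_EXTENSIONS.items() for ext in exts}

def resource_type(request_path):
    """Map a requested path to one of the Table 1 classes."""
    if " " in request_path or not request_path.startswith("/"):
        return "malformed"                     # crude well-formedness check (assumption)
    ext = os.path.splitext(request_path)[1].lstrip(".").lower()
    if not ext:
        return "noExtention"                   # directory request, per Table 1
    return EXT_LOOKUP.get(ext)                 # extensions outside Table 1 are left unmapped
</syntaxhighlight>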

==== Predicting Cache Resources ====

To predict the type of a Web robot request, we consider algorithms that try to predict the type of the ''n''th resource requested given a sequence of the past ''n'' - 1 request types. A training record is denoted ''r''<sub>''i''</sub> = (''v''<sub>''i''</sub>, ''l''<sub>''i''</sub>), where ''v''<sub>''i''</sub> is the ordered sequence of the past ''n'' - 1 requests and ''l''<sub>''i''</sub> = ''x''<sub>''n''</sub> is the type of resource requested after the sequence ''v''<sub>''i''</sub>. The figure below shows an example with ''n'' = 10. The first record is composed of the first nine requests, and its class label is the tenth request; the second record is composed of the second through tenth requests, and its label is given by the eleventh request. The trained predictor maintains a history of the previous ''n'' - 1 requests and, based on this history, generates the predicted label for the next request.

[[File:PredData.png|center]]
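A minimal sketch of the record construction just described, assuming each session has already been reduced to its sequence of resource types; the function name and return shape are illustrative.

<syntaxhighlight lang="python">
def build_training_records(type_sequence, n=10):
    """Slide a window over one session to build records r_i = (v_i, l_i)."""
    records = []
    for i in range(len(type_sequence) - n + 1):
        window = type_sequence[i : i + n - 1]   # v_i: the past n-1 request types
        label = type_sequence[i + n - 1]        # l_i: the type of the nth request
        records.append((window, label))
    return records

# With n = 10, an 11-request session yields two records: the first labelled
# by the 10th request's type and the second by the 11th, as in the figure above.
</syntaxhighlight>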

==== Cache Design ====

* Predictive
* Cloud-based
* Replacement policies (adaptive LRU); a baseline sketch follows below
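The adaptive LRU policy itself is not detailed on this page; as a baseline only, the sketch below shows the plain LRU bookkeeping such a policy would build on (capacity counted in resources rather than bytes is an assumption). Instances of a cache like this could back the robot and human sides of the dual-cache front end sketched earlier.

<syntaxhighlight lang="python">
from collections import OrderedDict

class LRUCache:
    """Plain least-recently-used cache, used here as a baseline replacement policy."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.store = OrderedDict()   # keys ordered from least to most recently used

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)          # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used entry
</syntaxhighlight>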

== Publications ==

* N. Rude and D. Doran. “Request Type Prediction for Web Robot and Internet of Things Traffic”, Proc. of IEEE Intl. Conference on Machine Learning and Applications, Miami, FL, Dec. 2015


== Acknowledgement ==

This article is based on work supported by the National Science Foundation (NSF) under Grant No. 1464104. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

== References ==

<references>
<ref name="robotAnalysis">D. Doran, “Detection, classification, and workload analysis of web robots,” Ph.D. dissertation, University of Connecticut, 2014.</ref>
<ref name="detectingRobots">D. Doran and S. Gokhale, “Detecting Web Robots Using Resource Request Patterns,” in Proc. of Intl. Conference on Machine Learning and Applications, 2012, pp. 7–12.</ref>
</references>