Understanding and Mitigating the Impact of Web Robot and IoT Traffic on Web Systems


Introduction & Motivation

The kind of information shared on the Web has shifted dramatically over the past decade and a half. Compared to Web pages that mainly hosted static material during the early 2000s, modern Web sites are full of dynamic content in the form of in-the-moment news articles, opinions, and social information. Consequently, Web robots or crawlers, which are software agents that automatically submit HTTP requests for Web content without any human intervention, have been steadily rising in sophistication and in volume. Recent industry reports suggest that six out of every ten Web requests originate from bots, and this proportion is expected to rise even further with the advent of the Internet of Things. The figure below illustrates the rapid increase in the volume of robot traffic reported by scientific and commercial studies over the past decade:

Trend.png

Present efforts have identified how the behavioral and statistical characteristics of Web robot traffic stand in contrast to traffic generated by humans. Unfortunately, current methods for optimizing the response rate, power consumption, and other performance aspects of Web systems rely on traffic exhibiting human-like characteristics. For example, caches are a critical tool for improving system response times and minimizing energy usage, but they require traffic to display human behavioral patterns that robots do not show. Given that robots are the main form of Web traffic today and are likely to become even more prevalent in the future, they threaten the performance, efficiency, and scalability of all kinds of Web systems, from single servers to farms and large-scale clouds.

This NSF-funded project will devise essential methods for understanding the complexity of Web robot traffic and its impact on Web systems. The research activities will operationalize our existing knowledge about the features and behaviors of robot requests to devise novel classification methods and to build a robot traffic generator and a robot-resilient caching system. The results of the project stand to transform how Web systems of all kinds are designed and optimized so that the performance and energy costs associated with servicing robot requests are mitigated. The results also lay a foundation for building analytic models of the demand robots impose based on the specific architecture of a system, and for developing open-source plugins that control robot activity and their access to information in unprecedented ways.

People

Graduate Students: Nathan Rude, Ning Xie
Undergraduate Students: Logan Rickert
Faculty: Derek Doran (PI)

Thrusts

Robot detection

Contemporary offline and real-time robot detection techniques suffer from several limitations. Offline detection approaches based on (i) simple textual analysis can be easily evaded by Web robots; those based on (ii) identifying common characteristics of robots (such as the percentage of image requests and response codes) may not generalize across server domains; and those based on (iii) training machine learning models, such as a naive Bayes net or a Hidden Markov Model, over traffic statistics may see degraded performance as the statistical profiles of robots and humans evolve continuously. Contemporary real-time detection approaches subject active sessions to tests such as checking for the execution of embedded code or requesting hidden resources on an HTML page. These tests, however, are designed to identify specific ill-behaviors, such as robots participating in a distributed denial-of-service attack. They also require extensive modifications to Web servers and lack the ability to identify more general classes of ill-behaving robots. Furthermore, although offline and real-time approaches are synergistic and offer complementary benefits, they have been developed independently, with disparate underlying philosophies. Finally, real-time approaches are often carefully engineered systems that require costly Web server modifications that organizations may not wish to invest in. These drawbacks call for a simple, integrated approach to online and offline detection that can reliably identify Web robots despite the continuous evolution of their traffic.

We propose a novel approach to detect Web robots in both offline and real-time settings. The approach relies on training analytical models over the request patterns of robots and humans for different types of resources, encoded as discrete-time Markov chain (DTMC) models. The approach works well in offline settings where, in a post-mortem analysis, users are interested in capturing realistic and complete samples of Web robot traffic. The figure below illustrates its superior F_1 measure (the harmonic mean of precision and recall) compared against a number of baseline multinomial predictors used as the core learning algorithm in other methods.
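To make the idea concrete, below is a minimal sketch (in Python, and not the exact implementation used in this project) of the DTMC comparison: one transition matrix is estimated over robot sessions and one over human sessions, and a completed session is labeled by whichever model assigns its resource-type sequence the higher log-likelihood. The function names, the Laplace smoothing constant, and the session format are our own assumptions for illustration.

 from collections import defaultdict
 import math
 
 def train_dtmc(sessions, states, alpha=1.0):
     """Estimate a first-order DTMC over resource-type sequences, with
     additive (Laplace) smoothing so unseen transitions keep nonzero mass."""
     counts = {s: defaultdict(float) for s in states}
     for seq in sessions:
         for cur, nxt in zip(seq, seq[1:]):
             counts[cur][nxt] += 1.0
     trans = {}
     for s in states:
         total = sum(counts[s].values()) + alpha * len(states)
         trans[s] = {t: (counts[s][t] + alpha) / total for t in states}
     return trans
 
 def log_likelihood(seq, trans):
     """Log-probability of a session's resource-type sequence under a DTMC."""
     return sum(math.log(trans[cur][nxt]) for cur, nxt in zip(seq, seq[1:]))
 
 def classify_session(seq, robot_dtmc, human_dtmc):
     """Offline decision: label the session with the model that explains it better."""
     robot_ll = log_likelihood(seq, robot_dtmc)
     human_ll = log_likelihood(seq, human_dtmc)
     return "robot" if robot_ll >= human_ll else "human"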

Cmp.png

Our proposed detection approach is adaptable to real-time settings, where a system wishes to identify whether an active session is a Web robot. This classification must be performed as early as possible so that the system can take an alternative action on the session (e.g., admit its requests with low priority, block access to specific files, or block the robot entirely). Our adaptation algorithm performs well in a number of settings, and we are able to identify robot sessions within a small number (k < 20) of requests.
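As a hedged sketch of how the same DTMC comparison could be applied to an active session, the class below updates a log-likelihood ratio as each request arrives and commits to a label after at most k requests. The class name and the fixed-k stopping rule are illustrative assumptions, not the project's actual adaptation algorithm.

 import math
 
 class OnlineRobotDetector:
     """Incremental version of the DTMC comparison above (illustrative only):
     the log-likelihood ratio is updated per request and a label is emitted
     after at most k requests of the active session."""
 
     def __init__(self, robot_dtmc, human_dtmc, k=20):
         self.robot, self.human = robot_dtmc, human_dtmc
         self.k = k
         self.prev = None   # resource type of the previous request
         self.llr = 0.0     # log P(robot) - log P(human) accumulated so far
         self.n = 0         # number of requests observed
 
     def observe(self, resource_type):
         """Feed the type of the newest request; return a label or None."""
         if self.prev is not None:
             self.llr += math.log(self.robot[self.prev][resource_type]) \
                       - math.log(self.human[self.prev][resource_type])
         self.prev = resource_type
         self.n += 1
         if self.n >= self.k:
             return "robot" if self.llr >= 0 else "human"
         return None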

Rtp.png

Ongoing work seeks to develop algorithms for identifying malicious Web robots. One promising approach, pursued with international collaborators, integrates fuzzy logic with Markov clustering algorithms to separate bots from humans and, among the robots found, to identify those that are malicious.

Malflow.png

Initial results are promising, but our work is ongoing:

Mal.png

Traffic Generation

Robot-resilient Web caching


Because robots and humans exhibit different traffic characteristics and behaviors, caching policies that use heuristic rules to admit and evict resources based on human behaviors are unlikely to yield high hit ratios over robot requests. Furthermore, predictive policies that learn behavioral patterns may perform poorly over present-day mixtures of robot and human traffic because they cannot learn the distinct patterns present in pure streams of requests from either type of traffic (see figure below). To overcome these obstacles, an innovative dual-caching architecture is used that incorporates independent caches for robot and human traffic. Dual caches enable the use of separate caching policies and prediction algorithms that are compatible with human and robot traffic, respectively.
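As a rough illustration of the dual-cache idea (not the project's implementation), the sketch below routes each request to a robot-side or human-side cache based on the session's classification, so each side can run its own admission and eviction policy. LRU is used purely as a placeholder policy, and the class names and the fetch callback are assumptions.

 from collections import OrderedDict
 
 class LRUCache:
     """Placeholder per-class cache; the real system could plug in any
     admission/eviction policy and prefetching predictor here."""
     def __init__(self, capacity):
         self.capacity = capacity
         self.store = OrderedDict()
 
     def get(self, key):
         if key in self.store:
             self.store.move_to_end(key)        # mark as recently used
             return self.store[key]
         return None
 
     def put(self, key, value):
         self.store[key] = value
         self.store.move_to_end(key)
         if len(self.store) > self.capacity:
             self.store.popitem(last=False)     # evict least recently used
 
 class DualCache:
     """Route each request to an independent robot or human cache so that
     each side can use a policy matched to its traffic."""
     def __init__(self, robot_capacity, human_capacity, classify_session):
         self.caches = {"robot": LRUCache(robot_capacity),
                        "human": LRUCache(human_capacity)}
         self.classify_session = classify_session   # e.g., a DTMC-based detector
 
     def lookup(self, session, resource, fetch):
         """fetch: callable that retrieves the resource from the origin on a miss."""
         side = self.caches[self.classify_session(session)]
         value = side.get(resource)
         if value is None:
             value = fetch(resource)
             side.put(resource, value)
         return value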

Parallel coordinate comparison.png

Ideally, Web caches should be equipped to predict the exact resource that a Web robot session will request next. This is not feasible because of the large set of resources available on a Web server. Even predicting the extension of the next resource may require a model to choose one type out of hundreds, a task that is challenging for a lightweight classifier to perform in real time. Instead, we follow previous work <ref name="robotAnalysis" /> and cluster resources into types. Predicting the next type of resource is a smarter alternative: the popularity of robot requests exhibits a power tail <ref name="detectingRobots" />, so the most popular resources of a predicted type are the ones most likely to be requested next. The resource types used are listed in Table 1.

Table 1: Breakdown of Resource Types
Class Extensions
text txt, xml, sty, tex, cpp, java
web asp, jsp, cgi, php, html, htm, css, js
img tiff, ico, raw, pgm, gif, bmp, png, jpeg, jpg
doc xls, xlsx, doc, docx, ppt, pptx, pdf, ps, dvi
av avi, mp3, wvm, mpg, wmv, wav
prog exe, dll, dat, msi, jar
compressed zip, rar, gzip, tar, gz, 7z
malformed request strings that are not well-formed
noExtension requests for directory contents (no file extension)
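One way the mapping in Table 1 could be applied in code is sketched below; the extension sets are copied from the table, while the function name, the URI parsing details, and the handling of unknown extensions are our own simplifications.

 import os
 from urllib.parse import urlparse
 
 # Extension sets copied from Table 1.
 EXTENSION_CLASSES = {
     "text": {"txt", "xml", "sty", "tex", "cpp", "java"},
     "web": {"asp", "jsp", "cgi", "php", "html", "htm", "css", "js"},
     "img": {"tiff", "ico", "raw", "pgm", "gif", "bmp", "png", "jpeg", "jpg"},
     "doc": {"xls", "xlsx", "doc", "docx", "ppt", "pptx", "pdf", "ps", "dvi"},
     "av": {"avi", "mp3", "wvm", "mpg", "wmv", "wav"},
     "prog": {"exe", "dll", "dat", "msi", "jar"},
     "compressed": {"zip", "rar", "gzip", "tar", "gz", "7z"},
 }
 
 def resource_type(request_uri):
     """Map a requested URI to one of the resource classes in Table 1."""
     try:
         path = urlparse(request_uri).path
     except ValueError:
         return "malformed"              # request string is not well-formed
     ext = os.path.splitext(path)[1].lstrip(".").lower()
     if not ext:
         return "noExtension"            # e.g., a request for directory contents
     for cls, extensions in EXTENSION_CLASSES.items():
         if ext in extensions:
             return cls
     # Table 1 defines no catch-all class; treating unknown extensions as
     # malformed is a simplification made only for this sketch.
     return "malformed"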

Predicting Cache Resources

Request Features

To predict the type of a Web robot request, we consider algorithms that try to predict the type of the nth resource requested given a sequence of the past n - 1 request types. A training record is denoted r_i = (v_i, l_i), where v_i is the ordered sequence of the past n - 1 request types and l_i = x_n is the type of resource requested after the sequence v_i. The following figure shows an example with n = 10. The first record is composed of the first nine requests, and its class label is the tenth request; the second record is composed of the second request through the tenth request, and its label is given by the eleventh request. The trained predictor maintains a history of the previous n - 1 requests and, based on this history, generates the predicted label for the next request.
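A minimal sketch of how such records could be built from a session's sequence of request types is shown below, assuming n = 10 as in the figure; the function name and data layout are illustrative.

 def make_records(type_sequence, n=10):
     """Build training records r_i = (v_i, l_i): v_i is the ordered window of
     the previous n-1 request types and l_i is the type that follows it."""
     records = []
     for i in range(len(type_sequence) - n + 1):
         window = tuple(type_sequence[i:i + n - 1])   # the past n-1 requests
         label = type_sequence[i + n - 1]             # the nth request's type
         records.append((window, label))
     return records
 
 # With n = 10: the first record's window covers requests 1-9 and is labeled
 # by request 10; the second covers requests 2-10 and is labeled by request 11.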

PredData.png
Elman Neural Network


Prediction Algorithms

We want to be able to predict the behavior of robot and human traffic. To do this, we consider algorithms that utilize the resource types requested in a Web session and the order in which the requests occur. Specifically, we investigate the following algorithms:

  • Resource order
    • Discrete-Time Markov Chain (DTMC)
  • Resource type (features)
    • Multinomial Logistic Regression (MLR)
    • Decision Trees (J48)
    • Naive Bayes (NB)
    • Support Vector Machines (SVM)
    • Neural Networks
  • Resource order and type
    • Elman Neural Network (ENN): combines both resource order and type for predictions

The graph above depicts the design of an ENN. Unlike a traditional feed-forward neural network, the ENN has a recurrent connection from its hidden layer to a context layer, which saves the state of the current hidden layer. This state is fed back into the hidden layer on the next iteration to capture the temporal aspects of the data.
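A minimal numpy sketch of the Elman recurrence described above is given below: the context layer stores the previous hidden state and is fed back alongside the new input. The layer sizes, activations, and omission of any training procedure are simplifications; this is not the network configuration used in our experiments.

 import numpy as np
 
 class ElmanStep:
     """One forward step of an Elman network: the context layer holds the
     previous hidden state and is fed back together with the new input."""
     def __init__(self, n_in, n_hidden, n_out, seed=0):
         rng = np.random.default_rng(seed)
         self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
         self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
         self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
         self.context = np.zeros(n_hidden)            # saved hidden state
 
     def forward(self, x):
         """x: one-hot (or feature) vector for the current request type."""
         h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
         self.context = h                             # state reused at the next step
         scores = self.W_out @ h
         probs = np.exp(scores - scores.max())
         return probs / probs.sum()                   # distribution over the next type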

The performance of the previous algorithms can be seen in the graph below.

Results.png

This graph shows that the feature-based algorithms outperformed both the DTMC and the ENN on the WSU data; however, the feature-based algorithms had performance comparable to the DTMC on the UConn data, and the ENN outperformed all other algorithms on that dataset.

We examine the rationale behind these results by first determining the impact that request order has on prediction. The two figures below show the Markov models for WSU and UConn, respectively. These models show the probability that the next resource will be of a given type, given the current resource type.


WSU markov.png
UConn markov.png


For the WSU model we observe a prominent diagonal. This implies that the DTMC learns that the next resource type to predict should be the same as the current resource type. However, this is not an accurate representation of Web traffic on a server: Web traffic is more dynamic and cannot be reduced to sequential requests for the same resource type. The UConn model, with its higher complexity, offers more information that an order-based algorithm can use for predictions. We see the effect of this when we compare the performance of the feature-based algorithms to the DTMC. For WSU, the resource order carried little predictive power and thus the DTMC's performance suffered. With the UConn data, however, the DTMC was able to learn sufficient patterns to give performance comparable to the feature-based algorithms.
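One simple way to quantify the "prominent diagonal" observation, assuming a transition matrix stored as a nested dictionary like the DTMC sketch earlier, is to average the self-transition probabilities; the metric below is our own shorthand, not one used in the study.

 def diagonal_mass(trans):
     """Average probability that the next request repeats the current resource
     type. Values near 1 (as in the WSU model) suggest that request order adds
     little information beyond the current type; lower values (as in the UConn
     model) leave more room for an order-based predictor such as the DTMC."""
     return sum(trans[s][s] for s in trans) / len(trans)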

The Elman Neural Network, which utilizes both resource types and their order, appears to be a robust solution for predicting the next type of resource to be requested. We observed that for certain datasets the request order does not assist with predictions, and the performance of the ENN reflected this: for the WSU data, the ENN performed slightly worse than the feature-based algorithms. However, when the request order did carry predictive power, the ENN exploited it and achieved significantly better performance than the other algorithms.

Thus the ENN is a suitable algorithm for predicting the next resource type to be precached. By building two models, one for robots and one for humans, we can leverage the benefits of a dual-caching scheme by having two separate prediction models that characterize the behavior of humans and robots independently.

Publications

  • N. Rude and D. Doran. “Request Type Prediction for Web Robot and Internet of Things Traffic”, Proc. of IEEE Intl. Conference on Machine Learning and Applications, Miami, FL, Dec. 2015.
  • D. Doran and S. Gokhale. "An Integrated Method for Offline and Real-time Web Robot Detection", Under Review.
  • M. Zabihi, R. Sadeghi, and D. Doran. "Unsupervised Detection of Benign and Malicious Web Robots and Crawlers", In Preparation.


Acknowledgement

This article is based on work supported by the National Science Foundation (NSF) under Grant No. 1464104. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

References

<ref name="robotAnalysis"> D. Doran, “Detection, classification, and workload analysis of web robots,” Ph.D. dissertation, University of Connecticut, 2014.</ref> <ref name="detectingRobots"> D. Doran and S. Gokhale, “Detecting Web Robots Using Resource Request Patterns,” in Proc. of Intl. Conference on Machine Learning and Applications, 2012, pp. 7–12.</ref> <references/>