Web servers, browsers, and proxies communicate by exchanging HTTP messages over the network, following a request-response pattern.

Web servers enable HTTP access to a collection of documents and other information, organized into a tree structure much like a computer file system.

Figure 1 - Request-Response Schema

A web server receives and interprets HTTP requests from a client, generally a browser. It examines each request and maps the resource identifier either to a file or to a program that produces the requested data. Finally, the server sends the response back to the client.
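For example, a minimal exchange for a static page could look like the following (the host name and resource path are purely illustrative):

GET /docs/index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html

<html> ...the requested document... </html>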

The behaviour of a single-tasking HTTP server, described with the Petri net1 formalism, is shown in Fig. 2.

1 A Petri net consists of places, transitions, and directed arcs. Arcs run from a place to a transition or vice versa, never between two places or between two transitions. The places from which an arc runs to a transition are called the input places of the transition; the places to which arcs run from a transition are called its output places. More information: http://en.wikipedia.org/wiki/Petri_net.

Figure 2 - Behaviour of a single-tasking HTTP server.

WEB SERVER REFERENCE ARCHITECTURE

In this section we present the reference architecture for the web server domain. It defines the fundamental components of the domain and the relations between them.

The reference architecture provides a common nomenclature across all software systems in the same domain, which makes it possible:

  1. to describe the architecture of any web server uniformly: a particular web server can be understood by going first through the conceptual architecture and then through its concrete architecture, which may add extra features driven by its design goals (for example, not all web servers can serve Java Servlets);

  2. to compare different architectures at a common level of abstraction.

The proposed web server reference architecture is shown in Fig. 3. As you can see, it specifies the data flow and the dependencies between the seven subsystems. These major subsystems are divided into two layers: a server layer and a support layer.

Figure 3 - Web Server reference architecture.

The Server Layer contains five subsystems that encapsulate the operating system and provide the requested resources to the browser using the functionality of the local operating system. We now describe each subsystem of this layer.

  • The Reception subsystem implements the following functionalities:

  1. It waits for the HTTP requests that arrive from the user agent through the network. It also contains the logic and the data structures needed to handle multiple browser requests simultaneously.

  2. It then parses each request and, after building an internal representation of it, passes it to the next subsystem.

  3. Finally, it sends the response back, formatted according to the capabilities of the browser.

  • The Request Analyzer subsystem operates on the internal request received from the Reception subsystem. It translates the location of the resource from a network location to a local file name. It can also correct user typing errors: for example, if the user typed indAx.html, the Request Analyzer automatically corrects it to index.html.

  • The Access Control subsystem authenticates the browsers, requesting a username and password, and authorizes their access to the requested resources.

  • The Resource Handler subsystem determines the type of resource requested by the browser: either a static file that can be sent back directly to the user, or a program that must be executed to generate the response (a small C sketch of the Request Analyzer mapping and of this decision follows the list).

  • The Transaction Log subsystem records all the requests and their results.
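To make the Request Analyzer and Resource Handler steps more concrete, here is a small C sketch that maps the resource part of a URL to a local file name under a document root and then decides whether the resource is static or must be executed. It is only a sketch under simple assumptions: the document root, the default document and the rule "a .cgi extension means a program" are invented for the example, not taken from any real server.

#include <stdio.h>
#include <string.h>

#define DOC_ROOT "/var/www/html"   /* illustrative document root */

/* Map the resource part of a URL to a local file name (Request Analyzer). */
static void map_to_local_path(const char *url_path, char *out, size_t outlen)
{
    /* An empty or "/" path is mapped to the default document. */
    if (url_path[0] == '\0' || strcmp(url_path, "/") == 0)
        url_path = "/index.html";
    snprintf(out, outlen, "%s%s", DOC_ROOT, url_path);
}

/* Decide whether the resource is a program to execute (Resource Handler). */
static int is_dynamic_resource(const char *local_path)
{
    const char *ext = strrchr(local_path, '.');
    return ext != NULL && strcmp(ext, ".cgi") == 0;   /* e.g. a CGI script */
}

int main(void)
{
    char local[512];
    map_to_local_path("/docs/index.html", local, sizeof(local));
    printf("%s -> %s (%s)\n", "/docs/index.html", local,
           is_dynamic_resource(local) ? "execute" : "send file");
    return 0;
}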

The support layer contains two subsystems that provide services used by the upper server layer.

  • The Utility subsystem contains functions that are used by all other subsystems.

  • The Operating System Abstraction Layer (OSAL) encapsulates the operating-system-specific functionality, to facilitate porting the web server to different platforms. This layer does not exist in a server that is designed to run on only one platform.
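As a small illustration of what the OSAL typically encapsulates, the following C fragment hides the difference between closing a socket on POSIX systems and on Windows; the type and function names (osal_socket_t, osal_close_socket) are invented for this sketch.

/* Illustrative OSAL wrapper: hides the platform-specific socket-close call. */
#ifdef _WIN32
#include <winsock2.h>
typedef SOCKET osal_socket_t;
static void osal_close_socket(osal_socket_t s) { closesocket(s); }
#else
#include <unistd.h>
typedef int osal_socket_t;
static void osal_close_socket(osal_socket_t s) { close(s); }
#endif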

Two other aspects characterize a web server architecture and come into play during its operation:

  • The processing model: it describes the type of process or threading model used to support the web server's operation;

  • The pool-size behaviour: it specifies how the size of the pool of processes or threads varies over time as a function of the workload.

The main processing models are:

    • Process-based servers: the web server uses multiple single-threaded processes, each of which handles one HTTP request at a time.

Figure 4 - Web Server: Process-Based model.

    • Thread-based servers: the web server consists of a single multi-threaded process, and each thread handles one request at a time (a minimal sketch of this model is given after the figures).

Figure 5 - Web Server: Thread-Based model.

    • Hybrid model servers: the web server consists of multiple multi-threaded processes, with each thread of any process handling one request at a time.

Figure 6 - Web Server: multiple multi-threaded processes.
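To make the thread-based model more concrete, here is a minimal POSIX-threads sketch in C: a fixed pool of worker threads takes connection descriptors from a shared queue, and each thread handles one request at a time. It is only a sketch of the idea, not the code of any real server; the pool size, queue capacity and helper names are invented for the example.

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4      /* fixed number of worker threads (illustrative) */
#define QUEUE_CAP 128

static int queue[QUEUE_CAP];            /* pending connection descriptors */
static int q_head = 0, q_tail = 0, q_len = 0;
static pthread_mutex_t q_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_ready = PTHREAD_COND_INITIALIZER;

/* Called by the reception code when a new connection arrives. */
void enqueue_connection(int fd)
{
    pthread_mutex_lock(&q_lock);
    if (q_len < QUEUE_CAP) {
        queue[q_tail] = fd;
        q_tail = (q_tail + 1) % QUEUE_CAP;
        q_len++;
        pthread_cond_signal(&q_ready);
    }
    pthread_mutex_unlock(&q_lock);
}

/* Placeholder for parsing the request on fd and sending the response. */
static void handle_request(int fd)
{
    printf("worker %lu handling connection %d\n",
           (unsigned long)pthread_self(), fd);
}

/* Each worker thread serves one request at a time. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_len == 0)
            pthread_cond_wait(&q_ready, &q_lock);
        int fd = queue[q_head];
        q_head = (q_head + 1) % QUEUE_CAP;
        q_len--;
        pthread_mutex_unlock(&q_lock);
        handle_request(fd);
    }
    return NULL;
}

int main(void)
{
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, worker, NULL);

    /* The Reception subsystem would accept() connections here and
       call enqueue_connection(fd) for each accepted descriptor. */

    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}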

For the pool-size behaviour we have two approaches:

  1. Static approach: the web server creates a fixed number of processes or threads at start-up time. If the number of requests exceeds the number of threads/processes, each extra request waits in a queue until a thread/process becomes free to serve it.

  2. Dynamic approach: the web server increases or decreases the pool of workers (processes or threads) as a function of the number of requests. This behaviour reduces the queue size and the waiting time of each request.
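As an example of how the pool-size behaviour is configured in practice, the Apache HTTP Server's prefork MPM manages a dynamic pool of single-threaded processes through directives such as the following (the values shown are only illustrative):

StartServers          5      # processes created at start-up
MinSpareServers       5      # grow the pool when fewer processes are idle
MaxSpareServers      10      # shrink the pool when more processes are idle
MaxRequestWorkers   150      # upper bound on simultaneous requests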


Reception Subsystem: management of the request and response queues

The Reception Subsystem maintains a queue of requests and responses to carry out its job within the context of a single, continuously open connection. A series of requests may be transmitted on the connection, and the responses must be sent back in the order of request arrival (FIFO). One common solution is for the server to maintain both an input and an output queue of requests. When a request is submitted for processing, it is removed from the input queue and inserted into the output queue. Once processing is complete, the request is marked for release, but it remains in the output queue while at least one of its predecessors is still there. When the response is sent back to the browser, the related request is released. The following snippet, written in a C-like pseudocode, shows how the request and response queues are managed.

// DEFINITIONS
// UserRequest:  represents the user request
// WebResponse:  represents the related web response

// DATA STRUCTURES
RequestQueueElement  = (UserRequest, Marker);
ResponseQueueElement = (WebResponse, RelatedUserRequest);

// Requests that have not been processed yet
Queue RequestQueueElement  RequestInputQueue;

// Requests that are being processed or have already been processed
Queue RequestQueueElement  RequestOutputQueue;

// Responses related to user requests, sent back in FIFO order
Queue ResponseQueueElement ResponseOutputQueue;

// ALGORITHM
While (true) {

    If <User Request arrived> {
        Enqueue(UserRequest, RequestInputQueue);
    };

    If <User Request can be processed> {
        UserRequestInProcessing = RemoveFrom(UserRequest, RequestInputQueue);
        Enqueue(UserRequestInProcessing, RequestOutputQueue);
        SubmitForProcessing(UserRequestInProcessing);
    };

    If <User Request has already been processed> {
        MarkForRelease(UserRequest, RequestOutputQueue);
        Enqueue(WebResponse, ResponseOutputQueue);
    };

    If <Length(ResponseOutputQueue) > 0> {
        WebResponse = Dequeue(ResponseOutputQueue);
        RemoveFrom(WebResponse.RelatedUserRequest, RequestOutputQueue);
        SendResponse(WebResponse);
    };
}
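The essential point of the output queue is that a finished response may leave only when every earlier request has also been answered. The following self-contained C sketch models that rule with a simple ring buffer: entries are marked as done in any order, but responses are flushed strictly from the head of the queue. All names and sizes are invented for the illustration; in a real server the printf would be replaced by the Reception subsystem writing the response on the connection.

#include <stdio.h>
#include <stdbool.h>

#define MAX_PENDING 64

/* One slot per request that has been submitted for processing. */
struct pending {
    int  request_id;   /* identifies the request on the connection */
    bool done;         /* response ready, but possibly not yet sendable */
};

static struct pending out_queue[MAX_PENDING];
static int head = 0, tail = 0;          /* FIFO order of arrival */

/* Request submitted for processing: append it to the output queue. */
void submit(int request_id)
{
    out_queue[tail % MAX_PENDING] = (struct pending){ request_id, false };
    tail++;
}

/* Processing finished: mark the entry, do not send yet. */
void mark_done(int request_id)
{
    for (int i = head; i < tail; i++)
        if (out_queue[i % MAX_PENDING].request_id == request_id)
            out_queue[i % MAX_PENDING].done = true;
}

/* Send every response whose predecessors have all been sent already. */
void flush_responses(void)
{
    while (head < tail && out_queue[head % MAX_PENDING].done) {
        printf("sending response for request %d\n",
               out_queue[head % MAX_PENDING].request_id);
        head++;                         /* the request is released */
    }
}

int main(void)
{
    submit(1); submit(2); submit(3);
    mark_done(2);        /* finished out of order: nothing can be sent yet */
    flush_responses();
    mark_done(1);        /* now 1 and 2 can be sent, in arrival order */
    flush_responses();
    mark_done(3);
    flush_responses();
    return 0;
}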



Strong interests in cyberspace produce a lot of highly sophisticated malicious software.

CYBERSPACE INHABITANTS
Entering cyberspace means probably becoming the target of thieves, hackers, activists, terrorists, nation-state cyber warriors and foreign intelligence services. In this scenario, the strong competition in cybercrime and cyberwarfare continuously increases both the proliferation of malicious programs and their level of sophistication.

 

MALWARE PROLIFERATION

According to data published by the major antivirus companies, an average of about 400,000 new malware samples appears every day.

Chart - New malware samples per day.

These figures may be somewhat inflated by the antivirus companies, but even if we accept only 2% of 400,000 as genuine, that still means 8,000 new strains of malware appear in the wild every day.

Today it is impossible to live without digital technology, which is the base of the digital society in which governments, institutions, industries and individuals operate and interact every day.

So, to face high-profile data breaches and the ever-increasing cyber threats coming from that same digital world, huge investments in information security are made around the world (according to Gartner, spending in 2015 was above $75.4 billion).

But security seems an illusion after hearing about the results of research carried out at Imperva, a data-security research firm in California.
A group of researchers infected a computer with 82 new malware samples and ran 40 threat-detection engines from the most important antivirus companies against them.
The result was that only 5 percent of the samples were detected. This means that, even though antivirus software is almost useless against new malware, it is still necessary for protecting us from already known threats by raising the overall level of security and protection.

 

EVERYONE COULD BE A TARGET

In the leak involving Twitter on June 8th, 2016, user accounts were compromised, but not on Twitter's servers. This means that 32,888,300 users were hacked individually by a Russian hacker. This is astonishing, and it underlines how easy it is to guess users' passwords and to infect users' computers in order to steal their credentials.
The password frequencies in the following chart show that users do not pay much attention to the passwords they use. The chart considers only the 25 most used passwords, and the statistics are computed on 20,210,641 user accounts released in several leaks [04].
Users probably think: why should I be hacked? I am an ordinary person, who cares about me? But what matters to a bad guy is profit, and a huge quantity of accounts to sell on the dark market is a good reason to steal every Twitter user's credentials. In fact, the volume of accounts is the key factor that attracts buyers.

Chart - Most used passwords.

Even though chameleon attacks or werewolf attacks can easily bypass antivirus defenses, it is important to pay more attention to our access keys in order to prevent the leakage of such a huge quantity of user accounts, because, I think, most of the Twitter accounts were simply guessed by the bad guys.

 

 

MALICIOUS SOFTWARE ANALYSIS

Malicious Software is characterized by four components:

  • propagation methods,
  • exploits,
  • payloads,
  • level of sophistication.

 

Propagation is the means of transporting malicious code from its origin to the target. Propagation methods depend on scale and specificity. The target may consist of machines connected to the Internet (large scale), for example when someone tries to build a botnet, or it may be a small local network (small scale), for example when a particular company is attacked for some reason.
Specificity refers to the constraints placed on the malicious code. Constraints based on technical limitations may be a particular operating system or software version; constraints based on personal information may be account credentials, details about co-workers, or the presence of certain filenames on the victim's machine.
The level of propagation is directly proportional to the probability of detection and to the limitations placed on the defensive response.

Exploits enable the operation of the propagation method and of the payload.
The severity of an exploit is indicated by the CVSS (Common Vulnerability Scoring System) score assigned to the vulnerability it targets [02].
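For example, under CVSS v3.1 a vulnerability that is exploitable remotely over the network, requires no privileges or user interaction, and fully compromises confidentiality, integrity and availability is described by the following vector and receives a base score of 9.8 (Critical):

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H  ->  base score 9.8 (Critical)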

The payload is the code written to manipulate system resources and create some effect on a computer system.
Today there is a clear increase in the level of payload customization: there are payloads for a web server, for a desktop computer, for a Domain Controller, for a smartphone, and so on. Each payload is tailored to a specific target in order to stay very small and guarantee the maximum likelihood of success.

The level of sophistication of a piece of malicious code can tell us something useful. Malicious-software sophistication analysis is an approach that can be used to figure out who is behind the code: individuals, groups, organizations or states.
On one side we have generic malware created by individuals or small groups, who generally make use of third-party exploit kits such as the Blackhole Exploit Kit [05]; on the other side we have organizations or states with greater resources, who can develop innovative attack methods and new exploits, such as Duqu 2.0 [06], described as the most sophisticated malware ever seen.

 

The balance of power between attacker and defender is strongly asymmetric: the defender needs huge quantities of resources, not least because he should operate proactively to fight these kinds of threats.
Studying malicious code is important for understanding how attackers act, in order to detect attacks in progress and to prepare a better defensive response.

 

REFERENCES

[01] Trey Herr, Eric Armbrust, Milware: Identification and Implications of State Authored Malicious Software, The George Washington University, 2015.

[02] https://www.first.org/: CVSS, the Common Vulnerability Scoring System.

[03] Marc Goodman, Future Crimes: Inside the Digital Underground and the Battle for Our Connected World, Anchor Books, 2015.

[04] https://www.leakedsource.com/: leaked databases containing information of large public interest.

[05] https://en.wikipedia.org/wiki/Blackhole_exploit_kit: the Blackhole exploit kit was, as of 2012, the most prevalent web threat.

[06] https://en.wikipedia.org/wiki/Duqu_2.0: Kaspersky discovered the malware, and Symantec confirmed its findings.


Combining complex networks and data mining: Why and how

The increasing power of computer technology does not dispense with the need to extract meaningful information out of data sets of ever growing size, and indeed typically exacerbates the complexity of this task. To tackle this general problem, two methods have emerged, at chronologically different times, that are now commonly used in the scientific community: data mining and complex network theory. Not only do complex network analysis and data mining share the same general goal, that of extracting information from complex systems to ultimately create a new compact quantifiable representation, but they also often address similar problems too. In the face of that, a surprisingly low number of researchers turn out to resort to both methodologies. One may then be tempted to conclude that these two fields are either largely redundant or totally antithetic. The starting point of this review is that this state of affairs should be put down to contingent rather than conceptual differences, and that these two fields can in fact advantageously be used in a synergistic manner. An overview of both fields is first provided, some fundamental concepts of which are illustrated. A variety of contexts in which complex network theory and data mining have been used in a synergistic manner are then presented. Contexts in which the appropriate integration of complex network metrics can lead to improved classification rates with respect to classical data mining algorithms and, conversely, contexts in which data mining can be used to tackle important issues in complex network theory applications are illustrated. Finally, ways to achieve a tighter integration between complex networks and data mining, and open lines of research are discussed.

Keywords
Complex networks; Data mining; Big Data

Physics Reports