The PHP Server Engine Architecture

PHP is a recursive acronym that stands for “PHP Hypertext Preprocessor” (though it originally stood for “Personal Home Page” in 1995). It allows embedding code within HTML templates, using a language similar to Perl and Unix shells.

It is parsed and executed by the Zend Engine on the server side.

Figure 1 - PHP Web Server Architecture

Zend refers to the language engine, PHP's core. “The Zend Engine is an open-source, opcode-based scripting engine (a virtual machine), commonly known for the important role it plays in the web automation language PHP. It was originally developed by Andi Gutmans and Zeev Suraski while they were students at the Technion - Israel Institute of Technology. They later founded a company called Zend Technologies in Ramat Gan, Israel. The name Zend is a combination of their forenames, Zeev and Andi.”1

Now we are going to explain the most important modules of the PHP web server shown in Figure 1.

External modules can be loaded from disk at script runtime using the function “bool dl(string $library)”. When the script terminates, the external module is discarded from memory.
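A minimal sketch of runtime loading (the extension name gd is purely illustrative, and dl() is only available in CLI-style SAPIs):

<?php
// Load an extension at runtime only if it is not already compiled in.
if (!extension_loaded('gd') && function_exists('dl')) {
    dl('gd.so');   // on Windows the file would be 'php_gd.dll'
}
// The extension's functions are usable from here until the script terminates.
var_dump(function_exists('imagecreate'));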

Built-in modules are compiled directly into PHP and carried around with every PHP process; their functionality is available to every script that's being run.

Memory Management: Zend has full control over all memory allocations: it determines whether a block is still in use, automatically frees unused blocks and blocks with lost references, and thus prevents memory leaks.
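The effect can be observed from a script itself; a small illustration (the exact byte counts depend on the PHP build):

<?php
// Allocate roughly 1 MB, then drop the last reference to it.
$block = str_repeat('x', 1 << 20);
$before = memory_get_usage();
unset($block);   // reference lost: Zend frees the block automatically
printf("%d bytes reclaimed\n", $before - memory_get_usage());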

Zend Executor: the Zend Engine compiles PHP code into an intermediate code, the opcodes, which the Zend Executor then runs by translating each instruction into machine-level operations.

 

How the PHP Server Engine works

A PHP script is executed by walking it through the following steps:

  1. The script is run through a lexical analyzer, which converts the human-readable code into tokens. These tokens are then passed to the parser.

  2. The parser parses, manipulates, and optimizes the stream of tokens passed to it from the lexical analyzer, and generates an intermediate code called opcodes2 that runs on the Zend Engine. These two steps, which make up the compilation phase, are provided by the Run-Time Compiler module shown in Figure 1.

  3. After the intermediate code is generated, it is passed to the Executor. The executor steps through the op array, calling a handler function for each opcode, and the HTML output is generated.

  4. The generated HTML is sent to the client; if the web browser supports compressed web pages, the HTML is encoded using gzip or deflate before sending (see the sketch after this list).

  5. The opcodes are flushed from memory after execution.
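A minimal sketch of the compression in step 4, using PHP's standard output-buffering callback (ob_gzhandler inspects the request's Accept-Encoding header and picks gzip, deflate, or no encoding accordingly):

<?php
// Buffer all output and let ob_gzhandler negotiate the encoding
// with the client before the generated HTML is sent.
ob_start('ob_gzhandler');
echo "<html><body><p>Party List</p></body></html>";
ob_end_flush();   // sends the (possibly compressed) page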

Here is the modern workflow, which uses an opcode cache to speed up PHP processing:

2 This intermediate code is an ordered array of instructions (known as opcodes, short for operation codes) that are basically three-address code: two operands for the inputs, a third operand for the result, plus the handler that will process the operands. The operands are either constants or an offset to a temporary variable, which is effectively a register in the Zend virtual machine.

Figure 2 - Zend Processing
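A minimal sketch of this cached flow, assuming the bundled OPcache extension is enabled (opcache.enable_cli=1 when run from the command line); the script path is illustrative:

<?php
$script = __DIR__ . '/party_list.php';   // hypothetical script

if (!opcache_is_script_cached($script)) {
    opcache_compile_file($script);       // compile once, keep the opcodes in shared memory
}
include $script;                         // later executions skip the compile phase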

 

A simple example

Let’s consider the following PHP document (.php) in order to understand what happens:

<html>
<head>
<title>Party List</title>
</head>
<body>
<?php
$aGuest[0] = "Irma";
$aGuest[1] = "Salvatore";
$aGuest[2] = "Caterina";
$aGuest[3] = "Simone";
?>
<p> The list of participants to the event is: </p>
<ol>
<?php
foreach ($aGuest as $guest) {
    echo "<li>" . $guest . "</li>";
}
?>
</ol>
</body>
</html>

The input PHP document

The .php file is pre-processed by the server, which treats the text embedded within “<?php ?>” blocks as PHP syntax and the text outside these blocks as arguments passed to “print” statements. The output of the pre-processing phase is the following file.

print "<html>";
print "<head>";
print "<title>Party List</title>";
print "</head>";
print "<body>";
$aGuest[0] = "Irma";
$aGuest[1] = "Salvatore";
$aGuest[2] = "Caterina";
$aGuest[3] = "Simone";
print "<p> The list of participants to the event is: </p>";
print "<ol>";
foreach ($aGuest as $guest) {
    print "<li>" . $guest . "</li>";
}
print "</ol>";
print "</body>";
print "</html>";

PHP document after pre-processing

The file above is then processed by the PHP processor (the Zend Engine), generating the following HTML document, which is sent back to the user agent:

<html>
<head>
<title>Party List</title>
</head>
<body>
<p> The list of participants to the event is: </p>
<ol>
<li>Irma</li>
<li>Salvatore</li>
<li>Caterina</li>
<li>Simone</li>
</ol>
</body>
</html>

PHP document after Zend Engine Processing

 

Web servers, browsers, and proxies communicate by exchanging HTTP messages over a network, following the request-response paradigm.

Web servers enable HTTP access to a collection of documents and other information organized into a tree structure, much like a computer file system.

Figure 1 - Request-Response Schema

A web server receives and interprets HTTP requests from a client, generally a browser. It then examines the request and either maps the resource identifier to a file or forwards the request to a program that produces the requested data. Finally, the server sends the response back to the client.

The behaviour of a single-tasking HTTP server, expressed in the Petri net1 formalism, is shown in Fig. 2.

1 A Petri net consists of places, transitions, and directed arcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the input places of the transition; the places to which arcs run from a transition are called its output places. More information can be found at http://en.wikipedia.org/wiki/Petri_net.

Figure 2 – Behavior of a single-tasking HTTP server.

WEB SERVER REFERENCE ARCHITECTURE

In this section we are going to show the reference architecture for the web server domain. It defines the fundamental components of the domain and the relations between these components.

The reference architecture provides a common nomenclature across all software systems in the same domain, which allows:

  1. to describe the architecture of a web server uniformly, and to understand a particular web server by passing first through its conceptual architecture and then through its concrete architecture, which may have extra features based on its design goals. For example, not all web servers can serve Java Servlets;

  2. to compare different architectures at a common level of abstraction.

The proposed web server reference architecture is shown in Fig. 3. As you can see, it specifies the data flow and the dependencies between the seven subsystems. These major subsystems are divided between two layers: a server layer and a support layer.

Figure 3 - Web Server reference architecture.

The Server Layer contains five subsystems that encapsulate the operating system and provide the requested resources to the browser using the functionality of the local operating system. We will now describe each subsystem of the layer.

  • The Reception subsystem implements the following functionalities:

  1. It waits for the HTTP requests from the user agent that arrive through the network. It also contains the logic and the data structures needed to handle multiple browser requests simultaneously.

  2. Then it parses each request and, after building an internal representation of it, sends it to the next subsystem.

  3. Finally, it sends back the request’s response according to the capabilities of the browser.

  • The Request Analyzer subsystem operates on the internal request received from the Reception subsystem. This subsystem translates the location of the resource from a network location to a local file name (a small sketch of this mapping appears after the Server Layer subsystem list). It also corrects user typing errors: for example, if the user typed indAx.html, the Request Analyzer automatically corrects it to index.html.

  • The Access Control subsystem authenticates the browsers, requesting a username and password, and authorizes their access to the requested resources.

  • The Resource Handler subsystem determines the type of resource requested by the browser: either a static file that can be sent back directly to the user, or a program that must be executed to generate the response.

  • The Transaction Log subsystem records all the requests and their results.
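A minimal sketch of the Request Analyzer's translation step, written in PHP for continuity with the earlier examples (the document root is a hypothetical value):

<?php
// Translate the request's network path into a local file name.
$docRoot = '/var/www/html';   // hypothetical document root
$path = parse_url($_SERVER['REQUEST_URI'] ?? '/index.html', PHP_URL_PATH);
$file = realpath($docRoot . $path);

// Reject paths that do not resolve inside the document root.
if ($file === false || strncmp($file, $docRoot, strlen($docRoot)) !== 0) {
    http_response_code(404);
    exit;
}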

The support layer contains two subsystems that provide services used by the upper server layer.

  • The Utility subsystem contains functions that are used by all other subsystems.

  • The Operating System Abstraction Layer (OSAL) encapsulates the operating system specific functionality to facilitate the porting of the web server to different platforms. This layer will not exist in a server that is designed to run on only one platform.

There are two other aspects that characterize a web server's architecture and come into play during its operation:

  • The processing model: it describes the type of process or threading model used to support the web server's operation;

  • The pool-size behaviour: it specifies how the size of the pool of processes or threads varies over time as a function of workload.

The main processing models are:

    • Process-based servers: the web server uses multiple single-threaded processes, each of which handles one HTTP request at a time.

Figure 4 - Web Server: Process-Based model.
    • Thread-based servers: the web server consists of a single multi-threaded process. Each thread handles one request at a time.

Figure 5 - Web Server: Thread-Based model.
    • Hybrid model servers: the web server consists of multiple multithreaded processes, with each thread of any process handling one request at a time.

Figure 6 - Web Server: multiple multi-threaded processes.

For the pool size behaviour we have two approaches:

  1. Static approach: the web server creates a fixed number of processes or threads at start-up time. If the number of requests exceeds the number of threads/processes, the requests wait in the queue until a thread/process becomes free to serve them (a sketch of this approach follows this list).

  2. Dynamic approach: the web server increases or decreases the pool of workers (processes or threads) as a function of the number of requests. This behaviour decreases the queue size and the waiting time of each request.
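A minimal sketch of the static approach, again in PHP for continuity; it assumes a Unix-like system with the pcntl extension enabled, and the address and pool size are arbitrary:

<?php
// Pre-fork a fixed pool of single-tasking workers sharing one listening socket.
$server = stream_socket_server('tcp://127.0.0.1:8080', $errno, $errstr);
$poolSize = 4;                        // fixed at start-up time

for ($i = 0; $i < $poolSize; $i++) {
    if (pcntl_fork() === 0) {         // child process: become a worker
        while ($conn = stream_socket_accept($server, -1)) {
            fread($conn, 8192);       // read the request (ignored in this sketch)
            fwrite($conn, "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok");
            fclose($conn);            // one request at a time per worker
        }
        exit(0);
    }
}
while (pcntl_wait($status) > 0);      // parent: wait for all workers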


Reception Subsystem: queue of requests and responses management

The Reception Subsystem maintains queues of requests and responses to carry out its job within the context of a single, continuously open connection. A series of requests may be transmitted on it, and the responses to these requests must be sent back in the order of request arrival (FIFO). One common solution is for the server to maintain both an input and an output queue of requests. When a request is submitted for processing, it is removed from the input queue and inserted into the output queue. Once the processing is complete, the request is marked for release, but it remains on the output queue while at least one of its predecessors is still there. When the response is sent back to the browser, the related request is released. Here is a snippet in C-like pseudocode showing how the queues of requests and responses are managed.

// DEFINITIONS
// UserRequest: represents the user request
// WebResponse: represents the related web response

// DATA STRUCTURES
RequestQueueElement = (UserRequest, Marker);
ResponseQueueElement = (WebResponse, RelatedUserRequest);

// Requests that are not processed yet
Queue<RequestQueueElement> RequestInputQueue;

// Requests that are in processing or already processed
Queue<RequestQueueElement> RequestOutputQueue;

// Responses related to user requests (FIFO policy)
Queue<ResponseQueueElement> ResponseOutputQueue;

// ALGORITHM
while (true) {

    if <a user request arrived> {
        Enqueue(UserRequest, RequestInputQueue);
    }

    if <a user request can be processed> {
        UserRequestInProcessing = RemoveFrom(UserRequest, RequestInputQueue);
        Enqueue(UserRequestInProcessing, RequestOutputQueue);
        SubmitForProcessing(UserRequestInProcessing);
    }

    if <a user request has been processed> {
        MarkForRelease(UserRequest, RequestOutputQueue);
        Enqueue(WebResponse, ResponseOutputQueue);
    }

    // Responses leave in arrival order: a response is sent only once the
    // related request has no unreleased predecessors on the output queue.
    if <Length(ResponseOutputQueue) > 0> {
        WebResponse = Dequeue(ResponseOutputQueue);
        RemoveFrom(WebResponse.UserRequest, RequestOutputQueue);
        SendResponse(WebResponse);
    }
}
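As a cross-check of the release rule described above, here is a small runnable PHP version (the class and variable names are our own, not from the source); it uses SplQueue and assumes PHP 8.0+:

<?php
// An element of the output queue: a request plus its release marker.
class PendingRequest {
    public bool $done = false;
    public ?string $response = null;
    public function __construct(public string $id) {}
}

$outputQueue = new SplQueue();   // requests in processing or already processed

// Three pipelined requests arrive and are submitted for processing.
$requests = [];
foreach (['r1', 'r2', 'r3'] as $id) {
    $requests[$id] = new PendingRequest($id);
    $outputQueue->enqueue($requests[$id]);
}

// r2 finishes before r1: it is marked for release but must wait on the
// queue, because its predecessor r1 has not been answered yet.
$requests['r2']->done = true; $requests['r2']->response = '<body of r2>';
$requests['r1']->done = true; $requests['r1']->response = '<body of r1>';

// Release pass: send from the head of the queue while requests are done.
while (!$outputQueue->isEmpty() && $outputQueue->bottom()->done) {
    $req = $outputQueue->dequeue();
    echo "sending response for {$req->id}: {$req->response}\n";
}
// Prints r1 then r2; r3 stays queued until it is processed.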



 


Strong interests in cyberspace produce a wealth of highly sophisticated malicious software.

CYBERSPACE INHABITANTS
To enter cyberspace means, in all likelihood, to become a target of thieves, hackers, activists, terrorists, nation-state cyber warriors, and foreign intelligence services. In this scenario, the strong competition in cybercrime and cyberwarfare continuously brings an increasing proliferation of malicious programs and an increase in their level of sophistication.

 

MALWARE PROLIFERATION

According to the data published by the major antivirus companies, there is an average of 400,000 new malware samples every day.

Malware per Day

This figure may be somewhat inflated by the antivirus companies, but even if we take only 2% of 400,000 as genuine, it still means 8,000 new strains of computer malware per day in the wild.

Today it is impossible to live without digital technology, which is the base of digital society where governments, institutions, industries and individuals operate and interact in the everyday life.

So, to face high-profile data breaches and the ever-increasing cyber threats coming from that same digital world, huge investments in information security are made around the world (according to Gartner, 2015 spending was above $75.4 billion).

But security seems an illusion in light of the results of research carried out at Imperva, a data security firm in California.
A group of researchers infected a computer with 82 new malware samples and ran 40 threat-detection engines from the most important antivirus companies against them.
The result was that only 5 percent of the samples were detected. This means that although antivirus software is almost useless against new malware, it remains necessary for protecting us from already known threats by raising the overall level of security and protection.

 

EVERYONE COULD BE A TARGET

In the leak involving Twitter on June 8th, 2016, user accounts were compromised, but not on Twitter's servers. This means that 32,888,300 users were hacked individually by a Russian hacker. This is striking, and it underlines how easy it is to guess users' passwords and to infect users' computers in order to steal their credentials.
The password frequencies in the following chart show how little attention users pay to the passwords they use. The chart considers only the 25 most used passwords; the statistic is computed over 20,210,641 user accounts released in several leaks [04].
Users probably think: why should I be hacked? I am an ordinary person; who cares about me? But what matters to a bad guy is making a profit, so a huge quantity of accounts to sell on the dark market is a good reason to steal every Twitter user's credentials. In fact, volume is the key factor that attracts the buyer.

Most Used Password

Even if chameleon attacks or werewolf attacks can easily bypass antivirus defenses, it is important to pay more attention to our access keys to prevent leaks of this magnitude because, I think, most of these Twitter accounts were simply guessed by the bad guys.

 

 

MALICIOUS SOFTWARE ANALYSIS

Malicious Software is characterized by four components:

  • propagation methods,
  • exploits,
  • payloads,
  • level of sophistication.

 

Propagation methods are the means of transporting malicious code from its origin to the target, and they depend on scale and specificity. The target may consist of machines connected to the internet (large scale); this could mean, for example, that someone is trying to build a botnet. Or the target could be a small area network (small scale), for example when a company is attacked for a specific reason.
Specificity concerns the constraints placed on the malicious code. If they are based on technical limitations, they could be a particular operating system or software version; if they are based on personal information, they could be account credentials, details about co-workers, or the presence of certain filenames on the victim's machine.
The level of propagation is directly proportional to the probability of detection and to the limitation of the defensive response.

Exploits act to enable the propagation method and the operation of the payload.
An exploit's severity is indicated by the CVSS score [02] assigned to the vulnerability it targets.
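To make this concrete (the example is ours, not from the source): in CVSS v3.1, a vulnerability that is remotely exploitable over the network, requires no privileges and no user interaction, and fully compromises confidentiality, integrity, and availability is written as the vector CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H and scores 9.8 (Critical).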

The payload is code written to manipulate system resources and create some effect on a computer system.
Today we can see an increasing level of payload customization: there are payloads for web servers, for desktop computers, for domain controllers, for smartphones, and so on. Each payload is tailored to a specific target in order to stay very small and to guarantee the maximum likelihood of success.

The level of sophistication of malicious code can also tell us something useful. Malicious software sophistication analysis is an approach that can be used to figure out who is behind it: individuals, groups, organizations, or states.
In this scenario we have, on one side, generic malware created by individuals or small groups who generally make use of third-party exploit kits such as the Blackhole Exploit Kit [05]; on the other side, we have organizations or states with greater resources, able to develop innovative attack methods and new exploits such as Duqu 2.0 [06], the most sophisticated malware ever seen.

 

The power balance between attacker and defender is strongly asymmetric. The defender needs huge quantities of resources to defend himself, not least because he should operate proactively to fight these kinds of threats.
The study of malicious code is important for understanding how attackers act, in order to detect attacks in progress and to prepare a better defensive response.

 

REFERENCES

[01] Trey Herr, Eric Armbrust, Milware: Identification and Implications of State Authored Malicious Software, The George Washington University, 2015;

[02] https://www.first.org/: CVSS, the Common Vulnerability Scoring System;

[03] Marc Goodman, Future Crimes: Inside the Digital Underground and the Battle for Our Connected World, Anchor Books, 2015;

[04] https://www.leakedsource.com/: leaked databases that contain information of large public interest;

[05] https://en.wikipedia.org/wiki/Blackhole_exploit_kit: the Blackhole exploit kit was, as of 2012, the most prevalent web threat;

[06] https://en.wikipedia.org/wiki/Duqu_2.0: Kaspersky discovered the malware, and Symantec confirmed those findings.