Penetration Testing and Setup

Packt
27 Sep 2013
35 min read
Penetration Testing goes beyond an assessment by evaluating identified vulnerabilities to verify whether each vulnerability is real or a false positive. For example, an audit or an assessment may utilize scanning tools that report a few hundred possible vulnerabilities on multiple systems. A Penetration Test would attempt to attack those vulnerabilities in the same manner as a malicious hacker to verify which vulnerabilities are genuine, reducing the real list of system vulnerabilities to a handful of security weaknesses.

The most effective Penetration Tests are the ones that target a very specific system with a very specific goal. Quality over quantity is the true test of a successful Penetration Test. Enumerating a single system during a targeted attack reveals more about system security, and about response time in handling incidents, than a wide-spectrum attack. By carefully choosing valuable targets, a Penetration Tester can determine the entire security infrastructure and associated risk for a valuable asset.

It is a common misinterpretation that Penetration Testing itself makes systems more secure, and this should be clearly explained to all potential customers: Penetration Testing evaluates the effectiveness of existing security. If a customer does not have strong security, then they will receive little value from Penetration Testing services. As a consultant, it is recommended that Penetration Testing services are offered as a means to verify security for existing systems once a customer believes they have exhausted all efforts to secure those systems and is ready to evaluate whether there are any remaining gaps in securing them.

Positioning a proper scope of work is critical when selling Penetration Testing services. The scope of work defines what systems and applications are being targeted as well as what toolsets may be used to compromise vulnerabilities that are found. Best practice is working with your customer during a design session to develop an acceptable scope of work that doesn't impact the value of the results.

Web Penetration Testing with Kali Linux, the next generation of BackTrack, is a hands-on guide that will provide you with step-by-step methods for finding vulnerabilities and exploiting web applications. This article will cover researching targets, identifying and exploiting vulnerabilities in web applications as well as in clients using web application services, defending web applications against common attacks, and building Penetration Testing deliverables for a professional services practice. We believe this article is great for anyone who is interested in learning how to become a Penetration Tester, users who are new to Kali Linux and want to learn the features and differences in Kali versus BackTrack, and seasoned Penetration Testers who may need a refresher or reference on new tools and techniques.

This article will break down the fundamental concepts behind various security services as well as guidelines for building a professional Penetration Testing practice. Concepts include differentiating a Penetration Test from other services, a methodology overview, and targeting web applications. This article also provides a brief overview of setting up a Kali Linux testing or real environment.

Web application Penetration Testing concepts

A web application is any application that uses a web browser as a client. This can be a simple message board or a very complex spreadsheet.
Web applications are popular based on ease of access to services and centralized management of a system used by multiple parties. Requirements for accessing a web application can follow industry web browser client standards, simplifying expectations for both the service providers and the hosts accessing the application.

Web applications are the most widely used type of application within any organization. They are the standard for most Internet-based applications. If you look at smartphones and tablets, you will find that most applications on these devices are also web applications. This has created a new and large target-rich surface for security professionals as well as for attackers exploiting those systems.

Penetration Testing web applications can vary in scope, since there is a vast number of system types and business use cases for web application services. The core web application tiers, which are the hosting servers, the accessing devices, and the data repository, should be tested along with communication between the tiers during a web application Penetration Testing exercise.

An example of developing a scope for a web application Penetration Test is testing a Linux server hosting applications for mobile devices. The scope of work at a minimum should include evaluating the Linux server (operating system, network configuration, and so on), the applications hosted on the server, how systems and users authenticate, the client devices accessing the server, and communication between all three tiers. Additional areas of evaluation that could be included in the scope of work are how devices are obtained by employees, how devices are used outside of accessing the application, the surrounding network(s), maintenance of the systems, and the users of the systems. Some examples of why these other areas of scope matter are having the Linux server compromised by permitting connection from a mobile device infected by other means, or obtaining an authorized mobile device through social engineering to capture confidential information.

Some deliverable examples in this article offer checkbox surveys that can assist with walking a customer through possible targets for a web application Penetration Testing scope of work. Every scope of work should be customized around your customer's business objectives, expected timeframe of performance, allocated funds, and desired outcome. As stated before, templates serve as tools to enhance a design session for developing a scope of work.

Penetration Testing methodology

There are logical steps recommended for performing a Penetration Test. The first step is identifying the project's starting status. The most common terminology defining the starting state is Black box testing, White box testing, or a blend between White and Black box testing known as Gray box testing.

Black box testing assumes the Penetration Tester has no prior knowledge of the target network, company processes, or services it provides. Starting a Black box project requires a lot of reconnaissance and, typically, is a longer engagement, based on the concept that real-world attackers can spend long durations of time studying targets before launching attacks. As security professionals, we find that Black box testing presents some problems when scoping a Penetration Test. Depending on the system and your familiarity with the environment, it can be difficult to estimate how long the reconnaissance phase will last. This usually presents a billing problem.
Customers, in most cases, are not willing to write a blank cheque for you to spend unlimited time and resources on the reconnaissance phase; however, if you do not spend the time needed, then your Penetration Test is over before it has begun. It is also unrealistic, because a motivated attacker will not necessarily have the same scoping and billing restrictions as a professional Penetration Tester. That is why we recommend Gray box over Black box testing.

White box testing is when a Penetration Tester has intimate knowledge about the system. The goals of the Penetration Test are clearly defined, and the outcome of the report from the test is usually expected. The tester has been provided with details on the target such as network information, type of systems, company processes, and services. White box testing typically is focused on a particular business objective, such as meeting a compliance need, rather than a generic assessment, and could be a shorter engagement depending on how the target space is limited. White box assignments could reduce information gathering efforts, such as reconnaissance services, equaling less cost for Penetration Testing services.

Gray box testing falls in between Black and White box testing. It is when the client or system owner agrees that some unknown information will eventually be discovered during a Reconnaissance phase, but allows the Penetration Tester to skip this part. The Penetration Tester is provided some basic details of the target; however, internal workings and some other privileged information are still kept from the Penetration Tester. Real attackers tend to have some information about a target prior to engaging it. Most attackers (with the exception of script kiddies or individuals downloading tools and running them) do not choose random targets. They are motivated and have usually interacted in some way with their target before attempting an attack. Gray box is an attractive approach for many security professionals conducting Penetration Tests because it mimics real-world approaches used by attackers and focuses on vulnerabilities rather than reconnaissance.

The scope of work defines how penetration services will be started and executed. Kicking off a Penetration Testing service engagement should include an information gathering session used to document the target environment and define the boundaries of the assignment to avoid unnecessary reconnaissance services or attacking systems that are out of scope. A well-defined scope of work will save a service provider from scope creep (defined as uncontrolled changes or continuous growth in a project's scope), keep the engagement within the expected timeframe, and help provide a more accurate deliverable upon concluding services.

Real attackers do not have boundaries such as time, funding, ethics, or tools, meaning that limiting a Penetration Testing scope may not represent a real-world scenario. In contrast to a limited scope, having an unlimited scope may never evaluate critical vulnerabilities if a Penetration Test is concluded prior to attacking desired systems. For example, a Penetration Tester may capture user credentials to critical systems and conclude with accessing those systems without testing how vulnerable those systems are to network-based attacks. It's also important to include who is aware of the Penetration Test as a part of the scope. Real attackers may strike at any time, and probably when people are least expecting it.
Some fundamentals for developing a scope of work for a Penetration Test are as follows:

- Definition of Target System(s): This specifies what systems should be tested. This includes the location on the network, types of systems, and business use of those systems.
- Timeframe of Work Performed: When the testing should start and what timeframe is provided to meet the specified goals. Best practice is NOT to limit the time scope to business hours.
- How Targets Are Evaluated: What types of testing methods, such as scanning or exploitation, are and are not permitted? What is the risk associated with specific permitted testing methods? What is the impact of targets that become inoperable due to penetration attempts? Examples are using social networking by pretending to be an employee, a denial of service attack on key systems, or executing scripts on vulnerable servers. Some attack methods may pose a higher risk of damaging systems than others.
- Tools and software: What tools and software are used during the Penetration Test? This is important and a little controversial. Many security professionals believe that if they disclose their tools they will be giving away their secret sauce. We believe this is only the case when security professionals use widely available commercial products and are simply rebranding canned reports from these products. Seasoned security professionals will disclose the tools being used and, in some cases when vulnerabilities are exploited, documentation on the commands used within the tools to exploit a vulnerability. This makes the exploit re-creatable and allows the client to truly understand how the system was compromised and the difficulty associated with the exploit.
- Notified Parties: Who is aware of the Penetration Test? Are they briefed beforehand and able to prepare? Is reaction to penetration efforts part of the scope being tested? If so, it may make sense not to inform the security operations team prior to the Penetration Test. This is very important when looking at web applications that may be hosted by another party, such as a cloud service provider, that could be impacted by your services.
- Initial Access Level: What type of information and access is provided prior to kicking off the Penetration Test? Does the Penetration Tester have access to the server via the Internet and/or Intranet? What type of initial account-level access is granted? Is this a Black, White, or Gray box assignment for each target?
- Definition of Target Space: This defines the specific business functions included in the Penetration Test. For example, conducting a Penetration Test on a specific web application used by sales while not touching a different application hosted on the same server.
- Identification of Critical Operation Areas: Define systems that should not be touched to avoid a negative impact from the Penetration Testing services. Is the active authentication server off limits? It's important to make critical assets clear prior to engaging a target.
- Definition of the Flag: It is important to define how far a Penetration Test should compromise a system or a process. Should data be removed from the network, or should the attacker just obtain a specific level of unauthorized access?
- Deliverable: What type of final report is expected? What goals does the client specify to be accomplished upon closing a Penetration Testing service agreement? Make sure the goals are not open-ended to avoid scope creep of the expected service. Is any of the data classified or designated for a specific group of people? How should the final report be delivered? It is important to deliver a sample report or periodic updates so that there are no surprises in the final report.
- Remediation expectations: Are vulnerabilities expected to be documented with possible remediation action items? Who should be notified if a system is rendered unusable during a Penetration Testing exercise? What happens if sensitive data is discovered? Most Penetration Testing services do NOT include remediation of problems found.
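These fundamentals can be captured as a simple checklist to walk through with the customer during the design session. The following is a minimal, illustrative sketch only; the field names and sample values are hypothetical and are not taken from any specific template referenced in this article.

```python
# Hypothetical scope-of-work checklist covering the fundamentals listed above.
# Field names and sample values are illustrative placeholders only.
scope_of_work = {
    "target_systems": ["web01.example.com", "10.0.10.0/24"],        # what is tested and where it lives
    "timeframe": {"start": "2013-10-01", "end": "2013-10-14",
                  "business_hours_only": False},                     # best practice: not limited to business hours
    "evaluation_methods": {"scanning": True, "exploitation": True,
                           "denial_of_service": False},              # permitted and prohibited methods
    "tools_and_software": ["Kali Linux", "documented custom scripts"],
    "notified_parties": ["CISO"],                                     # who is aware of the test
    "initial_access_level": "Gray box",                               # Black, White, or Gray box per target
    "target_space": "customer-facing sales web application",          # business functions in scope
    "critical_operation_areas": ["production authentication server"], # off-limits systems
    "flag": "demonstrate read access to the customer database",       # how far the test should go
    "deliverable": "written report with remediation action items",
    "remediation_expected": True,
}

# Example use: confirm that an out-of-scope method is never attempted.
assert not scope_of_work["evaluation_methods"]["denial_of_service"]
```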
Some service definitions that should be used to define the scope of services are as follows:

Security Audit: Evaluating a system or an application's risk level against a set of standards or baselines. Standards are mandatory rules, while baselines are the minimal acceptable level of security. Standards and baselines achieve consistency in security implementations and can be specific to industries, technologies, and processes. Most requests for security services for audits are focused on passing an official audit (for example, preparing for a corporate or a government audit) or proving that the baseline requirements are met for a mandatory set of regulations (for example, following the HIPAA and HITECH mandates for protecting healthcare records). It is important to inform potential customers if your audit services include any level of insurance or protection if an audit isn't successful after your services. It's also critical to document the type of remediation included with audit services (that is, whether you would identify a problem, offer a remediation action plan, or fix the problem). Auditing for compliance is much more than running a security tool. It relies heavily on standard types of reporting and following a methodology that is an accepted standard for the audit. In many cases, security audits give customers a false sense of security depending on what standards or baselines are being audited. Most standards and baselines have a long update process that is unable to keep up with the rapid changes in threats found in today's cyber world. It is HIGHLY recommended to offer security services beyond standards and baselines to raise the level of security to an acceptable level of protection for real-world threats. Services should include following up with customers to assist with remediation, along with raising the bar for security beyond any industry standards and baselines.

Vulnerability Assessment: This is the process in which network devices, operating systems, and application software are scanned in order to identify the presence of known and unknown vulnerabilities. A vulnerability is a gap, error, or weakness in how a system is designed, used, and protected. When a vulnerability is exploited, it can result in unauthorized access, escalation of privileges, denial of service to the asset, or other outcomes. Vulnerability Assessments typically stop once a vulnerability is found, meaning that the Penetration Tester doesn't execute an attack against the vulnerability to verify whether it's genuine. A Vulnerability Assessment deliverable provides the potential risk associated with all the vulnerabilities found, along with possible remediation steps. There are many solutions, such as Kali Linux, that can be used to scan for vulnerabilities based on system/server type, operating system, ports open for communication, and other means. Vulnerability Assessments can be White, Gray, or Black box depending on the nature of the assignment. Vulnerability scans are only useful if they calculate risk.
The downside of many security audits is vulnerability scan results that make security audits thicker without providing any real value. Many vulnerability scanners produce false positives or identify vulnerabilities that are not really there. They do this because they incorrectly identify the OS, or because they look for specific patches to fix vulnerabilities without looking at rollup patches (patches that contain multiple smaller patches) or software revisions. Assigning risk to vulnerabilities gives a true definition and sense of how vulnerable a system is. In many cases, this means that vulnerability reports generated by automated tools will need to be checked. Customers will want to know the risk associated with a vulnerability and the expected cost to reduce any risk found. To provide the value of cost, it's important to understand how to calculate risk.

Calculating risk

It is important to understand how to calculate the risk associated with vulnerabilities found, so that a decision can be made on how to react. Most customers look to the CISSP triangle of CIA when determining the impact of risk. CIA is the confidentiality, integrity, and availability of a particular system or application. When determining the impact of risk, customers must look at each component individually as well as the vulnerability in its entirety to gain a true perspective of the risk and determine the likelihood of impact. It is up to the customer to decide whether the risk associated with a vulnerability found justifies or outweighs the cost of the controls required to reduce the risk to an acceptable level. A customer may not be able to spend a million dollars on remediating a threat that compromises guest printers; however, they will be very willing to spend twice as much on protecting systems with the company's confidential data.

The Certified Information Systems Security Professional (CISSP) curriculum lists formulas for calculating risk as follows. A Single Loss Expectancy (SLE) is the cost of a single loss to an Asset Value (AV). The Exposure Factor (EF) is the impact the loss of the asset will have on an organization, such as the loss of revenue due to an Internet-facing server shutting down. Customers should calculate the SLE of an asset when evaluating security investments to help identify the level of funding that should be assigned for controls. If an SLE would cause a million dollars of damage to the company, it would make sense to consider that in the budget.

The Single Loss Expectancy formula: SLE = AV * EF

The next important formula is identifying how often the SLE could occur. If an SLE worth a million dollars could happen once in a million years, such as a meteor falling out of the sky, it may not be worth investing millions in a protection dome around your headquarters. In contrast, if a fire could cause a million dollars' worth of damage and is expected every couple of years, it would be wise to invest in a fire prevention system. The number of times an asset is lost per year is called the Annual Rate of Occurrence (ARO). The Annualized Loss Expectancy (ALE) is an expression of the annual anticipated loss due to risk. For example, a meteor falling has a very low annualized expectancy (once in a million years), while a fire is a lot more likely and should be calculated into future investments for protecting a building.

The Annualized Loss Expectancy formula: ALE = SLE * ARO
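As a worked illustration of these formulas, the following minimal sketch plugs hypothetical dollar values into the fire and meteor scenarios described above; the numbers are assumptions chosen only to show how SLE and ALE are computed, not figures from this article.

```python
# Worked example of the SLE and ALE formulas using hypothetical values
# for the fire and meteor scenarios discussed above.
asset_value = 1_000_000      # AV: value of the building and its contents, in dollars (assumed)
exposure_factor = 1.0        # EF: assume a fire destroys the full asset value
fire_aro = 0.5               # ARO: a fire expected roughly once every two years (assumed)
meteor_aro = 1e-6            # ARO: a meteor strike once in a million years (assumed)

sle = asset_value * exposure_factor   # SLE = AV * EF
fire_ale = sle * fire_aro             # ALE = SLE * ARO
meteor_ale = sle * meteor_aro

print(f"SLE        = ${sle:,.0f} per incident")
print(f"Fire ALE   = ${fire_ale:,.0f} per year")    # justifies a fire prevention system
print(f"Meteor ALE = ${meteor_ale:,.2f} per year")  # does not justify a protective dome
```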
The final and important question to answer is the risk associated with an asset, which is used to figure out the investment in controls. This can determine whether, and how much, the customer should invest in remediating a vulnerability found in an asset.

The risk formula: Risk = Asset Value * Threat * Vulnerability * Impact

It is common for customers not to have values for the variables in Risk Management formulas. These formulas serve as guidance systems to help the customer better understand how they should invest in security. In the previous examples, using the formulas with estimated values for a meteor strike and a fire in a building should help explain, in estimated dollar values, why a fire prevention system is a better investment than a metal dome protecting against falling objects.

Penetration Testing is the method of attacking system vulnerabilities in a similar way to real malicious attackers. Typically, Penetration Testing services are requested when a system or network has exhausted investments in security and clients are seeking to verify whether all avenues of security have been covered. Penetration Testing can be Black, White, or Gray box depending on the scope of work agreed upon.

The key difference between a Penetration Test and a Vulnerability Assessment is that a Penetration Test will act upon vulnerabilities found and verify whether they are real, reducing the list of confirmed risks associated with a target. A Vulnerability Assessment of a target could change into a Penetration Test once the asset owner has authorized the service provider to execute attacks against the vulnerabilities identified in the target. Typically, Penetration Testing services have a higher cost associated with them, since the services require more expensive resources, tools, and time to successfully complete assignments. One popular misconception is that a Penetration Testing service enhances IT security because such services have a higher cost associated with them than other security services. Penetration Testing does not make IT networks more secure, since the services evaluate existing security! A customer should not consider a Penetration Test if there is a belief that the target is not completely secure.

Penetration Testing can cause a negative impact to systems. It's critical to have authorization in writing from the proper authorities before starting a Penetration Test of an asset owned by another party. Not having proper authorization could be seen as illegal hacking by authorities. Authorization should include who is liable for any damages caused during a penetration exercise, as well as who should be contacted to avoid future negative impacts once a system is damaged. Best practice is alerting the customers to all the potential risks associated with each method used to compromise a target prior to executing the attack, to level set expectations. This is also one of the reasons we recommend targeted Penetration Testing with a small scope: it is easier to be much more methodical in your approach. As a common best practice, we obtain confirmation that, in a worst-case scenario, a system can be restored by the customer using backups or some other disaster recovery method.

Penetration Testing deliverable expectations should be well defined while agreeing on a scope of work. The most common method by which hackers obtain information about targets is social engineering, that is, attacking people rather than systems. An example is interviewing for a position within the organization and walking out a week later with sensitive data offered without resistance.
This type of deliverable may not be acceptable if a customer is interested in knowing how vulnerable their web applications are to remote attack. It is also important to have a defined end goal so that all parties understand when the penetration services are considered concluded. Usually, an agreed-upon deliverable serves this purpose. A Penetration Testing engagement's success for a service provider is based on the profitability of the time and services used to deliver the Penetration Testing engagement. A more efficient and accurate process means better results with fewer services used. The higher the quality of the deliverables, the closer the service can meet customer expectations, resulting in a better reputation and more future business. For these reasons, it's important to develop a methodology for executing Penetration Testing services as well as for reporting what is found.

Kali Penetration Testing concepts

Kali Linux is designed to follow the flow of a Penetration Testing service engagement. Regardless of whether the starting point is White, Black, or Gray box testing, there is a set of steps that should be followed when Penetration Testing a target with Kali or other tools.

Step 1 – Reconnaissance

You should learn as much as possible about a target's environment and system traits prior to launching an attack. The more information you can identify about a target, the better chance you have of identifying the easiest and fastest path to success. Black box testing requires more reconnaissance than White box testing, since data is not provided about the target(s). Reconnaissance services can include researching a target's Internet footprint, monitoring resources, people, and processes, scanning for network information such as IP addresses and system types, social engineering public services such as the help desk, and other means.

Reconnaissance is the first step of a Penetration Testing service engagement, regardless of whether you are verifying known information or seeking new intelligence on a target. Reconnaissance begins by defining the target environment based on the scope of work. Once the target is identified, research is performed to gather intelligence on the target, such as what ports are used for communication, where it is hosted, the type of services being offered to clients, and so on. This data will develop a plan of action regarding the easiest methods of obtaining the desired results. The deliverable of a reconnaissance assignment should include a list of all the assets being targeted, what applications are associated with the assets, the services used, and possible asset owners. Kali Linux offers a category labeled Information Gathering that serves as a Reconnaissance resource. Tools include methods to research network, data center, wireless, and host systems.

The following is the list of Reconnaissance goals:

- Identify target(s)
- Define applications and business use
- Identify system types
- Identify available ports
- Identify running services
- Passively social engineer information
- Document findings
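As a minimal illustration of the "identify available ports" and "identify running services" goals, the following sketch uses only the Python standard library to check whether a handful of common TCP ports answer on a target. The hostname is a placeholder, the check should only be run against hosts that are explicitly in scope, and it is not a substitute for the Information Gathering tools shipped with Kali.

```python
#!/usr/bin/env python3
"""Minimal reconnaissance sketch: check which common TCP ports answer on a target.

Only run this against hosts that are explicitly within the agreed scope of work.
"""
import socket

TARGET = "target.example.com"   # hypothetical in-scope host
COMMON_PORTS = {21: "ftp", 22: "ssh", 25: "smtp", 80: "http", 443: "https", 3306: "mysql"}


def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for port, service in sorted(COMMON_PORTS.items()):
        state = "open" if check_port(TARGET, port) else "closed/filtered"
        print(f"{TARGET}:{port} ({service}) {state}")
```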
Step 2 – Target evaluation

Once a target is identified and researched through Reconnaissance efforts, the next step is evaluating the target for vulnerabilities. At this point, the Penetration Tester should know enough about a target to select how to analyze it for possible vulnerabilities or weaknesses. Examples include testing for weaknesses in how the web application operates, in identified services, in communication ports, or by other means. Vulnerability Assessments and Security Audits typically conclude after this phase of the target evaluation process.

Capturing detailed information through Reconnaissance improves the accuracy of targeting possible vulnerabilities, shortens the execution time for target evaluation services, and helps to evade existing security. For example, running a generic vulnerability scanner against a web application server would probably alert the asset owner, take a while to execute, and only generate generic details about the system and applications. Scanning a server for a specific vulnerability based on data obtained from Reconnaissance would be harder for the asset owner to detect, provide a good possible vulnerability to exploit, and take seconds to execute.

Evaluating targets for vulnerabilities can be manual or automated through tools. There is a range of tools offered in Kali Linux, grouped in a category labeled Vulnerability Analysis. Tools range from assessing network devices to databases.

The following is the list of Target Evaluation goals:

- Evaluate targets for weaknesses
- Identify and prioritize vulnerable systems
- Map vulnerable systems to asset owners
- Document findings

Step 3 – Exploitation

This step exploits the vulnerabilities found to verify whether the vulnerabilities are real and what possible information or access can be obtained. Exploitation separates Penetration Testing services from passive services such as Vulnerability Assessments and Audits. Exploitation and all the following steps have legal ramifications without authorization from the asset owners of the target.

The success of this step is heavily dependent on previous efforts. Most exploits are developed for specific vulnerabilities and can cause undesired consequences if executed incorrectly. Best practice is identifying a handful of vulnerabilities and developing an attack strategy based on leading with the most vulnerable first. Exploiting targets can be manual or automated depending on the end objective. Some examples are running SQL Injections to gain admin access to a web application or social engineering a Helpdesk person into providing admin login credentials. Kali Linux offers a dedicated catalog of tools titled Exploitation Tools for exploiting targets, ranging from exploits for specific services to social engineering packages.

The following is the list of Exploitation goals:

- Exploit vulnerabilities
- Obtain a foothold
- Capture unauthorized data
- Aggressively social engineer
- Attack other systems or applications
- Document findings

Step 4 – Privilege Escalation

Having access to a target does not guarantee accomplishing the goal of a penetration assignment. In many cases, exploiting a vulnerable system may only give limited access to a target's data and resources. The attacker must escalate the privileges granted to gain the access required to capture the flag, which could be sensitive data, critical infrastructure, and so on. Privilege Escalation can include identifying and cracking passwords, user accounts, and unauthorized IT space. An example is achieving limited user access, identifying a shadow file containing administration login credentials, obtaining an administrator password through password cracking, and accessing internal application systems with administrator access rights. Kali Linux includes a number of tools that can help gain Privilege Escalation through the Password Attacks and Exploitation Tools catalogs.
Since most of these tools include methods to obtain initial access as well as Privilege Escalation, they are gathered and grouped according to their toolsets.

The following is a list of Privilege Escalation goals:

- Obtain escalated access to system(s) and network(s)
- Uncover other user account information
- Access other systems with escalated privileges
- Document findings

Step 5 – Maintaining a foothold

The final step is maintaining access by establishing other entry points into the target and, if possible, covering evidence of the penetration. It is possible that penetration efforts will trigger defenses that will eventually secure how the Penetration Tester obtained access to the network. Best practice is establishing other means of accessing the target as insurance against the primary path being closed. Alternative access methods could be backdoors, new administration accounts, encrypted tunnels, and new network access channels.

The other important aspect of maintaining a foothold in a target is removing evidence of the penetration. This will make it harder to detect the attack, thus reducing the reaction by security defenses. Removing evidence includes erasing user logs, masking existing access channels, and removing the traces of tampering, such as error messages caused by penetration efforts. Kali Linux includes a catalog titled Maintaining Access focused on keeping a foothold within a target. Tools are used for establishing various forms of backdoors into a target.

The following is a list of goals for maintaining a foothold:

- Establish multiple access methods to the target network
- Remove evidence of authorized access
- Repair systems impacted by exploitation
- Inject false data if needed
- Hide communication methods through encryption and other means
- Document findings

Introducing Kali Linux

The creators of BackTrack have released a new, advanced Penetration Testing Linux distribution named Kali Linux. BackTrack 5 was the last major version of the BackTrack distribution. The creators of BackTrack decided that, to move forward with the challenges of cyber security and modern testing, a new foundation was needed. Kali Linux was born and released on March 13th, 2013. Kali Linux is based on Debian and an FHS-compliant filesystem.

Kali has many advantages over BackTrack. It comes with many more updated tools. The tools are streamlined with the Debian repositories and synchronized four times a day. That means users have the latest package updates and security fixes. The FHS-compliant filesystem translates into being able to run most tools from anywhere on the system. Kali has also made customization, unattended installation, and flexible desktop environments strong features. Kali Linux is available for download at http://www.kali.org/.

Kali system setup

Kali Linux can be downloaded in a few different ways. One of the most popular ways to get Kali Linux is to download the ISO image. The ISO image is available in 32-bit and 64-bit versions. If you plan on using Kali Linux on a virtual machine such as VMware, there is a prebuilt VM image. The advantage of downloading the VM image is that it comes preloaded with VMware tools. The VM image is a 32-bit image with Physical Address Extension (PAE) support. In theory, a PAE kernel allows the system to access more system memory than a traditional 32-bit operating system. There have been some well-known personalities in the world of operating systems who have argued for and against the usefulness of a PAE kernel.
However, the authors of this article suggest using the VM image of Kali Linux if you plan on using it in a virtual environment.

Running Kali Linux from external media

Kali Linux can be run without installing software on a host hard drive by accessing it from an external media source such as a USB drive or DVD. This method is simple to enable; however, it has performance and operational implications. Because Kali Linux has to load programs from external media, performance is impacted, and some applications or hardware settings may not operate properly. Using read-only storage media does not permit saving custom settings that may be required to make Kali Linux operate correctly. It's highly recommended to install Kali Linux on a host hard drive.

Installing Kali Linux

Installing Kali Linux on your computer is straightforward and similar to installing other operating systems. First, you'll need compatible computer hardware. Kali is supported on i386, amd64, and ARM (both armel and armhf) platforms. The hardware requirements are shown in the following list, although we suggest exceeding the minimum amount by at least three times. Kali Linux, in general, will perform better if it has access to more RAM and is installed on newer machines. Download Kali Linux and either burn the ISO to DVD or prepare a USB stick with Kali Linux Live as the installation medium. If you do not have a DVD drive or a USB port on your computer, check out the Kali Linux Network Install.

The following is a list of minimum installation requirements:

- A minimum of 8 GB of disk space for installing Kali Linux.
- For i386 and amd64 architectures, a minimum of 512 MB of RAM.
- A CD/DVD drive or USB boot support.
- You will also need an active Internet connection before installation. This is very important, or you will not be able to configure and access repositories during installation.

The installation steps are as follows:

1. When you start Kali, you will be presented with a Boot Install screen. You may choose what type of installation (GUI-based or text-based) you would like to perform.
2. Select the local language preference, country, and keyboard preferences.
3. Select a hostname for the Kali Linux host. The default hostname is Kali.
4. Select a password. Simple passwords may not work, so choose something that has some degree of complexity.
5. The next prompt asks for your timezone. Modify accordingly and select Continue. The next screenshot shows selecting Eastern standard time.
6. The installer will ask to set up your partitions. If you are installing Kali on a virtual image, select Guided Install – Whole Disk. This will destroy all data on the disk and install Kali Linux. Keep in mind that on a virtual machine, only the virtual disk is getting destroyed. Advanced users can select manual configurations to customize partitions. Kali also offers the option of using LVM (Logical Volume Manager). LVM allows you to manage and resize partitions after installation. In theory, it is supposed to allow flexibility when storage needs change from the initial installation. However, unless your Kali Linux needs are extremely complex, you most likely will not need to use it.
7. The last window displays a review of the installation settings. If everything looks correct, select Yes to continue the process, as shown in the following screenshot.
8. Kali Linux uses central repositories to distribute application packages. If you would like to install these packages, you need to use a network mirror. The packages are downloaded via the HTTP protocol.
9. If your network uses a proxy server, you will also need to configure the proxy settings for your network.
10. Kali will prompt you to install GRUB. GRUB is a multi-boot loader that gives the user the ability to pick and boot into multiple operating systems. In almost all cases, you should select to install GRUB. If you are configuring your system to dual boot, you will want to make sure GRUB recognizes the other operating systems in order for it to give users the option to boot into an alternative operating system. If it does not detect any other operating systems, the machine will automatically boot into Kali Linux.
11. Congratulations! You have finished installing Kali Linux. You will want to remove all media (physical or virtual) and select Continue to reboot your system.

Kali Linux and VM image first run

On some Kali installation methods, you will be asked to set the root password. When Kali Linux boots up, enter the root username and the password you selected. If you downloaded a VM image of Kali, you will need the root password. The default username is root and the password is toor.

Kali toolset overview

Kali Linux offers a number of customized tools designed for Penetration Testing. Tools are categorized in the following groups, as seen in the drop-down menu shown in the following screenshot:

- Information Gathering: These are Reconnaissance tools used to gather data on your target network and devices. Tools range from identifying devices to identifying protocols used.
- Vulnerability Analysis: Tools from this section focus on evaluating systems for vulnerabilities. Typically, these are run against systems found using the Information Gathering Reconnaissance tools.
- Web Applications: These are tools used to audit and exploit vulnerabilities in web servers. Many of the audit tools we will refer to in this article come directly from this category. However, web applications do not always refer to attacks against web servers; they can simply be web-based tools for networking services. For example, web proxies will be found under this section.
- Password Attacks: This section of tools primarily deals with brute force or the offline computation of passwords or shared keys used for authentication.
- Wireless Attacks: These are tools used to exploit vulnerabilities found in wireless protocols. 802.11 tools will be found here, including tools such as aircrack, airmon, and wireless password cracking tools. In addition, this section has tools related to RFID and Bluetooth vulnerabilities as well. In many cases, the tools in this section will need to be used with a wireless adapter that Kali can configure to be put in promiscuous mode.
- Exploitation Tools: These are tools used to exploit vulnerabilities found in systems. Usually, a vulnerability is identified during a Vulnerability Assessment of a target.
- Sniffing and Spoofing: These are tools used for network packet captures, network packet manipulators, packet crafting applications, and web spoofing. There are also a few VoIP reconstruction applications.
- Maintaining Access: Maintaining Access tools are used once a foothold is established into a target system or network. It is common to find compromised systems having multiple hooks back to the attacker to provide alternative routes in the event that a vulnerability used by the attacker is found and remediated.
- Reverse Engineering: These tools are used to disassemble executables and debug programs. The purpose of reverse engineering is analyzing how a program was developed so that it can be copied, modified, or used as the basis for developing other programs. Reverse Engineering is also used for malware analysis to determine what an executable does, or by researchers attempting to find vulnerabilities in software applications.
- Stress Testing: Stress Testing tools are used to evaluate how much data a system can handle. Undesired outcomes could be obtained from overloading systems, such as causing a device controlling network communication to open all communication channels, or a system shutting down (also known as a denial of service attack).
- Hardware Hacking: This section contains Android tools, which could be classified as mobile, and Arduino tools that are used for programming and controlling other small electronic devices.
- Forensics: Forensics tools are used to monitor and analyze computer network traffic and applications.
- Reporting Tools: Reporting tools are methods to deliver information found during a penetration exercise.
- System Services: This is where you can enable and disable Kali services. Services are grouped into BeEF, Dradis, HTTP, Metasploit, MySQL, and SSH.

Summary

This article served as an introduction to Penetration Testing web applications and an overview of setting up Kali Linux. We started off defining best practices for performing Penetration Testing services, including defining risk and the differences between various services. The key takeaway is to understand what makes a Penetration Test different from other security services, how to properly scope a level of service, and the best methods to perform services. Positioning the right expectations upfront with a potential client will better qualify the opportunity and simplify developing an acceptable scope of work.

This article continued with providing an overview of Kali Linux. Topics included how to download your desired version of Kali Linux, ways to perform the installation, and a brief overview of the toolsets available. The next article will cover how to perform Reconnaissance on a target. This is the first and most critical step in delivering Penetration Testing services.

Further resources on this subject:

- BackTrack 4: Security with Penetration Testing Methodology [Article]
- CISSP: Vulnerability and Penetration Testing for Access Control [Article]
- Making a Complete yet Small Linux Distribution [Article]

Mobile and Social - the Threats You Should Know About

Packt
17 Sep 2013
8 min read
A prediction of the future (and the lottery numbers for next week) scams

Security threats, such as malware, are starting to manifest on mobile devices, as we are learning that mobile devices are not immune to viruses, malware, and other attacks. As PCs are increasingly being replaced by mobile devices, the incidence of new attacks against mobile devices is growing. The user has to take precautions to protect their mobile devices just as they would protect their PC.

One major type of mobile cybercrime is the unsolicited text message that captures personal details. Another type of cybercrime involves an infected phone that sends out SMS messages that result in excess connectivity charges. Mobile threats are on the rise according to the Symantec Report of 2012; 31 percent of all mobile users have received an SMS from someone they didn't know. An example is where the user receives an SMS message that includes a link or phone number. This technique is used to install malware onto your mobile device. These techniques are also an attempt to trick you into disclosing personal or private data. In 2012, Symantec released a new cybercrime report. They concluded that countries like Russia, China, and South Africa have the highest cybercrime incidence, with exploitation rates ranging from 80 to 92 percent. You can find this report at http://now-static.norton.com/now/en/pu/images/Promotions/2012/cybercrimeReport/2012_Norton_Cybercrime_Report_Master_FINAL_050912.pdf.

Malware

The most common type of threat is known as malware, which is short for malicious software. Malware is used or created by attackers to disrupt many types of computer operations, collect sensitive information, or gain access to a private mobile device or computer. It includes worms, Trojan horses, computer viruses, spyware, keyloggers, rootkits, and other malicious programs. As mobile malware is increasing at a rapid pace, the U.S. government wants users to be aware of the dangers, so in October 2012 the FBI issued a warning about mobile malware (http://www.fbi.gov/sandiego/press-releases/2012/smartphone-users-should-be-aware-of-malware-targeting-mobile-devices-and-the-safety-measures-to-help-avoid-compromise).

The IC3 has been made aware of various malware attacking the Android operating system on mobile devices. Some of the latest known versions of this type of malware are Loozfon and FinFisher. Loozfon hooks its victims by emailing the user promising links, such as a profitable payday just for sending out email. It then plants itself onto the phone when the user clicks on the link. This specific malware will attach itself to the device and start to collect information from it, including:

- Contact information
- E-mail addresses
- Phone numbers
- The phone number of the compromised device

On the other hand, a spyware called FinFisher can take over various components of a smartphone. According to the IC3, this malware infects the device through a text message and via a phony e-mail link. FinFisher attacks not only Android devices, but also devices running BlackBerry, iOS, and Windows. Various security reports have shown that mobile malware is on the rise. Cyber criminals tend to target Android mobile devices. As a result, Android users are seeing an increasing number of destructive Trojans, mobile botnets, SMS-sending malware, and spyware.
Some of these reports include:

- http://www.symantec.com/security_response/publications/threatreport.jsp
- http://pewinternet.org/Commentary/2012/February/Pew-Internet-Mobile.aspx
- https://www.lookout.com/resources/reports/mobile-threat-report

As stated recently in a Pew survey, more than fifty percent of U.S. mobile users are suspicious of or concerned about the use of their personal information, and have either refused to install apps for this reason or have uninstalled apps. In other words, as the IC3 says: use the same precautions on your mobile devices as you would on your computer when using the Internet.

Toll fraud

Since the 1970s and 1980s, hackers have been using a process known as phreaking. This trick provides a tone that tells the phone that a control mechanism is being used to manage long-distance calls. Today, hackers use a technique known as toll fraud: malware that sends premium-rate SMS messages from your device, incurring charges on your phone bill. Some toll fraud malware may trick you into agreeing to murky Terms of Service, while others can send premium text messages without any noticeable indicators. This is also known as premium-rate SMS malware or a premium service abuser. The following figure from Lookout Mobile Security portrays how toll fraud works.

According to VentureBeat, malware developers are after money, and the money is in toll fraud malware. Here is an example from http://venturebeat.com/2012/09/06/toll-fraud-lookout-mobile/. Remember commercials that say, "Text 666666 to get a new ringtone every day!"? The normal process includes the following steps:

1. The customer texts the number, alerting a collector, working for the ringtone provider, that he or she wants to order daily ringtones.
2. Through the collector, the ringtone provider sends a confirmation text message to the customer (or sometimes two, depending on that country's regulations).
3. The customer approves the charges and starts getting ringtones.
4. The customer is billed through the wireless carrier.
5. The wireless carrier receives payment and sends the ringtone payment to the provider.

Now, let's look at the steps when your device is infected with the malware known as FakeInst:

1. The end user downloads a malware application that sends out an SMS message to that same ringtone provider.
2. As normal, the ringtone provider sends the confirmation message. In this case, instead of reaching the smartphone owner, the malware blocks this message and sends a fake confirmation message before the user ever knows.
3. The malware now places itself between the wireless carrier and the ringtone provider.
4. Pretending to be the collector, the malware extracts the money that was paid through the user's bill.

FakeInst is known to get around antivirus software by identifying itself as new or unique software. Overall, Android devices are known to be impacted by malware more than iOS. One big reason for this is that Android devices can download applications from almost any location on the Internet, whereas Apple limits its users to downloading applications from the Apple App Store.

SMS spoofing

The third most common type of scam is called SMS spoofing. SMS spoofing allows a person to change the original mobile phone number or the name (sender ID) that a text message appears to come from. It is a fairly new technology that uses SMS on mobile phones. Spoofing can be used in both lawful and unlawful ways. Impersonating a company, another person, or a product is an illegal use of spoofing.
Some nations have banned SMS spoofing due to concerns about the potential for fraud and abuse, while others may allow it. An example of how SMS spoofing is implemented is as follows: SMS spoofing occurs when the message sender's address information has been manipulated. This is often done to impersonate a cell phone user who is roaming on a foreign network and sending messages to a home area network. Often, these messages are addressed to users who are outside the home network, which is essentially being "hijacked" to send messages to other networks.

The impacts of this activity include the following:

- The customer's network can receive termination charges caused by the valid delivery of these "bad" messages to interlink partners.
- Customers may complain about being spammed, or their message content may be sensitive.
- Interlink partners can cancel the home network unless a correction of these errors is implemented. Once this is done, the phone service may be unable to send messages to these networks.
- There is a great risk that these messages will look like real messages, and real users can be billed for invalid roaming messages that they did not send.

There is a flaw within the iPhone that allows SMS spoofing. It is vulnerable to text message spoofing, even with the latest beta version, iOS 6. The problem with the iPhone is that when the sender specifies a reply-to number this way, the recipient doesn't see the original phone number in the text message. That means there's no way to know whether a text message has been spoofed or not. This opens up the user to other spoofing types of manipulation where the recipient thinks he or she is receiving a message from a trusted source. According to pod2g (http://www.pod2g.org/2012/08/never-trust-sms-ios-text-spoofing.html): "In a good implementation of this feature, the receiver would see the original phone number and the reply-to one. On iPhone, when you see the message, it seems to come from the reply-to number, and you loose track of the origin."

vCloud Networks

Packt
13 Sep 2013
14 min read
Basics

Network virtualization is what makes vCloud Director such an awesome tool. However, before we go all out in the next article, we need to set up the network virtualization, and that is what we will be focusing on here. When we talk about isolated networks, we are talking about vCloud Director making use of different methods of network Layer 3 encapsulation (OSI/ISO model). Basically, it is the same concept as was introduced with VLANs. VLANs split up the network communication in physical network cables into different, totally isolated communication streams. vCloud makes use of these isolated networks to create isolated Org and vApp networks.

vCloud Director has three different network items:

- An external network is a network that exists outside the vCloud, for example, a production network. It is basically a PortGroup in vSphere that is used in vCloud to connect to the outside world. An External Network can be connected to multiple Organization Networks. External Networks are not virtualized and are based on existing PortGroups on a vSwitch or Distributed vSwitch.
- An organization network (Org Net) is a network that exists only inside one organization. You can have multiple Org Nets in an organization. Organization Networks come in three different shapes:
  - Isolated: An isolated Org Net exists only in this organization and is not connected to an external network; however, it can be connected to vApp networks or VMs. This network type uses network virtualization and its own network settings.
  - Routed Network (Edge Gateway): An Org Network that connects to an existing Edge device. An Edge Gateway allows defining firewall and NAT rules, as well as VPN connections and load balancing functionality. Routed gateways connect external networks to vApp networks and/or VMs. This network type uses virtualized networks and its own network settings.
  - Directly connected: These Org Nets are an extension of an external network into the organization. They directly connect external networks to the vApp networks or VMs. These networks do NOT use network virtualization, and they make use of the network settings of an External Network.
- A vApp network is a virtualized network that only exists inside a vApp. You can have multiple vApp networks inside one vApp. A vApp network can connect to VMs and to Org networks. It has its own network settings. When connecting a vApp network to an Org network, you can create a router between the vApp and the Org network that lets you define DHCP, firewall, and NAT rules as well as static routing.

To create isolated networks, vCloud Director uses network pools. Network pools are collections of VLANs, PortGroups, and networks that use L2-in-L3 encapsulation. The contents of these pools can be used by Org and vApp networks for network virtualization.

Network Pools

There are four kinds of network pools that can be created:

- VXLAN: VXLAN networks are Layer 2 networks that are encapsulated in Layer 3 packets. VMware calls this Software Defined Networking (SDN). VXLANs are automatically created by vCD; however, they don't work out of the box and require some extra configuration in vCloud Network and Security (see later).
- Network isolation-backed: These are basically the same as VXLANs; however, they work out of the box and use MAC-in-MAC encapsulation. The difference is that VXLANs can transcend routers, while network isolation-backed networks can't.
- vSphere PortGroup-backed: vCD will use pre-created PortGroups to build the vApp or organization networks. You need to pre-provision one PortGroup for every vApp/Org network you would like to use.
You need to pre-provision one PortGroup for every vApp/Org network you would like to use.

VLAN-backed: vCD will use a pool of VLAN numbers to automatically provision PortGroups on demand; however, you still need to configure the VLAN trunking. You will need to reserve one VLAN for every vApp/Org network you would like to use.

VXLANs and network isolation-backed networks solve the problem of pre-provisioning and reserving a multitude of VLANs, which makes them extremely important. However, using PortGroup or VLAN network pools can have additional benefits that we will explore later.

Types of vCloud network

vCloud Director has basically three different network items. An external network is basically a PortGroup in vSphere that is imported into vCloud. An Org network is an isolated network that exists only in an organization. The same is true for a vApp network; it exists only in a vApp. In the picture above you can see all possible connections. Let's play through the scenarios and see how one can use them.

Isolated vApp network

An isolated vApp network exists only inside a vApp. It is useful if one needs to test how VMs behave in a network, or to test using an IP range that is already in use (for example, production). The downside is that it is isolated, meaning it is hard to get information or software in and out. Have a look at the recipe for RDP (or SSH) forwarding into an isolated vApp to find some answers to this problem.

VMs directly connected to an external network

VMs inside a vApp are connected to a directly connected Org Net, meaning they will be able to get IPs from the external network pool. Typically, these VMs are used for production, meaning that customers choose vCloud for fast provisioning of predefined templates. As vCloud manages the IPs for a given IP range, it can be quite easy to fast-provision a VM.

vApp network connected via a vApp router to an external network

VMs are connected to a vApp network that has a vApp router defined as its gateway. The gateway connects to a directly connected Org Net, meaning that the gateway will automatically be given an IP from the external network pool. These configurations come in handy to reduce the amount of "physical" networking that has to be done. The vApp router can act as a router with defined firewall rules, and it can do SNAT and DNAT as well as define static routing. So instead of using up a "physical" VLAN or subnet, one can hide away applications this way. As an added benefit, these applications can be used as templates for fast deployment.

VMs directly connected to an isolated Org Net

VMs are connected directly to an isolated Org Net. Connecting VMs directly to an isolated network normally only makes sense if there is more than one vApp/VM connected to the Org Net. This is used as an extension of the isolated vApp concept. Suppose you repeatedly need to test complex applications that require certain infrastructure such as Active Directory, DHCP, DNS, database, or Exchange servers. Instead of deploying large isolated vApps that contain these, you could deploy them in one vApp and connect them via an isolated Org Net directly to the vApp that contains your testing VMs. This makes it possible to reuse this base infrastructure. By using sharing, you can even hide the infrastructure vApp from your users.

vApp connected via a vApp router to an isolated Org Net

VMs are connected to a vApp network that has a vApp router as its gateway. The vApp router automatically gets its IP from the Org Net pool. This is basically a variant of the previous idea.
A test vApp or an infrastructure vApp can be packaged this way and made ready for fast deployment.

VMs connected directly to an Edge

VMs are directly connected to the Edge Org Net and get their IP from the Org Net pool. Their gateway is the Edge device that connects them to the external networks through the Edge firewall. A very typical setup is using the Edge load-balancing feature to load balance VMs out of a vApp via the Edge. Another one is securing the organization with the Edge Gateway against other organizations that use the same external network. This is mostly the case if the external network is the Internet and each organization is an external customer.

vApp connected to an Edge via a vApp router

VMs are connected to a vApp network that has the vApp router as its gateway. The vApp router will automatically get an IP from the Org Net, which has the Edge as its gateway. This is a more complicated variant of the previous scenario, allowing customers to package their VMs, secure them against other vApps or VMs, or subdivide their allocated networks.

IP Management

Let's have a look at IP management with vCloud. vCloud knows three different settings for the IP management of VMs:

DHCP: You need to provide a DHCP server; vCloud doesn't automatically create one. However, a vApp router or an Edge can create one.

Static – IP Pool: The IP for the VM comes from the static IP pool of the network it is connected to. In addition, the DNS and domain suffix will be written to the VM.

Static – Manual: The IP can be defined on the spot; however, it must be in the network defined by the gateway and the network mask of the network the VM is connected to. In addition, the DNS and domain suffix will be written to the VM.

All these settings require Guest Customization to be effective. If no Guest Customization is selected, they don't work, and whatever the VM was configured with as a template will be used.
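The Static – Manual rule above (the address must lie in the network defined by the gateway and the network mask) is easy to sanity-check before typing an address into vCloud. The following is a small illustrative Python sketch, not part of vCloud Director itself; the gateway, netmask, and candidate addresses are made-up example values.

import ipaddress

def ip_fits_network(candidate, gateway, netmask):
    # Build the subnet from the gateway and netmask, then test membership
    network = ipaddress.ip_network(gateway + "/" + netmask, strict=False)
    return ipaddress.ip_address(candidate) in network

# Example values only (not taken from a real Org network definition)
print(ip_fits_network("192.168.10.25", "192.168.10.1", "255.255.255.0"))  # True
print(ip_fits_network("192.168.20.25", "192.168.10.1", "255.255.255.0"))  # False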
vSphere and vCloud vApps

One thing that needs to be said about vApps is that they actually come in two completely different versions: the vCenter vApp and the vCloud vApp. The vSphere vApp concept was introduced in vSphere 4.0 as a container for VMs. In vSphere, a vApp is essentially a resource pool with some extras, such as a starting and stopping order and (if you configure it) a network IP allocation method. The idea is to have an entity of VMs that forms one unit. Such a vApp can then be exported or imported using the OVF format. A very good example of a vApp is VMware Operations Manager. It comes as a vApp in OVF format and contains not only the VMs but also the start-up sequence as well as some setup scripts. When the vApp is deployed for the first time, additional information such as network settings is requested and then implemented. As a vSphere vApp is a resource pool, it can be configured so that it only demands the resources it is using; on the other hand, resource pool configuration is something that most people struggle with. A vSphere vApp is ONLY a resource pool; it is not automatically a folder in the Folder and Template view of vSphere, but is shown there again as a vApp.

The vCloud vApp is a very different concept; first of all, it is not a resource pool. The VMs of a vCloud vApp live in the OvDC resource pool. However, the vCloud vApp is automatically a folder in the Folder and Template view of vSphere. It is a construct created by vCloud; it consists of VMs, a start and stop sequence, and networks. The network part is one of the major differences (next to the resource pool). Whereas in vSphere only network information, such as how IPs get assigned and settings like gateway and DNS, is given to the vApp, a vCloud vApp actually encapsulates networks. The vCloud vApp networks are full networks, meaning they contain the full information for a given network, including network settings and IP pools. For more details, see the last article. This information is kept when importing and exporting vCloud vApps. When I'm talking about vApps in this book, I will always mean vCloud vApps; if the vCenter vApp is meant, it will be written as vCenter vApp.

Datastores, profiles, and clusters

I probably don't have to explain what a datastore is, but here is a short intro just in case. A datastore is a VMware object that exists in ESXi. This object can be a hard disk that is attached to an ESXi server, an NFS or iSCSI mount on an ESXi host, or a Fibre Channel disk that is attached to an HBA on the ESXi server. A storage profile is a container that holds one or more datastores. A storage profile doesn't have any intelligence implemented; it just groups the storage. However, it is extremely beneficial in vCloud: if you run out of storage on a datastore, you can just add another datastore to the same storage profile and you're back in business. Datastore clusters are again containers for datastores, but now there is intelligence included. A datastore cluster can use Storage DRS, which allows VMs to automatically use Storage vMotion to move from one datastore to another if the I/O latency is high or the free storage is low. Depending on your storage backend, this can be extremely useful. vCloud Director doesn't know the difference between a storage profile and a datastore cluster. If you add a datastore cluster, vCloud will pick it up as a storage profile, but that's okay because it's not a problem at all. Be aware that storage profiles are part of vSphere Enterprise Plus licensing. If you don't have Enterprise Plus, you won't get storage profiles, and the only thing you can do in vCloud is use the storage profile ANY, which doesn't contribute much to productivity.

Thin provisioning

Thin provisioning means that the file that contains the virtual hard disk (.vmdk) is only as big as the amount of data written to the virtual hard disk. As an example, if you have a 40 GB hard disk attached to a Windows VM and have just installed Windows on it, you are using around 2 GB of the 40 GB disk. When using thin provisioning, only 2 GB will be written to the datastore, not 40 GB. If you don't use thin provisioning, the .vmdk file will be 40 GB big. If your storage vendor's Storage APIs are integrated into your ESXi servers, thin provisioning may be offloaded to your storage backend, making it even faster.

Fast provisioning

Fast provisioning is similar to the linked clones that you may know from Lab Manager or VMware View. However, in vCloud they are a bit more intelligent than in the other products: in the other products, linked clones can NOT be deployed across different datastores, but in vCloud they can. Let's talk about how linked clones work. If you have a VM with a hard disk of 40 GB and you clone that VM, you would normally have to spend another 40 GB (not using thin provisioning). Using linked clones, you will not need another 40 GB but much less. What happens, in layman's terms, is that vCloud creates two snapshots of the original VM's hard disk. A snapshot contains only the differences between the original and the snapshot.
The original hard disk (.vmdk file) is set to read-only, and the first snapshot is connected to the original VM so that one can still work with the original VM. The second snapshot is used to create the new VM. Using snapshots makes deploying a VM with fast provisioning not only fast but also saves a lot of disk space. The problem with this is that a snapshot must be on the same datastore as its source, so if you have a VM on one datastore, its linked clone cannot be on another. vCloud has solved that problem by deploying a shadow VM. When you deploy a VM with fast provisioning onto a different datastore than its source, vCloud creates a full clone (a normal full copy) of the VM onto the new datastore and then creates a linked clone from that shadow VM. If your storage vendor's Storage APIs are integrated into your ESXi servers, fast provisioning may be offloaded to your storage backend, making it faster. See also the recipe "Making NFS-based datastores faster".

Summary

In this article, we looked at vCloud networks, vSphere and vCloud vApps, and datastores, profiles, and clusters.

Resources for Article:

Further resources on this subject:

Windows 8 with VMware View [Article]

VMware View 5 Desktop Virtualization [Article]

Cloning and Snapshots in VMware Workstation [Article]


Features of CloudFlare

Packt
10 Sep 2013
5 min read
(For more resources related to this topic, see here.)

Top 5 features you need to know about

Here we will go over the various security, performance, and monitoring features CloudFlare has to offer.

Malicious traffic

Any website is susceptible to attacks from malicious traffic. Some attacks might try to take down a targeted website, while others may try to inject their own spam. Worse attacks might even try to trick your users into providing information or compromise user accounts. CloudFlare has tools available to mitigate various types of attacks.

Distributed denial of service

A common attack on the Internet is the distributed denial-of-service (DDoS) attack. A denial-of-service attack involves producing so many requests for a service that it cannot fulfill them and crumbles under the load. A common way this is done in practice is by having the attacker make a server request but never listen for the response. Typically, the client acknowledges to the server that it received data, but if a client does not acknowledge, the server will keep trying for quite a while. A single client could send thousands of these requests per second, and the server would not be able to handle many at once. Another twist to these attacks is the distributed denial-of-service attack, which is spread across many machines, making it difficult to tell where the attacks are coming from. CloudFlare can help with this because it can detect when users are attempting an attack and reject access, or require a captcha challenge to gain access. It also monitors all of its customers for this, so if there is an attack happening on another CloudFlare site, it can protect yours from the traffic attacking that site as well. It is a difficult problem to solve: sometimes traffic just spikes when big news articles are run, and it is hard to tell when it's legitimate traffic and when it is an attack. For this, CloudFlare offers multiple levels of DoS protection. On the CloudFlare settings, the Security tab is where you can configure this advanced protection. The basic settings are rolled into the Basic protection level setting.

SQL injection

SQL injection is a more involved attack. On a web page, you may have a field like a username/password field. That field will probably be checked against a database for validity. The database queries that do this are simple text strings. This means that if the query is written in a way that doesn't explicitly prevent it, an attacker can start writing their own queries. A site that is not equipped to handle these cases would be susceptible to hackers destroying data, gaining access by pretending to be other users, or accessing data they otherwise would not have access to. It is a difficult problem to check against when building software; even big companies have had issues. CloudFlare mitigates this by looking for requests containing things that look like database queries. Almost no websites take in raw database commands as normal queries. This means that CloudFlare can search for suspicious traffic and prevent it from accessing your page.
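To see why a raw database query inside a web request is suspicious, the following small Python sketch (using the built-in sqlite3 module; the table and credentials are invented for illustration) shows how concatenating user input into SQL lets a crafted value rewrite the query, while a parameterized query treats the same value as plain data. This is exactly the class of input that a filter such as CloudFlare's tries to spot.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

# Attacker-controlled value: closes the string and appends an always-true condition
username = "alice"
password = "' OR '1'='1"

# Vulnerable: the input becomes part of the SQL text itself
query = ("SELECT * FROM users WHERE username = '" + username +
         "' AND password = '" + password + "'")
print(conn.execute(query).fetchall())   # returns alice's row despite the wrong password

# Safer: placeholders keep the input as data, so the trick fails
safe = "SELECT * FROM users WHERE username = ? AND password = ?"
print(conn.execute(safe, (username, password)).fetchall())  # returns []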
Cross-site scripting

Cross-site scripting is similar to SQL injection except that it deals with JavaScript rather than database SQL. If you have a site that accepts comments, for example, an unprotected site might allow a hacker to put their own JavaScript on it. Any other user of the site could then execute that JavaScript, which could do things like sniff for passwords or even credit card information. CloudFlare prevents this in a similar fashion, by looking for requests that contain JavaScript and blocking them.

Open ports

Often, services on a server can be reachable without the sysadmin knowing about it. If Telnet is allowed, for example, an attacker could simply log in to the system and start checking out source code, looking into the database, or taking down the website. CloudFlare acts as a firewall to ensure that such ports are blocked even if the server has them open.

Challenge page

When CloudFlare receives a request from a suspect user, it will usually show a challenge page asking the user to fill out a captcha to access the site. The options for customizing these settings are on the Security Settings tab. You can also configure how that page looks by clicking on Customize. By default, it will look something like the following:

E-mail address obfuscation

E-mail address obfuscation scrambles any e-mail addresses on your page, then runs some JavaScript to decode them so that the text ends up being readable. This is nice for keeping spam out of your users' inboxes, but the downside is that if a user has JavaScript disabled, they will not be able to read the e-mail addresses.

Summary

In this article, we have looked at the various security features provided by CloudFlare, such as protection against malicious traffic and distributed denial of service, and e-mail address obfuscation. CloudFlare is therefore one of the better options available today for protecting and speeding up a website.

Resources for Article:

Further resources on this subject:

Getting Started with RapidWeaver [Article]

LESS CSS Preprocessor [Article]

Translations in Drupal 6 [Article]


Understanding the big picture

Packt
04 Sep 2013
7 min read
(For more resources related to this topic, see here.)

So we've got this thing for authentication and authorization. Let's see who is responsible and what for.

There is an AccessDecisionManager, which, as the name suggests, is responsible for deciding whether we can access something or not; if not, an AccessDeniedException or InsufficientAuthenticationException is thrown. AuthenticationManager is another crucial interface; it is responsible for confirming who we are. Both are just interfaces, so we can swap in our own implementations if we like.

In a web application, the job of talking with these two components and the user is handled by a web filter called DelegatingFilterProxy, which is decomposed into several small filters. Each one is responsible for a different thing, so we can turn them on or off, or put our own filters in between and mess with them any way we like. These filters are quite important, and we will dig into them later. For the big picture, all we need to know is that they take care of all the talking, redirect the user to the login page (or an access-denied page), and save the current user details in an HTTPSession. Well, the last part, while true, is a bit misleading: user details are kept in a SecurityContext object, which we can get hold of by calling SecurityContextHolder.getContext(), and which in the end is stored in the HTTPSession by our filters. But we had promised a big picture, not the gory details, so here it is:

Quite simple, right? If we have an authentication protocol without a login and password, it works in a similar way; we just switch one of the filters, or the authentication manager, to a different implementation. If we don't have a web application, we just need to do the talking ourselves.

But this is all for web resources (URLs). What is much more interesting and useful is securing calls to methods. It looks, for example, like this:

@PreAuthorize("isAuthenticated() and hasRole('ROLE_ADMIN')")
public void somethingOnlyAdminCanDo() {}

Here, we decided that somethingOnlyAdminCanDo will be protected by our AccessDecisionManager and that the user must be authenticated (not anonymous) and has to have an admin role. Can a user be anonymous and have an admin role at the same time? In theory, yes, but it would not make any sense, so there is a bit of optimization here: it is much cheaper to check whether the user is authenticated and stop right there. We could drop the isAuthenticated() call and the behavior wouldn't change.

We can put this kind of annotation on any Java method, but our configuration and the mechanism that fires up the security will depend on the type of objects we are trying to protect. For objects declared as Spring beans (which is a short name for anything defined in our Inversion of Control (IoC) configuration, either via XML or annotations), we don't need to do much. Spring will just create proxies (dynamic classes) that take over calls to our secured methods and fire up the AccessDecisionManager before passing the call to the object we really wanted to call. For objects outside of the IoC container (anything created with new, or just code not defined in the Spring context), we can use the power of Aspect Oriented Programming (AOP) to get the same effect. If you don't know what AOP is, don't worry; it's just a bit of magic at the classloader and bytecode level. For now, the only important thing is that it works in basically the same way. This is depicted as follows:

We can do much more than this, as we'll see next, but these are the basics.
So, how does the AccessDecisionManager decide whether we can access something or not? Imagine a council of very old Jedi masters sitting around a fire. They decide whether or not you are permitted to call a secured method or access a web resource. Each of these masters makes a decision or abstains. Each of them can consult additional information (not only who you are and what you want to do, but every aspect of the situation). In Spring Security, those smart people are called AccessDecisionVoters, and each of them has one vote.

The council can be organized in many different ways, but it speaks with one voice. It may make the decision based on a majority of votes. It may be veto-based, where everything is allowed unless someone disagrees. Or it may need everyone to agree to grant access, otherwise access is denied. The council is the AccessDecisionManager, and we have the three implementations previously mentioned out of the box. We can also decide who's in the council and who is not. This is probably the most important decision we can make, because it determines the security model that we will use in our application. Let's talk about the most popular counselors (implementations of AccessDecisionVoter).

Model based on roles (RoleVoter): This guy makes his decision based on the roles of the user and the role required for the resource/method. So if we write @PreAuthorize("hasRole('ROLE_ADMIN')"), you had better be a damn admin or you'll get a no-no from this guy.

Model based on entity access control permissions (AclEntryVoter): This guy doesn't worry about roles; he is much more than that. Acl stands for Access Control List, which represents a list of permissions. Every user has a list of permissions, possibly for every domain object (usually an object in the database) that you want to secure. So, for example, if we have a bank application, the supervisor can give Frank access to a single specific customer (say, ACME—A Company that Makes Everything), which is represented as an entity in the database and as an object in our system. No other employee will be able to do anything with that customer unless the supervisor grants that person the same permission as Frank. This is probably the most scrutinous voter we could ever use; our customer can have a very detailed configuration with it. On the other hand, this is also the most cumbersome, as we need to create a usable graphical interface to set permissions for every user and every domain object. While we have done this a few times, most of our customers wanted a simpler approach, and even those who started with a graphical user interface to configure everything asked for a simplified version based on business rules at the end of the project. If your customer describes his security needs in terms of rules such as "Frank can edit every customer he has created, but he cannot do anything other than view other customers", it means it's time for PreInvocationAuthorizationAdviceVoter.

Business rules model (PreInvocationAuthorizationAdviceVoter): This is usually used when you want to implement static business rules in the application. These go like "if I've written a blog post, I can change it later, but others can only comment" and "if a friend asked me to help him write the blog post, I can do that, because I'm his friend". Most of these things are also possible to implement with ACLs, but that would be very cumbersome. This is our favorite voter.
With it, it's very easy to write, test, and change the security restrictions, because instead of writing every possible relation in the database (as with the ACL voter) or having only dumb roles, we write our security logic in plain old Java classes. Great stuff and most useful, once you see how it works.

Did we mention that this is a council? Yes we did. The result is that we can mix any voters we want and choose any council organization we like. We can have all three voters previously mentioned and allow access if any of them says "yes". There are even more voters, and we can write new ones ourselves. Do you feel the power of the Jedi council already?

Summary

This section provided an overview of authentication and authorization, which are the principles of Spring Security.

Resources for Article:

Further resources on this subject:

Migration to Spring Security 3 [Article]

Getting Started with Spring Security [Article]

So, what is Spring for Android? [Article]


Quick start – Using Burp Proxy

Packt
03 Sep 2013
11 min read
(For more resources related to this topic, see here.)

At the top of Burp Proxy, you will notice the following three tabs:

intercept: HTTP requests and responses that are in transit can be inspected and modified from this window

options: Proxy configurations and advanced preferences can be tuned from this window

history: All intercepted traffic can be quickly analyzed from this window

If you are not familiar with the HTTP protocol or you want to refresh your knowledge, HTTP Made Really Easy, A Practical Guide to Writing Clients and Servers, found at http://www.jmarshall.com/easy/http/, is a compact reference.

Step 1 – Intercepting web requests

After firing up Burp and configuring the browser, let's intercept our first HTTP request. During this exercise, we will intercept a simple request to the publisher's website:

In the intercept tab, make sure that Burp Proxy is properly stopping all requests in transit by checking the intercept button. This should be marked as intercept is on.

In the browser, type http://www.packtpub.com/ in the URL bar and press Enter.

Back in Burp Proxy, you should be able to see the HTTP request made by the browser. At this stage, the request is temporarily stopped in Burp Proxy, waiting for the user to either forward or drop it. For now, press forward and return to the browser. You should see the home page of Packt Publishing as you would when normally interacting with the website.

Again, type http://www.packtpub.com/ in the URL bar and press Enter. Let's press drop this time. Back in the browser, the page will contain the warning Burp proxy error: message was dropped by user. We have dropped the request, so Burp Proxy did not forward it to the server. As a result, the browser received a temporary HTML page with the warning message generated by Burp instead of the original HTML content.

Let's try one more time. Type http://www.packtpub.com/ in the URL bar of the browser and press Enter. Once the request is properly captured by Burp Proxy, the action button becomes active. Click on it to display the contextual menu. This is an important piece of functionality as it allows you to import the current web request into any of the other Burp tools. You can already imagine the potential of having a set of integrated tools that allow you to manipulate and analyze web requests so easily. For example, if we want to decode the request, we can simply click on send to decoder.

In Burp Proxy, we can also decide to automatically forward all requests without waiting for the user to either forward or drop the communication. By clicking on the intercept button, it is possible to switch from intercept is on to intercept is off. Nevertheless, the proxy will record all requests in transit. Also, Burp Proxy allows you to automatically intercept all responses matching specific characteristics. Take a look at the numerous options available in the intercept server response section within the Burp Proxy options tab. For example, it is possible to intercept the server's response only if the client's request was intercepted. This is extremely helpful while testing input validation vulnerabilities, as we are generally interested in evaluating the server's responses for all tampered requests. Or you may only want to intercept and inspect responses having a specific return code (for example, 200 OK).
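Traffic from scripts and command-line tools can be pushed through Burp Proxy in exactly the same way as browser traffic, which is a convenient way to populate the history without clicking around. The following is a minimal Python sketch using the third-party requests library; it assumes Burp's listener is on the default 127.0.0.1:8080, and verify=False is only there to avoid certificate errors caused by Burp's generated certificate when HTTPS is intercepted.

import requests

# Burp Proxy's default listener address is assumed here
burp = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# Only send traffic to sites you are authorized to test
response = requests.get("http://www.packtpub.com/",
                        proxies=burp, verify=False, timeout=10)
print(response.status_code, len(response.content))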
Step 2 – Inspecting web requests

Once a request is properly intercepted, it is possible to inspect its entire content, headers, and parameters using one of the four Burp Proxy message analysis tabs:

raw: This view displays the web request in raw format within a simple text editor. This is a very handy visualization as it enables maximum flexibility for further changing the content.

params: In this view, the focus is on user-supplied parameters (GET/POST parameters, cookies). This is particularly important in the case of complex requests as it allows you to consider all entry points for potential vulnerabilities. Whenever applicable, Burp Proxy will also automatically perform URL decoding. In addition, Burp Proxy will attempt to parse commonly used formats, including JSON.

headers: Similarly, this view displays the HTTP header names and values in tabular form.

hex: In the case of binary content, it is useful to inspect the hexadecimal representation of the resource. This view displays a request as in a traditional hex editor.

The history tab enables you to analyze all web requests that have transited through the proxy:

Click on the history tab. At the top, Burp Proxy shows all the requests in the bundle. At the bottom, it displays the content of the request and response corresponding to the specific selection. If you have previously modified the request, Burp Proxy history will also display the modified version (Displaying HTTP requests and responses intercepted by Burp Proxy).

By double-clicking on one of the requests, Burp will automatically open a new window with the specific content. From this window, it is possible to browse all the captured communication using the previous and next buttons.

Back in the history tab, Burp Proxy displays several details for each item, including the request method, URL, response code, and length. Each request is uniquely identified by a number, visible in the left-hand column. Click on the request identifier. Burp Proxy allows you to set a color for that specific item. This is extremely helpful for highlighting important requests or responses. For example, during the initial application enumeration, you may notice an interesting request; you can mark it and get back to it later for further testing. Burp Proxy history is also useful when you have to evaluate a sequence of requests in order to reproduce a specific application behavior.

Click on the display filter at the top of the history list to hide irrelevant content. If you want to analyze all HTTP requests containing at least one parameter, select the show only parameterised checkbox. If you want to display requests having a specific response, just select the appropriate response code in the filter by status code selection. At this point, you may have already understood the potential of the tool to filter and reveal interesting traffic. In addition, when using Burp Suite Professional, you can also use the filter by search term option. This feature is particularly important when you need to analyze hundreds of requests or responses, as you can filter relevant traffic by using regular expressions or simply matching particular strings. Using this feature, you may also be able to discover sensitive information (for example, credentials) embedded in the intercepted pages.

Step 3 – Tampering web requests

As part of a typical security assessment, you will need to modify HTTP requests and analyze the web application's responses.
For example, to identify SQL injection vulnerabilities, it is important to inject common attack vectors (for example, a single quote) into all user-supplied input, including HTTP headers, cookies, and GET/POST parameters. If you want to refresh your knowledge of common web application vulnerabilities, the OWASP Top Ten Project page at https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project is a good starting point.

Tampering web requests with Burp is as easy as editing strings in a text editor:

Intercept a request containing at least one HTTP parameter. For example, you can point your browser to http://www.packtpub.com/books/all?keys=ASP.

Go to Burp Proxy | Intercept. At this point, you should see the corresponding HTTP request.

From the raw view, you can simply edit any aspect of the web request in transit. For example, you can change the value of the GET parameter keys from ASP to PHP. Edit the request to look like the following:

GET /books/all?keys=PHP HTTP/1.1
Host: www.packtpub.com
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Proxy-Connection: keep-alive

Click on forward and get back to the browser. This should result in a search query performed with the string PHP. You can verify it by simply checking the results in the HTML page.

Although we have used the raw view to change the previous HTTP request, it is actually possible to use any of the Burp Proxy views. For example, in the params view, it is possible to add a new parameter by following these steps:

Click on new (right side) in the Burp Proxy params view.

Select the proper parameter type (URL, body, or cookie). URL should be used for GET parameters, whereas body denotes POST parameters.

Type the name and the value of the newly created parameter.

Advanced features

After practicing with the basic features provided by Burp Proxy, you are almost ready to experiment with more advanced configurations.

Match and replace

Let's imagine that you are testing an application designed for mobile devices using a standard browser on your computer. In most cases, the web server examines the user-agent provided by the browser to identify the specific platform and respond with customized resources that better fit mobile phones and tablets. Under these circumstances, you will find the match and replace function provided by Burp Proxy very useful. Let's configure Burp Proxy to tamper with the User-Agent HTTP header field:

In the options tab of Burp Proxy, scroll down to the match and replace section. Under the match and replace table, a drop-down list and two text fields allow you to create a customized rule.

Select request header from the drop-down list, since we want to create a match condition pertaining to HTTP requests.

Type ^User-Agent.*$ in the first text field. This field represents the match within the HTTP request. Burp Proxy's match and replace feature allows you to use simple strings as well as complex regular expressions. If you are not familiar with regular expressions, have a look at http://www.regular-expressions.info/quickstart.html.

In the second text field, type Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1C25 Safari/419.3 or any other fake user-agent that you want to impersonate.
Click add and verify that the new match has been added to the list (Burp Proxy match and replace list).

Intercept a request, let it pass through the proxy, and verify that it has been automatically modified by the tool (Automatically modified HTTP header in Burp Proxy).

HTML modification

Another interesting feature of Burp Proxy is automatic HTML modification, which can be activated and configured in the appropriate section within Burp Proxy | options. By using this function, you can automatically remove JavaScript or modify the HTML forms of all received HTTP responses. Some applications deploy client-side validation in the form of disabled HTML form fields or JavaScript code. If you want to verify the presence of server-side controls that enforce specific data formats, you need to tamper with the request using invalid data. In these situations, you can either manually tamper with the request in the proxy or enable HTML modification to remove any client-side validation and use the browser to submit invalid data. This function can also be used to display hidden form fields. Let's see in practice how you can activate this feature:

In Burp Proxy, go to options and scroll down to the HTML modification section. Numerous options are available in this section: unhide hidden form fields to display hidden HTML form fields, enable disabled form fields to submit all input fields present inside the HTML page, remove input field length limits to allow extra-long strings in the text fields, remove JavaScript form validation to make Burp Proxy remove all onsubmit handler JavaScript functions from HTML forms, remove all JavaScript to completely remove all JS scripts, and remove object tags to remove embedded objects within the HTML document.

Select the desired checkboxes to activate automatic HTML modification.

Summary

Using this feature, you will be able to understand whether the web application enforces server-side validation. For instance, some insecure applications use client-side validation only (for example, via JavaScript functions). You can activate the automatic HTML modification feature by selecting the remove JavaScript form validation checkbox in order to perform input validation testing directly from your browser.

Resources for Article:

Further resources on this subject:

Visual Studio 2010 Test Types [Article]

Ordered and Generic Tests in Visual Studio 2010 [Article]

Manual, Generic, and Ordered Tests using Visual Studio 2008 [Article]

Exploitation Basics

Packt
27 Aug 2013
7 min read
(For more resources related to this topic, see here.)

Basic terms of exploitation

The basic terms of exploitation are explained as follows:

Vulnerability: A vulnerability is a security hole in software or hardware that allows an attacker to compromise a system. A vulnerability can be as simple as a weak password or as complex as a Denial of Service condition.

Exploit: An exploit refers to a well-known security flaw or bug through which a hacker gains entry into a system. An exploit is the actual code with which an attacker takes advantage of a particular vulnerability.

Payload: Once an exploit executes on the vulnerable system and the system has been compromised, the payload enables us to control the system. The payload is typically attached to the exploit and delivered with it.

Shellcode: This is a set of instructions usually used as a payload when the exploitation occurs.

Listener: A listener works as a component waiting for an incoming connection.

How does exploitation work?

Consider the scenario of a computer lab in which two students are doing work on their computers. After some time, one of the students goes out for a coffee break and responsibly locks his computer. The password for that particular locked computer is Apple, which is a very simple dictionary word and is a system vulnerability. The other student starts a password-guessing attack against the system of the student who left the lab. This is a classic example of an exploit. The controls that help the malicious user to control the system after successfully logging in to the computer are called the payload.

We now come to the bigger question of how exploitation actually works. An attacker basically sends an exploit with an attached payload to the vulnerable system. The exploit runs first and, if it succeeds, the actual code of the payload runs. After the payload runs, the attacker gets fully privileged access to the vulnerable system, and then he may download data; upload malware, viruses, or backdoors; or do whatever he wants.

A typical process for compromising a system

To compromise any system, the first step is to scan the IP address to find open ports as well as its operating system and services. Then we move on to identifying a vulnerable service and finding an exploit in Metasploit for that particular service. If the exploit is not available in Metasploit, we will go through Internet databases such as www.securityfocus.com, www.exploitdb.com, www.1337day.com, and so on. After successfully finding an exploit, we launch it and compromise the system.

The tools commonly used for port scanning are Nmap (Network Mapper), Autoscan, Unicorn Scan, and so on. For example, here we are using Nmap to scan for open ports and their services. First, open the terminal in your BackTrack virtual machine. Type in nmap -v -n 192.168.0.103 and press Enter to scan. We use the -v parameter to get verbose output and the -n parameter to disable reverse DNS resolution. Here we can see the results of Nmap, showing three open ports with their services running on them. If we need more detailed information, such as the service version or operating system type, we have to perform an intense scan using Nmap. For an intense scan, we use the command nmap -T4 -A -v 192.168.0.103. This shows us the complete results of the service version and the operating system type.
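Nmap is the right tool for real scans, but the idea behind a plain TCP connect scan can be shown in a few lines. The following rough Python sketch is only for a lab machine you own; the target address and port list are example values, and a port is reported as open when a full TCP connection succeeds.

import socket

target = "192.168.0.103"          # the lab machine from the Nmap example
ports = [135, 139, 445, 3389]     # a few common Windows ports

for port in ports:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect((target, port))
        print("%d/tcp open" % port)
    except OSError:
        print("%d/tcp closed or filtered" % port)
    finally:
        s.close()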
The next step is to find an exploit according to the service or its version. Here, we can see that the first service, running on port number 135, is msrpc, which is known as Microsoft Windows RPC. Now we will learn how to find an exploit for this particular service in Metasploit. Let's open our terminal and type in msfconsole to start Metasploit. Typing in search dcom searches all of the Windows RPC related exploits in its database. In the following screenshot, we can see the exploit with its description and also the release date of this vulnerability. We are presented with a list of exploits according to their rank. From the three exploits related to this vulnerability, we select the first one, since it is the most effective exploit with the highest rank. We have now learned the technique of searching for an exploit in Metasploit through the search <service name> command.

Finding exploits from online databases

If the exploit is not available in Metasploit, then we have to search the Internet exploit databases for that particular exploit. Now we will learn how to search for an exploit on online services such as www.1337day.com. We open the website and click on the Search tab. As an example, we will search for exploits on the Windows RPC service. Now we have to download and save a particular exploit. For this, just click on the exploit you need. After clicking on the exploit, it shows the description of that exploit. Click on Open material to view or save the exploit. The usage of this exploit is provided as part of the documentation in the exploit code, as marked in the following screenshot:

Now we will exploit our target machine with the particular exploit that we have downloaded. We have already scanned the IP address and found three open ports. The next step is to exploit one of those ports. As an example, we will target the port number 135 service running on this target machine, which is msrpc. Let us start by compiling the downloaded exploit code. To compile the code, launch the terminal and type in gcc <exploit name with path> -o <exploit name>; for example, here we are typing gcc dcom.c -o dcom. After compiling the exploit, we have a binary of that exploit, which we use to exploit the target by running the file in the terminal by typing in ./<filename>. From the preceding screenshot, we can see the requirements for exploiting the target: it requires the target IP address and the ID (Windows version). Let's have a look at our target IP address. We have the target IP address, so let's start the attack. Type in ./dcom 6 192.168.174.129. The target has been exploited and we already have the command shell. Now we check the IP address of the victim machine. Type in ipconfig. The target has been compromised and we have actually gained access to it.

Now we will see how to use the internal exploits of Metasploit. We have already scanned an IP address and found three open ports. This time we target port number 445, which runs the Microsoft-ds service. Let us start by selecting an exploit. Launch msfconsole, type in use exploit/windows/smb/ms08_067_netapi, and press Enter. The next step is to check the options for the exploit and what it requires in order to perform a successful exploitation. We type in show options and it shows us the requirements. We need to set RHOST (remote host), which is the target IP address, and let the other options keep their default values. We set up RHOST, the target address, by typing in set RHOST 192.168.0.103. After setting up the options, we are all set to exploit our target.
Typing in exploit will give us the Meterpreter shell.

References

The following are some helpful references that shed further light on some of the topics covered in this article:

http://www.securitytube.net/video/1175

http://resources.infosecinstitute.com/system-exploitation-metasploit/

Summary

In this article, we covered the basics of vulnerabilities, payloads, and some tips on the art of exploitation. We also covered techniques for searching for vulnerable services and querying the Metasploit database for an exploit. These exploits were then used to compromise the vulnerable system. We also demonstrated the art of searching for exploits in Internet databases, which contain zero-day exploits for software and services.

Resources for Article:

Further resources on this subject:

Understanding the True Security Posture of the Network Environment being Tested [Article]

So, what is Metasploit? [Article]

Tips and Tricks on BackTrack 4 [Article]


Defining the Application's Policy File

Packt
23 Aug 2013
21 min read
(For more resources related to this topic, see here.)

The AndroidManifest.xml file

All Android applications need to have a manifest file. This file has to be named AndroidManifest.xml and has to be placed in the application's root directory. This manifest file is the application's policy file. It declares the application components, their visibility, access rules, libraries, features, and the minimum Android version that the application runs against. The Android system uses the manifest file for component resolution. Thus, the AndroidManifest.xml file is by far the most important file in the entire application, and special care is required when defining it to tighten up the application's security. The manifest file is not extensible, so applications cannot add their own attributes or tags. The complete list of tags, with how these tags can be nested, is as follows:

<?xml version="1.0" encoding="utf-8"?>
<manifest>
  <uses-permission />
  <permission />
  <permission-tree />
  <permission-group />
  <instrumentation />
  <uses-sdk />
  <uses-configuration />
  <uses-feature />
  <supports-screens />
  <compatible-screens />
  <supports-gl-texture />
  <application>
    <activity>
      <intent-filter>
        <action />
        <category />
        <data />
      </intent-filter>
      <meta-data />
    </activity>
    <activity-alias>
      <intent-filter>
      </intent-filter>
      <meta-data />
    </activity-alias>
    <service>
      <intent-filter>
      </intent-filter>
      <meta-data/>
    </service>
    <receiver>
      <intent-filter>
      </intent-filter>
      <meta-data />
    </receiver>
    <provider>
      <grant-uri-permission />
      <meta-data />
      <path-permission />
    </provider>
    <uses-library />
  </application>
</manifest>

Only two tags, <manifest> and <application>, are required. There is no specific order in which to declare components. The <manifest> tag declares the application-specific attributes. It is declared as follows:

<manifest package="string"
  android:sharedUserId="string"
  android:sharedUserLabel="string resource"
  android:versionCode="integer"
  android:versionName="string"
  android:installLocation=["auto" | "internalOnly" | "preferExternal"] >
</manifest>

An example of the <manifest> tag is shown in the following code snippet. In this example, the package is named com.android.example, the internal version is 10, and the user sees this version as 2.7.0. The install location is decided by the Android system based on where it has room to store the application.

<manifest package="com.android.example"
  android:versionCode="10"
  android:versionName="2.7.0"
  android:installLocation="auto" >

The attributes of the <manifest> tag are as follows:

package: This is the name of the package. This is the Java-style namespace of your application, for example, com.android.example. This is the unique ID of your application. If you change the name of a published application, it is considered a new application and auto updates will not work.

android:sharedUserId: This attribute is used if two or more applications share the same Linux ID. This attribute is discussed in detail in a later section.

android:sharedUserLabel: This is the user-readable name of the shared user ID and only makes sense if android:sharedUserId is set. It has to be a string resource.

android:versionCode: This is the version code used internally by the application to track revisions. This code is referred to when updating an application with a more recent version.

android:versionName: This is the version of the application shown to the user. It can be set as a raw string or as a reference, and is only used for display to users.
android:installLocation: This attribute defines the location where the APK will be installed.

The <application> tag is defined as follows:

<application android:allowTaskReparenting=["true" | "false"]
  android:backupAgent="string"
  android:debuggable=["true" | "false"]
  android:description="string resource"
  android:enabled=["true" | "false"]
  android:hasCode=["true" | "false"]
  android:hardwareAccelerated=["true" | "false"]
  android:icon="drawable resource"
  android:killAfterRestore=["true" | "false"]
  android:largeHeap=["true" | "false"]
  android:label="string resource"
  android:logo="drawable resource"
  android:manageSpaceActivity="string"
  android:name="string"
  android:permission="string"
  android:persistent=["true" | "false"]
  android:process="string"
  android:restoreAnyVersion=["true" | "false"]
  android:supportsRtl=["true" | "false"]
  android:taskAffinity="string"
  android:theme="resource or theme"
  android:uiOptions=["none" | "splitActionBarWhenNarrow"] >
</application>

An example of the <application> tag is shown in the following code snippet. In this example, the application name, description, icon, and label are set. The application is not debuggable, and the Android system can instantiate the components.

<application android:label="@string/app_name"
  android:description="@string/app_desc"
  android:icon="@drawable/example_icon"
  android:enabled="true"
  android:debuggable="false">
</application>

Many attributes of the <application> tag serve as default values for the components declared within the application. These include permission, process, icon, and label. Other attributes, such as debuggable and enabled, are set for the entire application. The attributes of the <application> tag are discussed as follows:

android:allowTaskReparenting: This value can be overridden by the <activity> element. It allows an Activity to re-parent with the Activity it has affinity with when it is brought to the foreground.

android:backupAgent: This attribute contains the name of the backup agent for the application.

android:debuggable: This attribute, when set to true, allows an application to be debugged. This value should always be set to false before releasing the app to the market.

android:description: This is the user-readable description of an application, set as a reference to a string resource.

android:enabled: If this attribute is set to true, the Android system can instantiate application components. This attribute can be overridden by components.

android:hasCode: If this attribute is set to true, the application will try to load some code when launching the components.

android:hardwareAccelerated: This attribute, when set to true, allows an application to support hardware-accelerated rendering. It was introduced in API level 11.

android:icon: This is the application icon as a reference to a drawable resource.

android:killAfterRestore: If this attribute is set to true, the application will be terminated once its settings are restored during a full-system restore.

android:largeHeap: This attribute lets the Android system create a large Dalvik heap for this application. It increases the memory footprint of the application, so it should be used sparingly.

android:label: This is the user-readable label for the application.

android:logo: This is the logo for the application.

android:manageSpaceActivity: This value is the name of the Activity that manages the memory for the application.
android:name: This attribute contains the fully qualified name of the Application subclass that will be instantiated before any other component is started.

android:permission: This attribute can be overridden by a component and sets the permission that a client should have in order to interact with the application.

android:persistent: Usually used by system applications, this attribute allows the application to be running all the time.

android:process: This is the name of the process in which a component will run. This can be overridden by any component's android:process attribute.

android:restoreAnyVersion: This attribute lets the backup agent attempt a restore even if the backup currently stored was made by a newer version of the application than the one attempting the restore now.

android:supportsRtl: This attribute, when set to true, supports right-to-left layouts. It was added in API level 17.

android:taskAffinity: This attribute lets all activities have affinity with the package name, unless it is set explicitly by the Activity.

android:theme: This is a reference to the style resource for the application.

android:uiOptions: If this attribute is set to none, there are no extra UI options; if set to splitActionBarWhenNarrow, a bar is set at the bottom when the screen is constrained.

In the following sections we will discuss how to handle specific requirements using the policy file.

Application policy use cases

This section discusses how to define application policies using the manifest file. I have used use cases, and we will discuss how to implement these use cases in the policy file.

Declaring application permissions

An application on the Android platform has to declare which resources it intends to use for the proper functioning of the application. These are the permissions that are displayed to the user when they download the application. Application permissions should be descriptive so that users can understand them. Also, as is the general rule with security, it is important to request the minimum permissions required. Application permissions are declared in the manifest file by using the <uses-permission> tag. An example of a location-based manifest file that uses GPS for retrieving location is shown in the following code snippet:

<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
<uses-permission android:name="android.permission.ACCESS_MOCK_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />

These permissions will be displayed to the users when they install the app, and can always be checked by going to Application under the settings menu, as seen in the following screenshot.
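As a quick way to review what an application asks for, the declared <uses-permission> entries can be pulled straight out of the manifest. The following is a small illustrative Python sketch (the manifest path is a placeholder); it works on a plain-text AndroidManifest.xml from the project source, not on the binary manifest packed inside an APK.

import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def list_permissions(manifest_path):
    # Print every permission declared with <uses-permission> in the manifest
    root = ET.parse(manifest_path).getroot()
    for element in root.iter("uses-permission"):
        print(element.attrib.get(ANDROID_NS + "name"))

# Placeholder path to a project's source manifest
list_permissions("AndroidManifest.xml")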
If a component is protected by permission then the component permission overrides the permission declared in the <application> tag. The following is an example of an application that requires external applications to have android.permission.ACCESS_COARSE_LOCATION to access its components and resources: <application android_allowBackup="true" android_icon="@drawable/ic_launcher" android_label="@string/app_name" android_permission="android. permission.ACCESS_COARSE_LOCATION"> If a Service requires that any application component that accesses it should have access to the external storage, then it can be defined as follows: <service android_enabled="true" android_name=".MyService" android_permission="android. permission.WRITE_EXTERNAL_STORAGE"> </service> If a policy file has both the preceding tags then when an external component makes a request to this Service, it should have android.permission.WRITE_EXTERNAL_STORAGE, as this permission will override the permission declared by the application tag. Applications running with the same Linux ID Sharing data between applications is always tricky. It is not easy to maintain data confidentiality and integrity. Proper access control mechanisms have to be put in place based on who has access to how much data. In this section, we will discuss how to share application data with the internal applications (signed by the same developer key). Android is a layered architecture with an application isolation enforced by the operating system itself. Whenever an application is installed on the Android device, the Android system gives it a unique user ID defined by the system. Notice that the two applications, example1 and example2, in the following screenshot are the applications run as separate user IDs, app_49 and app_50: However, an application can request the system for a user ID of its choice. The other application can then request the same user ID as well. This creates tight coupling and does not require components to be made visible to the other application or to create shared content providers. This kind of tight coupling is done in the manifest tags of all applications that want to run in the same process. The following is a snippet of manifest files of the two applications com.example.example1 and com.example.example2 that use the same user ID: <manifest package="com.example.example1" android_versionCode="1" android_versionName="1.0" android_sharedUserId="com.sharedID.example"> <manifest package="com.example.example2" android_versionCode="1" android_versionName="1.0" android_sharedUserId="com.sharedID.example"> The following screenshot is displayed when these two applications are running on the device. Notice that the applications, com.example.example1 and com.example.example2, now have the app ID of app_113. You will notice that the shared UID follows a certain format akin to a package name. Any other naming convention will result in an error such as an installation error: INSTALL_PARSE_FAILED_BAD_SHARED_USER_ID. All applications that share the same UID should have the same certificate. External storage Starting with API Level 8, Android provides support to store Android applications (APK files) on external devices, such as an SD card. This helps to free up internal phone memory. Once the APK is moved to external storage, the only memory taken up by the app is the private data of the application stored on internal memory. 
It is important to note that even for the SD card resident APKs, the DEX (Dalvik Executable) files, private data directories, and native shared libraries remain on the internal storage. Adding an optional attribute in the manifest file enables this feature. The application info screen for such an application either has a move to the SD card or move to a phone button depending on the current storage location of APK. The user then has an option to move the APK file accordingly. If the external device is un-mounted or the USB mode is set to Mass Storage (where the device is used as a disk drive), all the running activities and services hosted on that external device are immediately killed. The feature to enable storing APK on the external devices is enabled by adding the optional attribute android:installLocation in the application's manifest file in the <manifest> element. The attribute android:installLocation can have the following three values: InternalOnly: The Android system will install the application on the internal storage only. In case of insufficient internal memory, storage errors are returned. PreferExternal: The Android system will try to install the application on the external storage. In case there is not enough external storage, the application will be installed on the internal storage. The user will have the ability to move the app from external to internal storage and vice versa as desired. auto: This option lets the Android system decide the best install location for the application. The default system policy is to install the application on internal storage first. If the system is running low on internal memory, the application is then installed on the external storage. The user will have the ability to move the application from external to internal storage and vice versa as desired. For example, if android:installLocation is set to Auto, then on devices running a version of Android less than 2.2, the system will ignore this feature and APK will only be installed on the internal memory. The following is the code snippet from an application's manifest file with this option: <manifest package="com.example.android" android_versionCode="10" android_versionName="2.7.0" android_installLocation="auto" > The following is a screenshot of the application with the manifest file as specified previously. You will notice that Move to SD card is enabled in this case: In another application, where android:installLocation is not set, the Move to SD Card is disabled as shown in the following screenshot: Setting component visibility Any of the application components namely, activities, services, providers, and receivers can be made discoverable to the external applications. This section discusses the nuances of such scenarios. Any Activity or Service can be made private by setting android:exported=false. This is also the default value for an Activity. See the following two examples of a private Activity: <activity android_name=".Activity1" android_exported="false" /> <activity android_name=".Activity2" /> However, if you add an Intent Filter to the Activity, then the Activity becomes discoverable for the Intent in the Intent Filter. Thus, the Intent Filter should never be relied upon as a security boundary. 
See the following examples for Intent Filter declaration: <activity android_name=".Activity1" android_label="@string/app_name" > <intent-filter> <action android_name="android.intent.action.MAIN" /> <category android_name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android_name=".Activity2"> <intent-filter> <action android_name="com.example.android. intent.START_ACTIVITY2" /> </intent-filter> </activity> Both activities and services can also be secured by an access permission required by the external component. This is done using the android:permission attribute of the component tag. A Content Provider can be set up for private access by using android:exported=false. This is also the default value for a provider. In this case, only an application with the same ID can access the provider. This access can be limited even further by setting the android:permission attribute of the provider tag. A Broadcast Receiver can be made private by using android:exported=false. This is the default value of the receiver if it does not contain any Intent Filters. In this case, only the components with the same ID can send a broadcast to the receiver. If the receiver contains Intent Filters then it becomes discoverable and the default value of android:exported is false. Debugging During the development of an application, we usually set the application to be in the debug mode. This lets developers see the verbose logs and can get inside the application to check for errors. This is done in the <application> tag by setting android:debuggable to true. To avoid security leaks, it is very important to set this attribute to false before releasing the application. An example of sensitive information that I have seen in my experience includes usernames and passwords, memory dumps, internal server errors, and even some funny personal notes state of a server and a developer's opinion about a piece of code. The default value of android:debuggable is false. Backup Starting with API level 8, an application can choose a backup agent to back up the device to the cloud or server. This can be set up in the manifest file in the <application> tag by setting android:allowBackup to true and then setting android:backupAgent to a class name. The default value of android:allowBackup is set to true and the application can set it to false if it wants to opt out of the backup. There is no default value for android:backupAgent and a class name should be specified. The security implications of such a backup are debatable as services used to back up the data are different and sensitive data, such as usernames and passwords can be compromised. Putting it all together The following example puts all the learning we have done so far to analyze AndroidManifest.xml provided with an Android SDK sample for RandomMusicPlayer. The manifest file specifies that this is version 1 of the application com.example.android.musicplayer. It runs on SDK 14 but supports backwards up to SDK 7. The application uses two permissions namely, android.permission.INTERNET and android.permission.WAKE_LOCK. The application has one Activity that is the entry point for the application called MainActivity, one Service called MusicService, and one receiver called MusicIntentReceiver. MusicService has defined custom actions called PLAY, REWIND, PAUSE, SKIP, STOP, and TOGGLE_PLAYBACK. The receiver uses the action intent android.media.AUDIO_BECOMING_NOISY and android.media.MEDIA_BUTTON defined by the Android system. 
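The following is a rough reconstruction of what a manifest matching that description might look like. Treat it as a sketch: the icon, label, and exact action strings are illustrative and may differ from the actual SDK sample.

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.android.musicplayer"
    android:versionCode="1"
    android:versionName="1.0">

    <uses-sdk android:minSdkVersion="7" android:targetSdkVersion="14" />

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.WAKE_LOCK" />

    <application android:icon="@drawable/ic_launcher" android:label="@string/app_name">

        <activity android:name=".MainActivity" android:label="@string/app_name">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <service android:name=".MusicService">
            <intent-filter>
                <!-- custom actions: PLAY, REWIND, PAUSE, SKIP, STOP, TOGGLE_PLAYBACK -->
                <action android:name="com.example.android.musicplayer.action.PLAY" />
                <action android:name="com.example.android.musicplayer.action.PAUSE" />
                <action android:name="com.example.android.musicplayer.action.TOGGLE_PLAYBACK" />
            </intent-filter>
        </service>

        <receiver android:name=".MusicIntentReceiver">
            <intent-filter>
                <action android:name="android.media.AUDIO_BECOMING_NOISY" />
                <action android:name="android.intent.action.MEDIA_BUTTON" />
            </intent-filter>
        </receiver>

    </application>
</manifest>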
None of the components are protected with permissions. An example of an AndroidManifst.xml file is shown in the following screenshot: Example checklist In this section, I have tried to put together an example list that I suggest you refer to whenever you are ready to release a version of your application. This is a very general version and you should adapt it according to your own use case and components. When creating a checklist think about issues that relate to the entire application, those that are specific to a component, and issues that might come up by setting the component and application specification together. Application level In this section, I have listed some questions that you should be asking yourself as you define the application specific preferences. They may affect how your application is viewed, stored, and perceived by users. Some application level questions that you may like to ask are as follows: Do you want to share resources with other applications that you have developed? Did you specify the unique user ID? Did you define this unique ID for another application either intentionally or unintentionally? Does your application require some capabilities such as camera, Bluetooth, and SMS? Does your application need all these permissions? Is there another permission that is more restrictive than the one you have defined? Remember the principle of least privilege Do all the components of your application need this permission or only a few? Check the spellings of all the permissions once again. The application may compile and work even if the permission spelling is incorrect. If you have defined this permission, is this the correct one that you need? At what API level does the application work? What is the minimum API level that your application can support? Are there any external libraries that your application needs? Did you remember to turn off the debug attribute before you release? If you are using a backup agent then remember to mention it here Did you remember to set a version number? This will help you during application upgrade Do you want to set an auto upgrade? Did you remember to sign the application with your release key? Sometimes setting a particular screen orientation will not allow your application to be visible on certain devices. For example, if your application only supports portrait mode then it might not appear for devices with landscape mode only. Where do you want to install the APK? Are there any services that might cease to work if the intent is not received in time? Do you want some other application level settings, such as the ability of the system to restore components? If defining a new permission, think twice if you really want them. Chances are there is already an existing permission that will cover your use case. Component level Some component level questions that you will want to think about in the policy are listed here. These are questions that you should be asking yourself for each component: Did you define all components? If using the third party libraries in your application, did you define all the components that you will use? Was there a particular setting that the third party library expects from your application? Do you want this component to be visible to other applications? Do you need to add some Intent Filters? If the component is not supposed to be visible, did you add Intent Filters? Remember as soon as you add Intent Filters, your component becomes visible. 
Do other components require some special permission to trigger this component? Verify the spelling of the permission name. Does your application require some capabilities such as camera, Bluetooth, and SMS? Summary In this article, we've learned how to define an applications policy file. The manifest file is the most important artifact of an application and should be defined with utmost care. This manifest file declares the permissions requested by an application and permissions that the external applications need to access its components. With the policy file we also define the storage location of the out APK and the minimum SDK against which the out application will run. The policy file exposes components that are not sensitive to the application. At the end of this article we discussed some sample issues that a developer should be aware of when writing a manifest file. In this article, we've learned about an Android application structure. Resources for Article: Further resources on this subject: So, what is Spring for Android? [Article] Animating Properties and Tweening Pages in Android 3-0 [Article] New Connectivity APIs – Android Beam [Article]
So, what is Metasploit?

Packt
06 Aug 2013
9 min read
(For more resources related to this topic, see here.) In the IT industry, we have various flavors of operating systems ranging from Mac, Windows, *nix platforms, and other server operating systems, which run an n number of services depending on the needs of the organization. When given a task to assess the risk factor of any organization, it becomes very tedious to run single code snippets against these systems. What if, due to some hardware failure, all these code snippets are lost? Enter Metasploit. Metasploit is an exploit development framework started by H. D. Moore in 2003, which was later acquired by Rapid7. It is basically a tool for the development of exploits and the testing of these exploits on live targets. This framework has been completely written using Ruby,and is currently one of the largest frameworks ever written in the Ruby language. The tool houses more than 800 exploits in its repository and hundreds of payloads for each exploit. This also contains various encoders, which help us in the obfuscation of exploits to evade the antivirus and other intrusion detection systems ( IDS ). As we progress in this book, we shall uncover more and more features of this tool. This tool can be used for penetration testing, risk assessment, vulnerability research, and other security developmental practices such as IDS and the intrusion prevention system ( IPS ). Top features you need to know about After learning about the basics of the Metasploit framework, in this article we will find out the top features of Metasploit and learn some of the attack scenarios. This article will be a flow of the following features: The meterpreter module Using auxiliary modules in Metasploit Client-side attacks with auxiliary modules The meterpreter module In the earlier article, we have seen how to open up a meterpreter session in Metasploit. But in this article, we shall see the features of the meterpreter module and its command set in detail. Before we see the working example, let's see why meterpreter is used in exploitation: It doesn't create a new process in the target system It runs in the context of the process that is being exploited It performs multiple tasks in one go; that is, you don't have to create separate requests for each individual task It supports scripts writing Let's check out what the meterpreter shell looks like. Meterpreter allows you to provide commands and obtain results. Let's see the list of commands that are available to use under meterpreter. These can be obtained by typing help in the meterpreter command shell. The syntax for this command is as follows: meterpreter>help The following screenshot represents the core commands: The filesystem commands are as follows: The networking commands are as follows: The system commands are as follows: The user interface commands are as follows: The other miscellaneous commands are as follows: As you can see in the preceding screenshots, meterpreter has two sets of commands set apart from its core set of commands. They are as follows: Stdapi Priv The Stdapi command set contains various commands for the filesystem commands, networking commands, system commands, and user-interface commands. Depending on the exploit, if it can get higher privileges, the priv command set is loaded. By default, the stdapi command set and core command set gets loaded irrespective of the privilege an exploit gets. Let's check out the route command from the meterpreter stdapi command set. 
The syntax is as follows: meterpreter>route [–h] command [args] In the following screenshot, we can see the list of all the routes on the target machine: In a scenario where we wish to add other subnets and gateways we can use the concept of pivoting, where we add a couple of routes for optimizing the attack. The following are the commands supported by the route: Add [subnet] [netmask] [gateway]Delete [subnet] [netmask] [gateway] List Another command that helps during pivoting is port-forwarding. Meterpreter supports port forwarding via the following command. The syntax for this command is as follows: meterpreter>portfwd [-h] [add/delete/list] [args] As soon as an attacker breaks into any system, the first thing that he/she does is check what privilege levels he/she has to access the system. Meterpreter provides a command for working out the privilege level after breaking into the system. The syntax for this command is as follows: meterpreter>getuid The following screenshot demonstrates the working of getuid in meterpreter. In the following screenshot, the attacker is accessing the system with the SYSTEM privilege. In a Windows environment, the SYSTEM privilege is the highest possible privilege available. Suppose we failed to get access to the system as a SYSTEM user, but succeeded in getting access via the administrator, then meterpreter provides you with many ways to elevate your access levels. This is called privilege escalation. The commands are as follows: Syntax: meterpreter>getsystem Syntax: meterpreter>migrate process_id Syntax: meterpreter>steal_token process_id The first method uses an internal procedure within the meterpreter to gain the system access, whereas in the second method, we are migrating to a process that is running with a SYSTEM privilege. In this case, the exploit by default gets loaded in any process space of the Windows operating system. But, there is always a possibility that the user clears that process space by deleting that process from the process manager. In a case like this, it's wise to migrate to a process which is usually untouched by the user. This helps in maintaining a prolonged access to the victim machine. In the third method, we are actually impersonating a process which is running as a SYSTEM privileged process. This is called impersonation via token stealing. Basically, Windows assigns users with a unique ID called Secure Identifier (SID). Each thread holds a token containing information about the privilege levels. Impersonating a token happens when one particular thread temporarily assumes the identity of another process in the same system. We have seen the usage of process IDs in the preceding commands, but how do we fetch the process ID? That is exactly what we I shall be covering in this article. Windows runs various processes and the exploit itself will be running in the process space of the Windows system. To list all these processes with their PIDs and the privilege levels, we use the following meterpreter command: meterpreter>ps The following screenshot gives a clear picture of the ps command: In the preceding screenshot, we have the PIDs listed. We can use these PIDs to escalate our privileges. Once you steal a token, it can be dropped using the Drop_token command. The syntax for this command is as follows: meterpreter>drop_token Another interesting command from the stdapi set is the shell command. This spawns a shell in the target system and enables us to navigate through the system effortlessly. 
The syntax for this command is as follows: meterpreter>shell The following screenshot shows the usage of the shell command: The preceding screenshot shows that we are inside the target system. All the usual windows command shell scripts such as dir, cd, and md work here. After briefly covering system commands, let's start learning the filesystem commands. A filesystem contains a working directory. To find out the current working directory in the target system, we use the following command: meterpreter>pwd The following screenshot shows the command in action: Suppose you wish to search for different files on the target system, then we can use a command called search. The syntax for this command is as follows: meterpreter> search [-d dir][-r recurse] –f pattern Various options available under the search command are: -d: This is the directory to begin the search. If nothing is specified, then it searches all drives. -f: The pattern that we would like to search for. For example, *.pdf. -h: Provides the help context. -r: Used when we need to recursively search the subdirectories. By default this is set to true. Once we get the file we need, we use the download command to download it to our drive. The syntax for this command is as follows: meterpreter>download Full_relative_path By now we have covered the core commands, system commands, networking commands, and filesystem commands. The last article of the stdapi command set is the user-interface commands. The most commonly used commands are the keylogging commands. These commands are very effective in sniffing user account credentials: Syntax: meterpreter>keyscan_start Syntax: meterpreter>keyscan_dump Syntax: meterpreter>keyscan_stop This is the procedure of the usage of this command. The following screenshot explains the commands in action: The communication between the meterpreter and its targets is done via type-length-value. This means that the data is getting transferred in an encrypted manner. This leads to multiple channels of communications. The advantage of this is that multiple programs can communicate with an attacker. The creation of channels is illustrated in the following screenshot: The syntax for this command is as follows: meterpreter>execute process_name –c -c is the parameter that tells the meterpreter to channel the input/output. When the attack requires us to interact with multiple processes then the concept of channels comes in handy as a tool for the attacker. The close command is used to exit a channel. Summary In this article we learned what is Metaspoilt and also saw one of its top feature. Resources for Article: Further resources on this subject: Understanding the True Security Posture of the Network Environment being Tested [Article] Preventing Remote File Includes Attack on your Joomla Websites [Article] Tips and Tricks on BackTrack 4 [Article]
Building an app using Backbone.js

Packt
29 Jul 2013
7 min read
(For more resources related to this topic, see here.)

Building a Hello World app

For building the app you will need the necessary script files and a project directory laid out. Let's begin writing some code. This code will require all scripts to be accessible; otherwise we'll see some error messages. We'll also go over the Web Inspector and use it to interact with our applications. Learning this tool is essential for any web application developer to proficiently debug (and even write new code for) their application.

Step 1 – adding code to the document

Add the following code to the index.html file. I'm assuming the use of LoDash and Zepto, so you'll want to update the code accordingly:

<!DOCTYPE HTML>
<html>
<head>
  <title>Backbone Application Development Starter</title>
  <!-- Your Utility Library -->
  <script src="scripts/lodash-1.3.1.js"></script>
  <!-- Your Selector Engine -->
  <script src="scripts/zepto-1.0.js"></script>
  <script src="scripts/backbone-1.0.0.js"></script>
</head>
<body>
  <div id="display">
    <div class="listing">Houston, we have a problem.</div>
  </div>
</body>
<script src="scripts/main.js"></script>
</html>

This file will load all of our scripts and has a simple <div>, which displays some content. I have placed the loading of our main.js file after the closing <body> tag. This is just a personal preference of mine; it will ensure that the script is executed after the elements of the DOM have been populated. If you were to place it adjacent to the other scripts, you would need to encapsulate the contents of the script with a function call so that it gets run after the DOM has loaded; otherwise, when the script runs and tries to find the div#display element, it will fail.

Step 2 – adding code to the main script

Now, add the following code to your scripts/main.js file:

var object = {};
_.extend(object, Backbone.Events);
object.on("show-message", function(msg) {
  $('#display .listing').text(msg);
});
object.trigger("show-message", "Hello World");

Allow me to break down the contents of main.js so that you know what each line does.

var object = {};

The preceding line should be pretty obvious. We're just creating a new object, aptly named object, which contains no attributes.

_.extend(object, Backbone.Events);

The preceding line is where the magic starts to happen. The utility library provides us with many functions, one of which is extend. This function takes two or more objects as its arguments, copies the attributes from them, and appends them to the first object. You can think of this as sort of extending a class in a classical language (such as PHP). Using the extend utility function is the preferred way of adding Backbone functionality to your classes. In this case, our object is now a Backbone Event object, which means it can be used for triggering events and running callback functions when an event is triggered. By extending your objects in this manner, you have a common method for performing event-based actions in your code, without having to write one-off implementations.

object.on("show-message", function(msg) {
  $('#display .listing').text(msg);
});

The preceding code adds an event listener to our object. The first argument to the on function is the name of the event, in this case show-message. This is a simple string that describes the event you are listening for and will be used later for triggering the events. There isn't a requirement as far as naming conventions go, but try to be descriptive and consistent.
The second argument is a callback function, which takes an argument that can be set at the time the event is triggered. The callback function here is pretty simple; it just queries the DOM using our selector engine and updates the text of the element to be the text passed into the e vent. object.trigger("show-message", "Hello World"); Finally, we trigger our event. Simply use the trigger function on the object, tell it that we are triggering the show-message event, and pass in the argument for the trigger callback as the second argument. Step 3 – opening the project in your browser This will show us the text Hello World when we open our index.html file in our browser. Don't believe me? Double click on the file now, and you should see the following screen: If Chrome isn't your default browser, you should be able to right-click on the file from within your file browser and there should be a way to choose which application to open the file with. Chrome should be in that list if it was installed properly. Step 4 – encountering a problem Do you see something other than our happy Hello World message? The code is set up in a way that it should display the message Houston , we have a problem if something were to go wrong (this text is what is displayed in the DOM by default, before having JavaScript run and replace it). The missing script file The first place to look for the problem is the Network tab of the Web Inspector. This tab shows us each of the files that were downloaded by the browser to satisfy the rendering of the page. Sometimes, the Console tab will also tell us when a file wasn't properly downloaded, but not always. The following screenshot explains this: If you look at the Network tab, you can see an item clearly marked in red ( backbone-1.0.0.js ). In my project folder, I had forgotten to download the Backbone library file, thus the browser wasn't able to load the file. Note that we are loading the file from the local filesystem, so the status column says either Success or Failure . If these files were being loaded from a real web server, you would see the actual HTTP status codes, such as 200 OK for a successful file download or 404 Not Found for a missing file. The script typo Perhaps you have made a typo in the script while copying it from the book. If you have any sort of issues resulting from incorrect JavaScript, they will be visible in the Console tab, as shown in the following screenshot: In my example, I had forgotten the line about extending my object with the Backbone events class. Having left out the extend line of code caused the object to be missing some functionality, specifically the method on(). Notice how the console displays which filename and line number the error is on. Feel free to remove and add code to the files and refresh the page to get a feel for what the errors look like in the console. This is a great way to get a feel for debugging Backbone-based (and, really, any JavaScript) applications. Summary In this article we learned how to develop an app using Backbone.js. Resources for Article : Further resources on this subject: JBoss Portals and AJAX - Part 1 [Article] Getting Started with Zombie.js [Article] DWR Java AJAX User Interface: Basic Elements (Part 1) [Article]
Planning the lab environment

Packt
12 Apr 2013
10 min read
(For more resources related to this topic, see here.) Getting ready To get the best result after setting up your lab, you should plan it properly at first. Your lab will be used to practise certain penetration testing skills. Therefore, in order to properly plan your lab environment, you should first consider which skills you want to practise. Although you could also have non-common or even unique reasons to build a lab, I can provide you with the average list of skills one might need to practice: Essential skills Discovery techniques Enumeration techniques Scanning techniques Network vulnerability exploitation Privilege escalation OWASP TOP 10 vulnerabilities discovery and exploitation Password and hash attacks Wireless attacks Additional skills Modifying and testing exploits Tunneling Fuzzing Vulnerability research Documenting the penetration testing process All these skills are applied in real-life penetration testing projects, depending on its depth and the penetration tester's qualifications. The following skills could be practised at three lab types or their combinations: Network security lab Web application lab Wi-Fi lab I should mention that the lab planning process for each of the three lab types listed consists of the same four phases: Determining the lab environment requirements: This phase helps you to actually understand what your lab should include. In this phase, all the necessary lab environment components should be listed and their importance for practising different penetration testing skills should be assessed. Determining the lab environment size: The number of various lab environment components should be defined in this phase. Determining the required resources: The point of this phase is to choose which hardware and software could be used for building the lab with the specified parameters and fit it with what you actually have or are able to get. Determining the lab environment architecture: This phase designs the network topology and network address space How to do it... 
Now, I want to describe step by step how to plan a common lab combined of all three lab types listed in the preceding section using the following four-phase approach: Determine the lab environment requirements: To fit our goal and practise particular skills, the lab should contain the following components: Skills to practice Necessary components Discovery techniques Several different hosts with various OSs Firewall IPS Enumeration techniques Scanning techniques Network vulnerability exploitation OWASP TOP 10 vulnerabilities discovery and exploitation Web server Web application Database server Web Application Firewall Password and hash attacks Workstations Servers Domain controller FTP server Wireless attacks Wireless router Radius server Laptop or any other host with Wi-Fi adapter Modifying and testing exploits Any host Vulnerable network service Debugger Privilege escalation Any host Tunnelling Several hosts Fuzzing Any host Vulnerable network service Debugger Vulnerability research Documenting the penetration testing process Specialized software Now, we can make our component list and define the importance of each component for our lab (importance ranges between less important, Additional, and most important, Essential): Components Importance Windows server Essential Linux server Important FreeBSD server Additional Domain controller Important Web server Essential FTP Server Important Web site Essential Web 2.0 application Important Web Application Firewall Additional Database server Essential Windows workstation Essential Linux workstation Additional Laptop or any other host with Wi-Fi adapter Essential Wireless router Essential Radius server Important Firewall Important IPS Additional Debugger Additional Determine the lab environment size: In this step, we should decide how many instances of each component we need in our lab. We will count only the essential and important components' numbers, so let's exclude all additional components. This means that we've now got the following numbers: Components Number Windows server 2 Linux server 1 Domain controller 1 Web server 1 FTP Server 1 Web site 1 Web 2.0 application 1 Database server 1 Windows workstation 2 Host with Wi-Fi adapter 2 Wireless router 1 Radius server 1 Firewall 2 Determine required resources: Now, we will discuss the required resources: Server and victim workstations will be virtual machines based on VMWare Workstation 8.0. To run the virtual machines without any trouble, you will need to have an appropriate hardware platform based on a CPU with two or more cores and at least 4 GB RAM. Windows servers OSs will work under Microsoft Windows 2008 Server and Microsoft Windows Server 2003. We will use Ubuntu 12.04 LTS as a Linux server OS. Workstations will work under Microsoft Windows XP SP3 and Microsoft Windows 7. ASUS WL-520gc will be used as the LAN and WLAN router. Any laptop as the attacker's host. Samsung Galaxy Tab as the Wi-Fi victim (or other device supporting Wi-Fi). We will use free software as a web server, an FTP-server, and a web application, so there is no need for any hardware or financial resources to get these requirements. Determine the lab environment architecture: Now, we need to design our lab network and draw a scheme: Address space parameters DHCP server: 192.168.1.1 Gateway: 192.168.1.1 Address pool: 192.168.1.2-15 Subnet mask: 255.255.255.0 How it works... 
In the first step, we discovered which types of lab components we need by determining what could be used to practise the following skills: All OSs and network services are suitable for practicing discovery, enumeration, and scanning techniques and also for network vulnerability exploitation. We also need at least two firewalls – windows built-in software and router built-in firewall functions. Firewalls are necessary for learning different scanning techniques and firewall rules detection knowledge. Additionally, you can use any IPS for practicing evasion techniques. A web server, a website, and a web application are necessary for learning how to disclose and exploit OWASP TOP 10 vulnerabilities. Though a Web Application Firewall (WAF) is not necessary, it helps to improve web penetration testing skills to higher level. An FTP service ideally fits to practice password brute-forcing. Microsoft domain services are necessary to understand and try Windows domain passwords and hash attacks including relaying. This is why we need at least one network service with remote password authentication and at least one Windows domain controller with two Windows workstations. A wireless access point is essential for performing various wireless attacks, but it is better to combine LAN router and Wi-Fi access point in one device. So, we will use Wi-Fi router with several LAN ports. A radius server is necessary for practicing attacks on WLAN with WPA-Enterprise security. A Laptop and a tablet PC with any Wi-Fi adapters will work as an attacker, and victim in wireless attacks. Tunnelling techniques could be practiced at any two hosts; it does not matter whether we use Windows or any other OS. Testing and modifying exploits as well as fuzzing and vulnerability research need a debugger installed on a vulnerable host. To properly document a penetration testing process, one can use just any test processor software, but there are several specialized software solutions, which make a thing much more comfortable and easier. In the second step, we determined which software and hardware we can use as instances of chosen component types and set their importance based on a common lab for a basic and intermediate professional level penetration tester. In the third step, we understood which solutions will be suitable for our tasks and what we can afford. I have tried to choose a cheaper option, which is why I am going to use virtualization software. The ASUS WL-520gc router combines the LAN router and Wi-Fi access point in the same device, so it is cheaper and more comfortable than using dedicated devices. A laptop and a tablet PC are also chosen for practising wireless attacks, but it is not the cheapest solution. In the fourth step, we designed our lab network based on determined resources. We have chosen to put all the hosts in the same subnet to set up the lab in an easier way. The subnet has its own DHCP server to dynamically assign network addresses to hosts. There's more... Let me give you an account of alternative ways to plan the lab environment details. Lab environment components variations It is not necessary to use a laptop as the attacker machine and a tablet PC as the victim – you just need two PCs with connected Wi-Fi adapters to perform various wireless attacks. As an alternative to virtual machines, a laptop, and a tablet PC or old unused computers (if you have them) could also be used to work as hardware hosts. There is only one condition – their hardware resources should be enough for planned OSs to work. 
An IPS could be either a software or hardware, but hardware systems are more expensive. For our needs, it is enough to use any freeware Internet security software including both the firewall and IPS functionality. It is not essential to choose the same OS as I have chosen in this chapter; you can use any other OSs that support the required functionality. The same is true about network services – it is not necessary to use an FTP service; you can use any other service that supports network password authentication such as telnet and SSH. You will have to additionally install any debugger on one of the victim's workstations in order to test the new or modified exploits and perform vulnerability research, if you need to. Finally, you can use any other hardware or virtual router that supports LAN routing and Wi-Fi access point functionality. A connected, dedicated LAN router and Wi-Fi access point are also suitable for the lab. Choosing virtualization solutions – pros and cons Here, I want to list some pros and cons of the different virtualization solutions in table format: Solution Pros Cons VMWare ESXi Enterprise solution Powerful solution Easily supports a lot of virtual machines on the same physical server as separate partitions No need to install the OS Very high cost Requires a powerful server Requires processor virtualization support VMWare workstation Comfortable to work with User friendly GUI Easy install Virtual *nix systems work fast Better works with virtual graphics Shareware It sometimes faces problems with USB Wi-Fi adapters on Windows 7 Demanding towards system resources Does not support 64-bit guest OS Virtual Windows systems work slowly VMWare player Freeware User-friendly GUI Easy to install Cannot create new virtual machines Poor functionality Micrisoft Virtual PC Freeware Great compatibility and stability with Microsoft systems Good USB support Easy to install Works only on Windows and only with Windows Does not support a lot of features that concurrent solutions do Oracle Virtual Box Freeware Virtual Windows systems work fast User-friendly GUI Easy to install Works on Mac OS and Solaris as well as on Windows and Linux Supports the "Teleportation" technology Paid USB support; Virtual *nix systems work slowly Here, I have listed only the leaders of the virtualization market in my opinion. Historically, I am mostly accustomed to VMWare Workstation, but of course, you can choose any other solutions that you may like. You can find more comparison info at http://virt.kernelnewbies.org/TechComparison. Summary This article explained how you can plan your lab environment. Resources for Article : Further resources on this subject: FAQs on BackTrack 4 [Article] CISSP: Vulnerability and Penetration Testing for Access Control [Article] BackTrack 4: Target Scoping [Article]
Getting Started with Spring Security

Packt
14 Mar 2013
14 min read
(For more resources related to this topic, see here.) Hello Spring Security Although Spring Security can be extremely difficult to configure, the creators of the product have been thoughtful and have provided us with a very simple mechanism to enable much of the software's functionality with a strong baseline. From this baseline, additional configuration will allow a fine level of detailed control over the security behavior of our application. We'll start with an unsecured calendar application, and turn it into a site that's secured with rudimentary username and password authentication. This authentication serves merely to illustrate the steps involved in enabling Spring Security for our web application; you'll see that there are some obvious flaws in this approach that will lead us to make further configuration refinements. Updating your dependencies The first step is to update the project's dependencies to include the necessary Spring Security .jar files. Update the Maven pom.xml file from the sample application you imported previously, to include the Spring Security .jar files that we will use in the following few sections. Remember that Maven will download the transitive dependencies for each listed dependency. So, if you are using another mechanism to manage dependencies, ensure that you also include the transitive dependencies. When managing the dependencies manually, it is useful to know that the Spring Security reference includes a list of its transitive dependencies. pom.xml <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-config</artifactId> <version>3.1.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-core</artifactId> <version>3.1.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-web</artifactId> <version>3.1.0.RELEASE</version> </dependency> Downloading the example code You can download the example code files for all Packt books you have purchased from your account at https://www.packtpub.com. If you purchased this book elsewhere, you can visit https://www.packtpub.com/books/content/support and register to have the files e-mailed directly to you. Using Spring 3.1 and Spring Security 3.1 It is important to ensure that all of the Spring dependency versions match and all the Spring Security versions match; this includes transitive versions. Since Spring Security 3.1 builds with Spring 3.0, Maven will attempt to bring in Spring 3.0 dependencies. This means, in order to use Spring 3.1, you must ensure to explicitly list the Spring 3.1 dependencies or use Maven's dependency management features, to ensure that Spring 3.1 is used consistently. Our sample applications provide an example of the former option, which means that no additional work is required by you. In the following code, we present an example fragment of what is added to the Maven pom.xml file to utilize Maven's dependency management feature, to ensure that Spring 3.1 is used throughout the entire application: <project ...> ... <dependencyManagement> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> <version>3.1.0.RELEASE</version> </dependency> … list all Spring dependencies (a list can be found in our sample application's pom.xml ... 
<dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>3.1.0.RELEASE</version> </dependency> </dependencies> </dependencyManagement> </project> If you are using Spring Tool Suite, any time you update the pom.xml file, ensure you right-click on the project and navigate to Maven | Update Project…, and select OK, to update all the dependencies. Implementing a Spring Security XML configuration file The next step in the configuration process is to create an XML configuration file, representing all Spring Security components required to cover standard web requests.Create a new XML file in the src/main/webapp/WEB-INF/spring/ directory with the name security.xml and the following contents. Among other things, the following file demonstrates how to require a user to log in for every page in our application, provide a login page, authenticate the user, and require the logged-in user to be associated to ROLE_USER for every URL:URL element: src/main/webapp/WEB-INF/spring/security.xml <?xml version="1.0" encoding="UTF-8"?> <bean:beans xsi_schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security- 3.1.xsd"> <http auto-config="true"> <intercept-url pattern="/**" access="ROLE_USER"/> </http> <authentication-manager> <authentication-provider> <user-service> <user name="user1@example.com" password="user1" authorities="ROLE_USER"/> </user-service> </authentication-provider> </authentication-manager> </bean:beans> If you are using Spring Tool Suite, you can easily create Spring configuration files by using File | New Spring Bean Configuration File. This wizard allows you to select the XML namespaces you wish to use, making configuration easier by not requiring the developer to remember the namespace locations and helping prevent typographical errors. You will need to manually change the schema definitions as illustrated in the preceding code. This is the only Spring Security configuration required to get our web application secured with a minimal standard configuration. This style of configuration, using a Spring Security-specific XML dialect, is known as the security namespace style, named after the XML namespace (http://www.springframework.org/schema/security) associated with the XML configuration elements. Let's take a minute to break this configuration apart, so we can get a high-level idea of what is happening. The <http> element creates a servlet filter, which ensures that the currently logged-in user is associated to the appropriate role. In this instance, the filter will ensure that the user is associated with ROLE_USER. It is important to understand that the name of the role is arbitrary. Later, we will create a user with ROLE_ADMIN and will allow this user to have access to additional URLs that our current user does not have access to. The <authentication-manager> element is how Spring Security authenticates the user. In this instance, we utilize an in-memory data store to compare a username and password. Our example and explanation of what is happening are a bit contrived. An inmemory authentication store would not work for a production environment. However, it allows us to get up and running quickly. We will incrementally improve our understanding of Spring Security as we update our application to use production quality security . 
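As a preview of the refinements mentioned above, the following sketch shows how a second role could be introduced in the same security namespace style. The /admin/** pattern and the second user's credentials are placeholders rather than values from the sample application, and note that <intercept-url> patterns are evaluated in order, so the more specific pattern has to come first:

<http auto-config="true">
    <intercept-url pattern="/admin/**" access="ROLE_ADMIN"/>
    <intercept-url pattern="/**" access="ROLE_USER"/>
</http>

<authentication-manager>
    <authentication-provider>
        <user-service>
            <user name="user1@example.com" password="user1"
                authorities="ROLE_USER"/>
            <user name="admin1@example.com" password="admin1"
                authorities="ROLE_USER,ROLE_ADMIN"/>
        </user-service>
    </authentication-provider>
</authentication-manager>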
Users who dislike Spring's XML configuration will be disappointed to learn that there isn't an alternative annotation-based or Java-based configuration mechanism for Spring Security, as there is with Spring Framework. There is an experimental approach that uses Scala to configure Spring Security, but at the time of this writing, there are no known plans to release it. If you like, you can learn more about it at https://github.com/tekul/scalasec/. Still, perhaps in the future, we'll see the ability to easily configure Spring Security in other ways. Although annotations are not prevalent in Spring Security, certain aspects of Spring Security that apply security elements to classes or methods are, as you'd expect, available via annotations. Updating your web.xml file The next steps involve a series of updates to the web.xml file. Some of the steps have already been performed because the application was already using Spring MVC. However, we will go over these requirements to ensure that these more fundamental Spring requirements are understood, in the event that you are using Spring Security in an application that is not Spring-enabled. ContextLoaderListener The first step of updating the web.xml file is to ensure that it contains the o.s.w.context.ContextLoaderListener listener, which is in charge of starting and stopping the Spring root ApplicationContext interface. ContextLoaderListener determines which configurations are to be used, by looking at the <context-param> tag for contextConfigLocation. It is also important to specify where to read the Spring configurations from. Our application already has ContextLoaderListener added, so we only need to add the newly created security.xml configuration file, as shown in the following code snippet: src/main/webapp/WEB-INF/web.xml <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/spring/services.xml /WEB-INF/spring/i18n.xml /WEB-INF/spring/security.xml </param-value> </context-param> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> The updated configuration will now load the security.xml file from the /WEB-INF/spring/ directory of the WAR. As an alternative, we could have used /WEB-INF/spring/*.xml to load all the XML files found in /WEB-INF/spring/. We choose not to use the *.xml notation to have more control over which files are loaded. ContextLoaderListener versus DispatcherServlet You may have noticed that o.s.web.servlet.DispatcherServlet specifies a contextConfigLocation component of its own. src/main/webapp/WEB-INF/web.xml <servlet> <servlet-name>Spring MVC Dispatcher Servlet</servlet-name> <servlet-class> org.springframework.web.servlet.DispatcherServlet </servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/mvc-config.xml </param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> DispatcherServlet creates o.s.context.ApplicationContext, which is a child of the root ApplicationContext interface. Typically, Spring MVC-specific components are initialized in the ApplicationContext interface of DispatcherServlet, while the rest are loaded by ContextLoaderListener. It is important to know that beans in a child ApplicationContext (such as those created by DispatcherServlet) can reference beans of its parent ApplicationContext (such as those created by ContextLoaderListener). However, the parent ApplicationContext cannot refer to beans of the child ApplicationContext. 
This is illustrated in the following diagram where childBean can refer to rootBean, but rootBean cannot refer to childBean. As with most usage of Spring Security, we do not need Spring Security to refer to any of the MVC-declared beans. Therefore, we have decided to have ContextLoaderListener initialize all of Spring Security's configuration. springSecurityFilterChain The next step is to configure springSecurityFilterChain to intercept all requests by updating web.xml. Servlet <filter-mapping> elements are considered in the order that they are declared. Therefore, it is critical for springSecurityFilterChain to be declared first, to ensure the request is secured prior to any other logic being invoked. Update your web.xml file with the following configuration: src/main/webapp/WEB-INF/web.xml </listener> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class> org.springframework.web.filter.DelegatingFilterProxy </filter-class> </filter> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <servlet> Not only is it important for Spring Security to be declared as the first <filter-mapping> element, but we should also be aware that, with the example configuration, Spring Security will not intercept forwards, includes, or errors. Often, it is not necessary to intercept other types of requests, but if you need to do this, the dispatcher element for each type of request should be included in <filter-mapping>. We will not perform these steps for our application, but you can see an example, as shown in the following code snippet: src/main/webapp/WEB-INF/web.xml <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> <url-pattern>/*</url-pattern> <dispatcher>REQUEST</dispatcher> <dispatcher>ERROR</dispatcher> ... </filter-mapping> DelegatingFilterProxy The o.s.web.filter.DelegatingFilterProxy class is a servlet filter provided by Spring Web that will delegate all work to a Spring bean from the root ApplicationContext that must implement javax.servlet.Filter. Since, by default, the bean is looked up by name, using the value of <filter-name>, we must ensure we use springSecurityFilterChain as the value of <filter-name>. Pseudo-code for how o.s.web.filter.DelegatingFilterProxy works for our web.xml file can be found in the following code snippet: public class DelegatingFilterProxy implements Filter { void doFilter(request, response, filterChain) { Filter delegate = applicationContet.getBean("springSecurityFilterChain") delegate.doFilter(request,response,filterChain); } } FilterChainProxy When working in conjunction with Spring Security, o.s.web.filter. DelegatingFilterProxy will delegate to Spring Security's o.s.s.web. FilterChainProxy, which was created in our minimal security.xml file. FilterChainProxy allows Spring Security to conditionally apply any number of servlet filters to the servlet request. We will learn more about each of the Spring Security filters and their role in ensuring that our application is properly secured, throughout the rest of the book. 
The pseudo-code for how FilterChainProxy works is as follows:

    public class FilterChainProxy implements Filter {
        void doFilter(request, response, filterChain) {
            // look up all the Filters for this request
            List<Filter> delegates = lookupDelegates(request, response)

            // invoke each filter unless the delegate decided to stop
            for delegate in delegates {
                if continue processing
                    delegate.doFilter(request, response, filterChain)
            }

            // if all the filters decide it is OK, allow the
            // rest of the application to run
            if continue processing
                filterChain.doFilter(request, response)
        }
    }

Due to the fact that both DelegatingFilterProxy and FilterChainProxy are the front door to Spring Security when used in a web application, it is here that you would add a debug point when trying to figure out what is happening.

Running a secured application

If you have not already done so, restart the application and visit http://localhost:8080/calendar/. Great job! We've implemented a basic layer of security in our application using Spring Security. At this point, you should be able to log in using user1@example.com as the User and user1 as the Password (user1@example.com/user1). You'll see the calendar welcome page, which describes at a high level what to expect from the application in terms of security.

Common problems

Many users have trouble with the initial implementation of Spring Security in their application. A few common issues and suggestions are listed next. We want to ensure that you can run the example application and follow along!

- Make sure you can build and deploy the application before putting Spring Security in place. Review some introductory samples and documentation on your servlet container if needed. It's usually easiest to use an IDE, such as Eclipse, to run your servlet container. Not only is deployment typically seamless, but the console log is also readily available to review for errors. You can also set breakpoints at strategic locations, to be triggered on exceptions, to better diagnose errors.
- If your XML configuration file is incorrect, you will get this (or something similar to this): org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'beans'. It's quite common for users to get confused with the various XML namespace references required to properly configure Spring Security. Review the samples again, paying attention to avoid line wrapping in the schema declarations, and use an XML validator to verify that you don't have any malformed XML.
- If you get an error stating "BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/security] ...", ensure that the spring-security-config-3.1.0.RELEASE.jar file is on your classpath. Also ensure that the version matches the other Spring Security JARs and the XML declaration in your Spring configuration file.
- Make sure the versions of Spring and Spring Security that you're using match, and that there aren't any unexpected Spring JARs remaining as part of your application. As previously mentioned, when using Maven, it can be a good idea to declare the Spring dependencies in the dependency management section.
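If you hit the missing NamespaceHandler or bean-related errors above, one quick sanity check is to confirm that the root ApplicationContext actually contains the springSecurityFilterChain bean that DelegatingFilterProxy will look up. The listener below is a hypothetical sketch rather than part of the book's sample application (the class name is ours); it assumes it is registered in web.xml after ContextLoaderListener, so that the root context already exists when it runs.

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    import org.springframework.web.context.WebApplicationContext;
    import org.springframework.web.context.support.WebApplicationContextUtils;

    public class SecurityFilterChainSanityCheck implements ServletContextListener {

        public void contextInitialized(ServletContextEvent sce) {
            // Fetch the root ApplicationContext published by ContextLoaderListener
            WebApplicationContext root = WebApplicationContextUtils
                    .getWebApplicationContext(sce.getServletContext());

            if (root == null || !root.containsBean("springSecurityFilterChain")) {
                // DelegatingFilterProxy will fail at runtime if this bean is missing;
                // the usual cause is that security.xml was not listed in
                // contextConfigLocation
                sce.getServletContext().log(
                        "WARNING: no 'springSecurityFilterChain' bean found "
                                + "in the root ApplicationContext");
            }
        }

        public void contextDestroyed(ServletContextEvent sce) {
            // nothing to clean up
        }
    }

Seeing this warning in the container log immediately after startup narrows the problem down to the Spring configuration, before any request ever reaches DelegatingFilterProxy.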

Wireshark: Working with Packet Streams

Packt
11 Mar 2013
3 min read
(For more resources related to this topic, see here.)

Working with Packet Streams

While working on a network capture, there can be multiple network activities going on at the same time. Consider a small example where you are simultaneously browsing multiple websites through your browser. Several TCP data packets will be flowing across your network for all of these websites, so it becomes tedious to track the data packets belonging to a particular stream or session. This is where Follow TCP Stream comes into action.

When you are visiting multiple websites, each site maintains its own stream of data packets. By using the Follow TCP Stream option, we can apply a filter that locates packets specific to a particular stream. To view the complete stream, select your preferred TCP packet (for example, a GET or POST request). Right-clicking on it will bring up the Follow TCP Stream option.

Once you click on Follow TCP Stream, you will notice that a new filter rule is applied to Wireshark and the main capture window reflects all the data packets that belong to that stream. This can be helpful in figuring out what different requests/responses have been generated through a particular session of network interaction. If you take a closer look at the filter rule applied once you follow a stream, you will see a rule similar to tcp.stream eq <Number>. Here, Number reflects the stream number that has to be followed to get the various data packets.

An additional operation that can be carried out here is to save the data packets belonging to a particular stream. Once you have followed a particular stream, go to File | Save As and then select Displayed to save only the packets belonging to the viewed stream.

Similar to following the TCP stream, we also have the option to follow the UDP and SSL streams. The two options can be reached by selecting the particular protocol type (UDP or SSL) and right-clicking on it. The relevant follow option will be highlighted according to the selected protocol.

The Wireshark menu icons also provide some quick navigation options to move through the captured packets. These icons include:

- Go back in packet history (1): This option traces you back to the last analyzed/selected packet. Clicking on it multiple times keeps pushing you back through your selection history.
- Go forward in packet history (2): This option pushes you forward in the series of packet analysis.
- Go to packet with number (3): This option is useful for going directly to a specific packet number.
- Go to the first packet (4): This option takes you to the first packet in your current display of the capture window.
- Go to last packet (5): This option jumps your selection to the last packet in your capture window.

Summary

In this article, we learned how to work with packet streams.

Further resources on this subject:

- BackTrack 5: Advanced WLAN Attacks [Article]
- BackTrack 5: Attacking the Client [Article]
- Debugging REST Web Services [Article]

BackTrack Forensics

Packt
28 Feb 2013
8 min read
(For more resources related to this topic, see here.)

Intrusion detection and log analysis

Intrusion detection is a method used to monitor malicious activity on a computer network or system. It is generally referred to as an intrusion detection system (IDS) because it is the system that actually performs the task of monitoring activity based upon a set of predefined rules. An IDS adds an additional layer of security to a network by analyzing information from various points and determining if an actual or possible security breach has occurred, or by locating a vulnerability that would allow for a possible breach.

In this recipe, we will examine the Snort tool for the purposes of intrusion detection and log analysis. Snort was developed by Sourcefire and is an open source tool that has the capabilities of acting as both an intrusion detection system and an intrusion prevention system. One of the advantages of Snort is that it allows you to analyze network traffic in real time and make faster responses should security breaches occur.

Remember, running Snort on our network and utilizing it for intrusion detection does not stop exploits from occurring. It just gives us the ability to see what is going on in our network.

Getting ready

A connection to the Internet or intranet is required to complete this task. It is assumed that you have visited http://snort.org/start/rules and downloaded the Sourcefire Vulnerability Research Team (VRT) Certified Rules. A valid ruleset must be maintained in order to use Snort for detection. If you do not have an account already, you may sign up at https://www.snort.org/signup.

How to do it...

Let's begin by starting Snort:

1. Start the Snort service.
2. Now that the Snort service has been initiated, we will start the application from a terminal window. We are going to pass a few options, which are described as follows:
   - -q: This option tells Snort to run quietly, suppressing the startup banner and status output.
   - -v: This option allows us to view a printout of TCP/IP headers on the screen. This is also called the "sniffer mode" setting.
   - -c: This option allows us to select our configuration file. In this case, its location is /etc/snort/snort.conf.
   - -i: This option allows you to specify your interface.
3. Using these options, let's execute the following command:

       snort -q -v -i eth1 -c /etc/snort/snort.conf

4. To stop Snort from monitoring, press Ctrl + C.

How it works...

In this recipe, we started the Snort service and launched Snort in order to view the log data.

There's more...

Before we can adequately use Snort for our purposes, we need to make alterations to its configuration file:

1. Open a terminal window and locate the Snort configuration file:

       locate snort.conf

2. Now we will edit the configuration file using nano:

       nano /etc/snort/snort.conf

3. Look for the line that reads var HOME_NET any. We would like to change this to our internal network (the devices we would like to have monitored). Each situation is going to be unique. You may want to monitor only one device, which you can do simply by entering its IP address (var HOME_NET 192.168.10.10). You may also want to monitor an IP range (var HOME_NET 192.168.10.0/24), or you may want to specify multiple ranges (var HOME_NET [192.168.10.0/24,10.0.2.0/24]). In our case, we will look at just our local network:

       var HOME_NET 192.168.10.0/24

4. Likewise, we need to specify what is considered the external network. For most purposes, we want any IP address that is not a part of our specified home network to be considered external.
So we will place a comment on the line that reads var EXTERNAL_NET any and uncomment the line that says var EXTERNAL_NET !$HOME_NET:

    #var EXTERNAL_NET any
    var EXTERNAL_NET !$HOME_NET

These two lines are the only ones you need to alter to match the changes mentioned in this step.

To view an extended list of Snort commands, please visit the Snort Users Manual at http://www.snort.org/assets/166/snort_manual.pdf.

Recursive directory encryption/decryption

Encryption is a method of transforming data into a format that cannot be read by other users. Decryption is the method of transforming data back into a format that is readable. The benefit of encrypting your data is that even if the data is stolen, without the correct decryption key it is unusable by the stealing party. Depending on the program that you use, you have the ability to encrypt individual files, folders, or entire hard drives.

In this recipe, we will use gpgdir to perform recursive directory encryption and decryption. An advantage of using gpgdir is that it has the ability to encrypt not only a folder, but also all subfolders and files contained within the main folder. This will save you a lot of time and effort!

Getting ready

To complete this recipe, you must have gpgdir installed on your BackTrack version.

How to do it...

In order to use gpgdir, you must have it installed. If you have not installed it before, use the following instructions to install it:

1. Open a terminal window and make a new directory under the root filesystem:

       mkdir /sourcecode

2. Change your directory to the sourcecode directory:

       cd /sourcecode

3. Next, we will use Wget to download the gpgdir application:

       wget http://cipherdyne.org/gpgdir/download/gpgdir-1.9.5.tar.bz2

4. Next, we download the signature file:

       wget http://cipherdyne.org/gpgdir/download/gpgdir-1.9.5.tar.bz2.asc

5. Next, we download the public key file.
6. Now we need to verify the package:

       gpg --import public_key
       gpg --verify gpgdir-1.9.5.tar.bz2.asc

7. Next, we untar gpgdir, switch to its directory, and complete the installation:

       tar xfj gpgdir-1.9.5.tar.bz2
       cd gpgdir-1.9.5
       ./install.pl

8. The first time you run gpgdir, a new file will be created in your root directory (assuming root is the user you are using under BackTrack). The file is called .gpgdirrc. To start the creation of the file, type the following command:

       gpgdir

9. Finally, we need to edit the .gpgdirrc file and remove the comment from the default_key variable:

       vi /root/.gpgdirrc

Now that you have gpgdir installed, let's use it to perform recursive directory encryption and decryption:

1. Open a terminal window and create a directory for us to encrypt:

       mkdir /encrypted_directory

2. Add files to the directory. You can add as many files as you would like using the Linux copy command cp.
3. Now, we will use gpgdir to encrypt the directory:

       gpgdir -e /encrypted_directory

4. At the prompt, enter your password. This is the password associated with your key file.
5. To decrypt the directory with gpgdir, type the following command:

       gpgdir -d /encrypted_directory

How it works...

In this recipe, we used gpgdir to recursively encrypt a directory and to subsequently decrypt it. We began the recipe by installing gpgdir and editing its configuration file. Once gpgdir has been installed, we have the ability to encrypt and decrypt directories. For more information on gpgdir, please visit its documentation website at http://cipherdyne.org/gpgdir/docs/.
Scanning for signs of rootkits

A rootkit is a malicious program designed to hide suspicious processes from detection and to allow continued, often remote, access to a computer system. Rootkits can be installed using various methods, including hiding executable code within web page links, in downloaded software programs, or in media files and documents. In this recipe, we will utilize chkrootkit to search for rootkits on our Windows or Linux system.

Getting ready

In order to scan for a rootkit, you can either use your BackTrack installation, log in to a compromised virtual machine remotely, or mount the BackTrack 5 R3 DVD on a computer system to which you have physical access.

How to do it...

Let's begin exploring chkrootkit by navigating to it from the BackTrack menu:

1. Navigate to Applications | BackTrack | Forensics | Anti-Virus Forensics Tools | chkrootkit.
2. Alternatively, you can enter the following commands to run chkrootkit:

       cd /pentest/forensics/chkrootkit
       ./chkrootkit

3. chkrootkit will begin execution immediately, and you will be provided with output on your screen as the checks are processed.

How it works...

In this recipe, we used chkrootkit to check for malware, Trojans, and rootkits on our localhost. chkrootkit is a very effective scanner that can be used to determine if our system has been attacked. It's also useful when BackTrack is loaded as a live DVD and used to scan a computer you think is infected by rootkits.

There's more...

Alternatively, you can run Rootkit Hunter (rkhunter) to find rootkits on your system:

1. Open a terminal window and run the following command to launch rkhunter:

       rkhunter --check

2. At the end of the process, you will receive a summary listing the checks performed and their statistics.

Useful alternative command options for chkrootkit

The following is a list of useful options to select when running chkrootkit:

- -h: Displays the help file
- -V: Displays the current running version of chkrootkit
- -l: Displays a list of available tests

Useful alternative command options for rkhunter

The following is a list of useful options to select when running rkhunter:

- --update: Allows you to update the rkhunter database, for example, rkhunter --update
- --list: Displays a list of Perl modules, rootkits available for checking, and tests that will be performed, for example, rkhunter --list
- --sk: Allows you to skip pressing the Enter key after each test runs, for example, rkhunter --check --sk

Entering rkhunter at a terminal window without any options will display the help file:

    rkhunter

The DPM Feature Set

Packt
29 Jul 2011
3 min read
(For more resources on this subject, see here.)

DPM has a robust set of features and capabilities. The following are some of the most valuable ones:

- Disk-based data protection and recovery
- Continuous backup
- Tape-based archiving and backup
- Built-in monitoring
- Cloud-based backup and recovery
- Built-in reports and notifications
- Integration with Microsoft System Center Operations Manager
- Windows PowerShell integration for scripting
- Remote administration
- Tight integration with other Microsoft products
- Protection of clustered servers
- Protection of application-specific servers
- Backing up the system state
- Backing up client computers

New features of DPM 2010

Microsoft has done a great job of updating Data Protection Manager 2010 with great new features, many of them much needed. There were some issues with Data Protection Manager 2007 that would cause an administrator to perform routine maintenance on it; most of these issues have been resolved in Data Protection Manager 2010. The following are the most exciting new features of DPM 2010:

- DPM 2007 to DPM 2010 in-place upgrade.
- Auto-Rerun and Auto-CC (Consistency Check), which automatically fix Replica Inconsistent errors.
- Auto-Grow, which will automatically grow volumes as needed.
- The ability to shrink volumes as needed.
- Bare metal restore.
- A backup SLA report that can be configured and e-mailed to you daily.
- A self-restore service that lets SQL database administrators restore their own SQL backups.
- When backing up SharePoint 2010, no recovery farm is required for item-level recoveries (for example, recovering SharePoint list items, or recovering items in a SharePoint farm that uses host headers). This is an improvement in SharePoint 2010 that DPM takes advantage of.
- Better backup for mobile or disconnected employees (this requires VPN or DirectAccess).
- End users of protected clients are able to recover their own data, without an administrator doing anything.
- DPM is Live Migration aware. We already know DPM can protect VMs on Hyper-V; now DPM will automatically continue protection of a VM even after it has been migrated to a different Hyper-V server. The Hyper-V server has to be a Windows Server 2008 R2 clustered server.
- DPM2DPM4DR (DPM to DPM for Disaster Recovery) allows you to back up your DPM server to a second DPM server. This feature was available in 2007, and it can now be set up via the GUI. You can also perform chained DPM backups, so you could have DPM A backed up to DPM B, and DPM B backed up to DPM C. Previously, you could only have a secondary DPM server backing up a primary DPM server.

With the 2010 release, a single DPM server's scalability has been increased over the previous 2007 release:

- DPM can handle 80 TB per server
- DPM can back up up to 100 servers
- DPM can back up up to 1000 clients
- DPM can back up up to 2000 SQL databases

As you can see from the previous list, there are many enhancements in DPM 2010 that will benefit administrators as well as end users.

Summary

In this article we took a look at the existing as well as the new features of DPM.

Further resources on this subject:

- Installing Data Protection Manager 2010 [article]
- Overview of Data Protection Manager 2010 [article]
- Debatching Bulk Data on Microsoft Platform [article]