Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Arrow left icon
Explore Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

How-To Tutorials

7019 Articles
article-image-working-webcam-and-pi-camera
Packt
09 Feb 2016
13 min read
Save for later

Working with a Webcam and Pi Camera

Packt
09 Feb 2016
13 min read
In this article by Ashwin Pajankar and Arush Kakkar, the author of the book Raspberry Pi By Example we will learn how to use different types and uses of cameras with our Pi. Let's take a look at the topics we will study and implement in this article: Working with a webcam Crontab Timelapse using a webcam Webcam video recording and playback Pi Camera and Pi NOIR comparison Timelapse using Pi Camera The PiCamera module in Python (For more resources related to this topic, see here.) Working with webcams USB webcams are a great way to capture images and videos. Raspberry Pi supports common USB webcams. To be on the safe side, here is a list of the webcams supported by Pi: http://elinux.org/RPi_USB_Webcams. I am using a Logitech HD c310 USB Webcam. You can purchase it online, and you can find the product details and the specifications at http://www.logitech.com/en-in/product/hd-webcam-c310. Attach your USB webcam to Raspberry Pi through the USB port on Pi and run the lsusb command in the terminal. This command lists all the USB devices connected to the computer. The output should be similar to the following output depending on which port is used to connect the USB webcam:   pi@raspberrypi ~/book/chapter04 $ lsusb Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. Bus 001 Device 004: ID 148f:2070 Ralink Technology, Corp. RT2070 Wireless Adapter Bus 001 Device 007: ID 046d:081b Logitech, Inc. Webcam C310 Bus 001 Device 006: ID 1c4f:0003 SiGma Micro HID controller Bus 001 Device 005: ID 1c4f:0002 SiGma Micro Keyboard TRACER Gamma Ivory Then, install the fswebcam utility by running the following command: sudo apt-get install fswebcam The fswebcam is a simple command-line utility that captures images with webcams for Linux computers. Once the installation is done, you can use the following command to create a directory for output images: mkdir /home/pi/book/output Then, run the following command to capture the image: fswebcam -r 1280x960 --no-banner ~/book/output/camtest.jpg This will capture an image with a resolution of 1280 x 960. You might want to try another resolution for your learning. The --no-banner command will disable the timestamp banner. The image will be saved with the filename mentioned. If you run this command multiple times with the same filename, the image file will be overwritten each time. So, make sure that you change the filename if you want to save previously captured images. The text output of the command should be similar to the following: --- Opening /dev/video0... Trying source module v4l2... /dev/video0 opened. No input was specified, using the first. --- Capturing frame... Corrupt JPEG data: 2 extraneous bytes before marker 0xd5 Captured frame in 0.00 seconds. --- Processing captured image... Disabling banner. Writing JPEG image to '/home/pi/book/output/camtest.jpg'. Crontab A cron is a time-based job scheduler in Unix-like computer operating systems. It is driven by a crontab (cron table) file, which is a configuration file that specifies shell commands to be run periodically on a given schedule. It is used to schedule commands or shell scripts to run periodically at a fixed time, date, or interval. 
The syntax for crontab in order to schedule a command or script is as follows: 1 2 3 4 5 /location/command Here, the following are the definitions: 1: Minutes (0-59) 2: Hours (0-23) 3: Days (0-31) 4: Months [0-12 (1 for January)] 5: Days of the week [0-7 ( 7 or 0 for Sunday)] /location/command: The script or command name to be scheduled The crontab entry to run any script or command every minute is as follows: * * * * * /location/command 2>&1 In the next section, we will learn how to use crontab to schedule a script to capture images periodically in order to create the timelapse sequence. You can refer to this URL for more details oncrontab: http://www.adminschoice.com/crontab-quick-reference. Creating a timelapse sequence using fswebcam Timelapse photography means capturing photographs in regular intervals and playing the images with a higher frequency in time than those that were shot. For example, if you capture images with a frequency of one image per minute for 10 hours, you will get 600 images. If you combine all these images in a video with 30 images per second, you will get 10 hours of timelapse video compressed in 20 seconds. You can use your USB webcam with Raspberry Pi to achieve this. We already know how to use the Raspberry Pi with a Webcam and the fswebcam utility to capture an image. The trick is to write a script that captures images with different names and then add this script in crontab and make it run at regular intervals. Begin with creating a directory for captured images: mkdir /home/pi/book/output/timelapse Open an editor of your choice, write the following code, and save it as timelapse.sh: #!/bin/bash DATE=$(date +"%Y-%m-%d_%H%M") fswebcam -r 1280x960 --no-banner /home/pi/book/output/timelapse/garden_$DATE.jpg Make the script executable using: chmod +x timelapse.sh This shell script captures the image and saves it with the current timestamp in its name. Thus, we get an image with a new filename every time as the file contains the timestamp. The second line in the script creates the timestamp that we're using in the filename. Run this script manually once, and make sure that the image is saved in the /home/pi/book/output/timelapse directory with the garden_<timestamp>.jpg name. To run this script at regular intervals, we need to schedule it in crontab. The crontab entry to run our script every minute is as follows: * * * * * /home/pi/book/chapter04/timelapse.sh 2>&1 Open the crontab of the Pi user with crontab –e. It will open crontab with nano as the editor. Add the preceding line to crontab, save it, and exit it. Once you exit crontab, it will show the following message: no crontab for pi - using an empty one crontab: installing new crontab Our timelapse webcam setup is now live. If you want to change the image capture frequency, then you have to change the crontab settings. To set it every 5 minutes, change it to */5 * * * *. To set it for every 2 hours, use 0 */2 * * *. Make sure that your MicroSD card has enough free space to store all the images for the time duration for which you need to keep your timelapse setup. Once you capture all the images, the next part is to encode them all in a fast playing video, preferably 20 to 30 frames per second. For this part, the mencoder utility is recommended. The following are the steps to create a timelapse video with mencoder on a Raspberry Pi or any Debian/Ubuntu machine: Install mencoder using sudo apt-get install mencoder. 
Navigate to the output directory by issuing: cd /home/pi/book/output/timelapse Create a list of your timelapse sequence images using: ls garden_*.jpg > timelapse.txt Use the following command to create a video: mencoder -nosound -ovc lavc -lavcopts vcodec=mpeg4:aspect=16/9:vbitrate=8000000 -vf scale=1280:960 -o timelapse.avi -mf type=jpeg:fps=30 mf://@timelapse.txt This will create a video with name timelapse.avi in the current directory with all the images listed in timelapse.txt with a 30 fps frame rate. The statement contains the details of the video codec, aspect ratio, bit rate, and scale. For more information, you can run man mencoder on Command Prompt. We will cover how to play a video in the next section. Webcam video recording and playback We can use a webcam to record live videos using avconv. Install avconv using sudo apt-get install libav-tools. Use the following command to record a video: avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi It will show following output on the screen. pi@raspberrypi ~ $ avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi avconv version 9.14-6:9.14-1rpi1rpi1, Copyright (c) 2000-2014 the Libav developers built on Jul 22 2014 15:08:12 with gcc 4.6 (Debian 4.6.3-14+rpi1) [video4linux2 @ 0x5d6720] The driver changed the time per frame from 1/25 to 2/15 [video4linux2 @ 0x5d6720] Estimating duration from bitrate, this may be inaccurate Input #0, video4linux2, from '/dev/video0': Duration: N/A, start: 629.030244, bitrate: 147456 kb/s Stream #0.0: Video: rawvideo, yuyv422, 1280x960, 147456 kb/s, 1000k tbn, 7.50 tbc Output #0, avi, to '/home/pi/book/output/VideoStream.avi': Metadata: ISFT : Lavf54.20.4 Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 25 tbn, 25 tbc Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> mpeg4) Press ctrl-c to stop encoding frame= 182 fps= 7 q=31.0 Lsize= 802kB time=7.28 bitrate= 902.4kbits/s video:792kB audio:0kB global headers:0kB muxing overhead 1.249878% Received signal 2: terminating. You can terminate the recording sequence by pressing Ctrl + C. We can play the video using omxplayer. It comes with the latest raspbian, so there is no need to install it. To play a file with the name vid.mjpg, use the following command: omxplayer ~/book/output/VideoStream.avi It will play the video and display some output similar to the one here: pi@raspberrypi ~ $ omxplayer ~/book/output/VideoStream.avi Video codec omx-mpeg4 width 1280 height 960 profile 0 fps 25.000000 Subtitle count: 0, state: off, index: 1, delay: 0 V:PortSettingsChanged: 1280x960@25.00 interlace:0 deinterlace:0 anaglyph:0 par:1.00 layer:0 have a nice day ;) Try playing timelapse and record videos using omxplayer. Working with the Pi Camera and NoIR Camera Modules These camera modules are specially manufactured for Raspberry Pi and work with all the available models. You will need to connect the camera module to the CSI port, located behind the Ethernet port, and activate the camera using the raspi-config utility if you haven't already. You can find the video instructions to connect the camera module to Raspberry Pi at http://www.raspberrypi.org/help/camera-module-setup/. This page lists the types of camera modules available: http://www.raspberrypi.org/products/. Two types of camera modules are available for the Pi. 
These are Pi Camera and Pi NoIR camera, and they can be found at https://www.raspberrypi.org/products/camera-module/ and https://www.raspberrypi.org/products/pi-noir-camera/, respectively. The following image shows Pi Camera and Pi NoIR Camera boards side by side: The following image shows the Pi Camera board connected to the Pi: The following is an image of the Pi camera board placed in the camera case: The main difference between Pi Camera and Pi NoIR Camera is that Pi Camera gives better results in good lighting conditions, whereas Pi NoIR (NoIR stands for No-Infra Red) is used for low light photography. To use NoIR Camera in complete darkness, we need to flood the object to be photographed with infrared light. This is a good time to take a look at the various enclosures for Raspberry Pi Models. You can find various cases available online at https://www.adafruit.com/categories/289. An example of a Raspberry Pi case is as follows: Using raspistill and raspivid To capture images and videos using the Raspberry Pi camera module, we need to use raspistill and raspivid utilities. To capture an image, run the following command: raspistill -o cam_module_pic.jpg This will capture and save the image with name cam_module_pic.jpg in the current directory. To capture a 20 second video with the camera module, run the following command: raspivid –o test.avi –t 20000 This will capture and save the video with name test.avi in the current directory. Unlike fswebcam and avconv, raspistill and raspivid do not write anything to the console. So, you need to check the current directory for the output. Also, one can run the echo $? command to check whether these commands executed successfully. We can also mention the complete location of the file to be saved in these command, as shown in the following example: raspistill -o /home/pi/book/output/cam_module_pic.jpg Just like fswebcam, raspistill can be used to record the timelapse sequence. In our timelapse shell script, replace the line that contains fswebcam with the appropriate raspistill command to capture the timelapse sequence and use mencoder again to create the video. This is left as an exercise for the readers. Now, let's take a look at the images taken with the Pi camera under different lighting conditions. The following is the image with normal lighting and the backlight: The following is the image with only the backlight: The following is the image with normal lighting and no backlight: For NoIR camera usage in the night under low light conditions, use IR illuminator light for better results. You can get it online. A typical off-the-shelf LED IR illuminator suitable for our purpose will look like the one shown here: Using picamera in Python with the Pi Camera module picamera is a Python package that provides a programming interface to the Pi Camera module. The most recent version of raspbian has picamera preinstalled. If you do not have it installed, you can install it using: sudo apt-get install python-picamera The following program quickly demonstrates the basic usage of the picamera module to capture an image: import picamera import time with picamera.PiCamera() as cam: cam.resolution=(1024,768) cam.start_preview() time.sleep(5) cam.capture('/home/pi/book/output/still.jpg') We have to import time and picamera modules first. cam.start_preview()will start the preview, and time.sleep(5) will wait for 5 seconds before cam.capture() captures and saves image in the specified file. There is a built-in function in picamera for timelapse photography. 
The following program demonstrates its usage: import picamera import time with picamera.PiCamera() as cam: cam.resolution=(1024,768) cam.start_preview() time.sleep(3) for count, imagefile in enumerate(cam.capture_continuous ('/home/pi/book/output/image{counter: 02d}.jpg')): print 'Capturing and saving ' + imagefile time.sleep(1) if count == 10: break In the preceding code, cam.capture_continuous()is used to capture the timelapse sequence using the Pi camera module. Checkout more examples and API references for the picamera module at http://picamera.readthedocs.org/. The Pi camera versus the webcam Now, after using the webcam and the Pi camera, it's a good time to understand the differences, the pros, and the cons of using these. The Pi camera board does not use a USB port and is directly interfaced to the Pi. So, it provides better performance than a webcam in terms of the frame rate and resolution. We can directly use the picamera module in Python to work on images and videos. However, the Pi camera cannot be used with any other computer. A webcam uses an USB port for interface, and because of that, it can be used with any computer. However, compared to the Pi camera its performance, it is lower in terms of the frame rate and resolution. Summary In this article, we learned how to use a webcam and the Pi camera. We also learned how to use utilities such as fswebcam, avconv, raspistill, raspivid, mencoder, and omxplayer. We covered how to use crontab. We used the Python picamera module to programmatically work with the Pi camera board. Finally, we compared the Pi camera and the webcam. We will be reusing all the code examples and concepts for some real-life projects soon. Resources for Article: Further resources on this subject: Introduction to the Raspberry Pi's Architecture and Setup [article] Raspberry Pi LED Blueprints [article] Hacking a Raspberry Pi project? Understand electronics first! [article]
Read more
  • 0
  • 0
  • 34034

article-image-making-simple-web-based-ssh-client-using-nodejs-and-socketio
Jakub Mandula
28 Oct 2015
7 min read
Save for later

Making a simple Web based SSH client using Node.js and Socket.io

Jakub Mandula
28 Oct 2015
7 min read
If you are reading this post, you probably know what SSH stands for. But just for the sake of formality, here we go: SSH stands for Secure Shell. It is a network protocol for secure access to the shell on a remote computer. You can do much more over SSH besides commanding your computer. Here you can find further information: http://en.wikipedia.org/wiki/Secure_Shell. In this post, we are going to create a very simple web terminal. And when I say simple, I mean it! However much you like colors, it will not support them because the parsing is just beyond the scope of this post. If you want a good client-side terminal library use term.js. It is made by the same guy who wrote pty.js, which we will be using. It is able to handle pretty much all key events and COLORS!!!! Installation I am going to assume you already have your node and npm installed. First we will install all of the npm packages we will be using: npm install express pty.js socket.io Express is a super cool web framework for Node. We are going to use it to serve our static files. I know it is a bit overkill, but I like Express. pty.js is where the magic will be happening. It forks processes into virtual pseudo terminals and provides bindings for communication. Socket.io is what we will use to transmit the data from the web browser to the server and back. It uses modern WebSockets, but provides fallbacks for backward compatibility. Anytime you want to create a real-time application, Socket.io is the way to go. Planning First things first, we need to think what we want the program to do. We want the program to create an instance of a shell on the server (remote machine) and send all of the text to the browser. Back in the browser, we want to capture any user events and send them back to the server shell. The WebSSH server This is the code that will power the terminal forwarding. Open a new file named server.js and start by importing all of the libraries: var express = require('express'); var https = require('https'); var http = require('http'); var fs = require('fs'); var pty = require('pty.js'); Set up express: // Setup the express app var app = express(); // Static file serving app.use("/",express.static("./")); Next we are going to create the server. // Creating an HTTP server var server = http.createServer(app).listen(8080) If you want to use HTTPS, which you probably will, you need to generate a key and certificate and import them as shown. var options = { key: fs.readFileSync('keys/key.pem'), cert: fs.readFileSync('keys/cert.pem') }; Then use the options object to create the actual server. Notice that this time we are using the https package. // Create an HTTPS server var server = https.createServer(options, app).listen(8080) CAUTION: Even if you use HTTPS, do not use this example program on the Internet. You are not authenticating the client in any way and thus providing a free open gate to your computer. Please make sure you only use this on your Private network protected by a firewall!!! Now bind the socket.io instance to the server: var io = require('socket.io')(server); After this, we can set up the place where the magic happens. 
// When a new socket connects io.on('connection', function(socket){ // Create terminal var term = pty.spawn('sh', [], { name: 'xterm-color', cols: 80, rows: 30, cwd: process.env.HOME, env: process.env }); // Listen on the terminal for output and send it to the client term.on('data', function(data){ socket.emit('output', data); }); // Listen on the client and send any input to the terminal socket.on('input', function(data){ term.write(data); }); // When socket disconnects, destroy the terminal socket.on("disconnect", function(){ term.destroy(); console.log("bye"); }); }); In this block, all we do is wait for new connections. When we get one, we spawn a new virtual terminal and start to pump the data from the terminal to the socket and vice versa. After the socket disconnects, we make sure to destroy the terminal. If you have noticed, I am using the simple sh shell. I did this mainly because I don't have a fancy prompt on it. Because we are not adding any parsing logic, my bash prompt would show up like this: ]0;piman@mothership: ~ _[01;32m✓ [33mpiman_[0m ↣ _[1;34m[~]_[37m$[0m - Eww! But you may use any shell you like. This is all that we need on the server side. Save the file and close it. Client side The client side is going to be just a very simple HTML file. Start with a very simple HTML markup: <!doctype html> <html> <head> <title>SSH Client</title> <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/1.3.5/socket.io.min.js"></script> <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script> <style> body { margin: 0; padding: 0; } .terminal { font-family: monospace; color: white; background: black; } </style> </head> <body> <h1>SSH</h1> <div class="terminal"> </div> <script> </script> </body> </html> I am downloading the client side libraries jquery and socket.io from cdnjs. All of the client code will be written in the script tag below the terminal div. Surprisingly the code is very simple: // Connect to the socket.io server var socket = io.connect('http://localhost:8080'); // Wait for data from the server socket.on('output', function (data) { // Insert some line breaks where they belong data = data.replace("n", "<br>"); data = data.replace("r", "<br>"); // Append the data to our terminal $('.terminal').append(data); }); // Listen for user input and pass it to the server $(document).on("keypress",function(e){ var char = String.fromCharCode(e.which); socket.emit("input", char); }); Notice that we do not have to explicitly append the text the client types to the terminal mainly because the server echos it back anyways. Now we are done! Run the server and open up the URL in your browser. node server.js You should see a small prompt and be able to start typing commands. You can now explore you machine from the browser! Remember that our Web Terminal does not support Tab, Ctrl, Backspace or Esc characters. Implementing this is your homework. Conclusion I hope you found this tutorial useful. You can apply the knowledge in any real-time application where communication with the server is critical. All the code is available here. Please note, that if you'd like to use a browser terminal I strongly recommend term.js. It supports colors and styles and all the basic keys including Tabs, Backspace etc. I use it in my PiDashboard project. It is much cleaner and less tedious than the example I have here. I can't wait what amazing apps you will invent based on this. 
About the Author Jakub Mandula is a student interested in anything to do with technology, computers, mathematics or science.
Read more
  • 0
  • 6
  • 33712

article-image-image-filtering-techniques-opencv
Vijin Boricha
12 Apr 2018
15 min read
Save for later

Image filtering techniques in OpenCV

Vijin Boricha
12 Apr 2018
15 min read
In the world of computer vision, image filtering is used to modify images. These modifications essentially allow you to clarify an image in order to get the information you want. This could involve anything from extracting edges from an image, blurring it, or removing unwanted objects.  There are, of course, lots of reasons why you might want to use image filtering to modify an image. For example, taking a picture in sunlight or darkness will impact an images clarity - you can use image filters to modify the image to get what you want from it. Similarly, you might have a blurred or 'noisy' image that needs clarification and focus. Let's use an example to see how to do image filtering in OpenCV. This image filtering tutorial is an extract from Practical Computer Vision. Here's an example with considerable salt and pepper noise. This occurs when there is a disturbance in the quality of the signal that's used to generate the image. The image above can be easily generated using OpenCV as follows: # initialize noise image with zeros noise = np.zeros((400, 600)) # fill the image with random numbers in given range cv2.randu(noise, 0, 256) Let's add weighted noise to a grayscale image (on the left) so the resulting image will look like the one on the right: The code for this is as follows: # add noise to existing image noisy_gray = gray + np.array(0.2*noise, dtype=np.int) Here, 0.2 is used as parameter, increase or decrease the value to create different intensity noise. In several applications, noise plays an important role in improving a system's capabilities. This is particularly true when you're using deep learning models. The noise becomes a way of testing the precision of the deep learning application, and building it into the computer vision algorithm. Linear image filtering The simplest filter is a point operator. Each pixel value is multiplied by a scalar value. This operation can be written as follows: Here: The input image is F and the value of pixel at (i,j) is denoted as f(i,j) The output image is G and the value of pixel at (i,j) is denoted as g(i,j) K is scalar constant This type of operation on an image is what is known as a linear filter. In addition to multiplication by a scalar value, each pixel can also be increased or decreased by a constant value. So overall point operation can be written like this: This operation can be applied both to grayscale images and RGB images. For RGB images, each channel will be modified with this operation separately. The following is the result of varying both K and L. The first image is input on the left. In the second image, K=0.5 and L=0.0, while in the third image, K is set to 1.0 and L is 10. For the final image on the right, K=0.7 and L=25. 
As you can see, varying K changes the brightness of the image and varying L changes the contrast of the image: This image can be generated with the following code: import numpy as np import matplotlib.pyplot as plt import cv2 def point_operation(img, K, L): """ Applies point operation to given grayscale image """ img = np.asarray(img, dtype=np.float) img = img*K + L # clip pixel values img[img > 255] = 255 img[img < 0] = 0 return np.asarray(img, dtype = np.int) def main(): # read an image img = cv2.imread('../figures/flower.png') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # k = 0.5, l = 0 out1 = point_operation(gray, 0.5, 0) # k = 1., l = 10 out2 = point_operation(gray, 1., 10) # k = 0.8, l = 15 out3 = point_operation(gray, 0.7, 25) res = np.hstack([gray,out1, out2, out3]) plt.imshow(res, cmap='gray') plt.axis('off') plt.show() if __name__ == '__main__': main() 2D linear image filtering While the preceding filter is a point-based filter, image pixels have information around the pixel as well. In the previous image of the flower, the pixel values in the petal are all yellow. If we choose a pixel of the petal and move around, the values will be quite close. This gives some more information about the image. To extract this information in filtering, there are several neighborhood filters. In neighborhood filters, there is a kernel matrix which captures local region information around a pixel. To explain these filters, let's start with an input image, as follows: This is a simple binary image of the number 2. To get certain information from this image, we can directly use all the pixel values. But instead, to simplify, we can apply filters on this. We define a matrix smaller than the given image which operates in the neighborhood of a target pixel. This matrix is termed kernel; an example is given as follows: The operation is defined first by superimposing the kernel matrix on the original image, then taking the product of the corresponding pixels and returning a summation of all the products. In the following figure, the lower 3 x 3 area in the original image is superimposed with the given kernel matrix and the corresponding pixel values from the kernel and image are multiplied. The resulting image is shown on the right and is the summation of all the previous pixel products: This operation is repeated by sliding the kernel along image rows and then image columns. This can be implemented as in following code. We will see the effects of applying this on an image in coming sections. # design a kernel matrix, here is uniform 5x5 kernel = np.ones((5,5),np.float32)/25 # apply on the input image, here grayscale input dst = cv2.filter2D(gray,-1,kernel) However, as you can see previously, the corner pixel will have a drastic impact and results in a smaller image because the kernel, while overlapping, will be outside the image region. This causes a black region, or holes, along with the boundary of an image. To rectify this, there are some common techniques used: Padding the corners with constant values maybe 0 or 255, by default OpenCV will use this. Mirroring the pixel along the edge to the external area Creating a pattern of pixels around the image The choice of these will depend on the task at hand. In common cases, padding will be able to generate satisfactory results. The effect of the kernel is most crucial as changing these values changes the output significantly. We will first see simple kernel-based filters and also see their effects on the output when changing the size. 
Box filtering This filter averages out the pixel value as the kernel matrix is denoted as follows: Applying this filter results in blurring the image. The results are as shown as follows: In frequency domain analysis of the image, this filter is a low pass filter. The frequency domain analysis is done using Fourier transformation of the image, which is beyond the scope of this introduction. We can see on changing the kernel size, the image gets more and more blurred: As we increase the size of the kernel, you can see that the resulting image gets more blurred. This is due to averaging out of peak values in small neighbourhood where the kernel is applied. The result for applying kernel of size 20x20 can be seen in the following image. However, if we use a very small filter of size (3,3) there is negligible effect on the output, due to the fact that the kernel size is quite small compared to the photo size. In most applications, kernel size is heuristically set according to image size: The complete code to generate box filtered photos is as follows: def plot_cv_img(input_image, output_image): """ Converts an image from BGR to RGB and plots """ fig, ax = plt.subplots(nrows=1, ncols=2) ax[0].imshow(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB)) ax[0].set_title('Input Image') ax[0].axis('off') ax[1].imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB)) ax[1].set_title('Box Filter (5,5)') ax[1].axis('off') plt.show() def main(): # read an image img = cv2.imread('../figures/flower.png') # To try different kernel, change size here. kernel_size = (5,5) # opencv has implementation for kernel based box blurring blur = cv2.blur(img,kernel_size) # Do plot plot_cv_img(img, blur) if __name__ == '__main__': main() Properties of linear filters Several computer vision applications are composed of step by step transformations of an input photo to output. This is easily done due to several properties associated with a common type of filters, that is, linear filters: The linear filters are commutative such that we can perform multiplication operations on filters in any order and the result still remains the same: a * b = b * a They are associative in nature, which means the order of applying the filter does not affect the outcome: (a * b) * c = a * (b * c) Even in cases of summing two filters, we can perform the first summation and then apply the filter, or we can also individually apply the filter and then sum the results. The overall outcome still remains the same: Applying a scaling factor to one filter and multiplying to another filter is equivalent to first multiplying both filters and then applying scaling factor These properties play a significant role in other computer vision tasks such as object detection and segmentation. A suitable combination of these filters enhances the quality of information extraction and as a result, improves the accuracy. Non-linear image filtering While in many cases linear filters are sufficient to get the required results, in several other use cases performance can be significantly increased by using non-linear image filtering. Mon-linear image filtering is more complex, than linear filtering. This complexity can, however, give you more control and better results in your computer vision tasks. Let's take a look at how non-linear image filtering works when applied to different images. Smoothing a photo Applying a box filter with hard edges doesn't result in a smooth blur on the output photo. To improve this, the filter can be made smoother around the edges. 
One of the popular such filters is a Gaussian filter. This is a non-linear filter which enhances the effect of the center pixel and gradually reduces the effects as the pixel gets farther from the center. Mathematically, a Gaussian function is given as: where μ is mean and σ is variance. An example kernel matrix for this kind of filter in 2D discrete domain is given as follows: This 2D array is used in normalized form and effect of this filter also depends on its width by changing the kernel width has varying effects on the output as discussed in further section. Applying gaussian kernel as filter removes high-frequency components which results in removing strong edges and hence a blurred photo: While this filter performs better blurring than a box filter, the implementation is also quite simple with OpenCV: def plot_cv_img(input_image, output_image): """ Converts an image from BGR to RGB and plots """ fig, ax = plt.subplots(nrows=1, ncols=2) ax[0].imshow(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB)) ax[0].set_title('Input Image') ax[0].axis('off') ax[1].imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB)) ax[1].set_title('Gaussian Blurred') ax[1].axis('off') plt.show() def main(): # read an image img = cv2.imread('../figures/flower.png') # apply gaussian blur, # kernel of size 5x5, # change here for other sizes kernel_size = (5,5) # sigma values are same in both direction blur = cv2.GaussianBlur(img,(5,5),0) plot_cv_img(img, blur) if __name__ == '__main__': main() The histogram equalization technique The basic point operations, to change the brightness and contrast, help in improving photo quality but require manual tuning. Using histogram equalization technique, these can be found algorithmically and create a better-looking photo. Intuitively, this method tries to set the brightest pixels to white and the darker pixels to black. The remaining pixel values are similarly rescaled. This rescaling is performed by transforming original intensity distribution to capture all intensity distribution. An example of this equalization is as following: The preceding image is an example of histogram equalization. On the right is the output and, as you can see, the contrast is increased significantly. The input histogram is shown in the bottom figure on the left and it can be observed that not all the colors are observed in the image. After applying equalization, resulting histogram plot is as shown on the right bottom figure. To visualize the results of equalization in the image , the input and results are stacked together in following figure. Code for the preceding photos is as follows: def plot_gray(input_image, output_image): """ Converts an image from BGR to RGB and plots """ # change color channels order for matplotlib fig, ax = plt.subplots(nrows=1, ncols=2) ax[0].imshow(input_image, cmap='gray') ax[0].set_title('Input Image') ax[0].axis('off') ax[1].imshow(output_image, cmap='gray') ax[1].set_title('Histogram Equalized ') ax[1].axis('off') plt.savefig('../figures/03_histogram_equalized.png') plt.show() def main(): # read an image img = cv2.imread('../figures/flower.png') # grayscale image is used for equalization gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # following function performs equalization on input image equ = cv2.equalizeHist(gray) # for visualizing input and output side by side plot_gray(gray, equ) if __name__ == '__main__': main() Median image filtering Median image filtering a similar technique as neighborhood filtering. 
The key technique here, of course, is the use of a median value. As such, the filter is non-linear. It is quite useful in removing sharp noise such as salt and pepper. Instead of using a product or sum of neighborhood pixel values, this filter computes a median value of the region. This results in the removal of random peak values in the region, which can be due to noise like salt and pepper noise. This is further shown in the following figure with different kernel size used to create output. In this image first input is added with channel wise random noise as: # read the image flower = cv2.imread('../figures/flower.png') # initialize noise image with zeros noise = np.zeros(flower.shape[:2]) # fill the image with random numbers in given range cv2.randu(noise, 0, 256) # add noise to existing image, apply channel wise noise_factor = 0.1 noisy_flower = np.zeros(flower.shape) for i in range(flower.shape[2]): noisy_flower[:,:,i] = flower[:,:,i] + np.array(noise_factor*noise, dtype=np.int) # convert data type for use noisy_flower = np.asarray(noisy_flower, dtype=np.uint8) The created noisy image is used for median image filtering as: # apply median filter of kernel size 5 kernel_5 = 5 median_5 = cv2.medianBlur(noisy_flower,kernel_5) # apply median filter of kernel size 3 kernel_3 = 3 median_3 = cv2.medianBlur(noisy_flower,kernel_3) In the following photo, you can see the resulting photo after varying the kernel size (indicated in brackets). The rightmost photo is the smoothest of them all: The most common application for median blur is in smartphone application which filters input image and adds an additional artifacts to add artistic effects. The code to generate the preceding photograph is as follows: def plot_cv_img(input_image, output_image1, output_image2, output_image3): """ Converts an image from BGR to RGB and plots """ fig, ax = plt.subplots(nrows=1, ncols=4) ax[0].imshow(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB)) ax[0].set_title('Input Image') ax[0].axis('off') ax[1].imshow(cv2.cvtColor(output_image1, cv2.COLOR_BGR2RGB)) ax[1].set_title('Median Filter (3,3)') ax[1].axis('off') ax[2].imshow(cv2.cvtColor(output_image2, cv2.COLOR_BGR2RGB)) ax[2].set_title('Median Filter (5,5)') ax[2].axis('off') ax[3].imshow(cv2.cvtColor(output_image3, cv2.COLOR_BGR2RGB)) ax[3].set_title('Median Filter (7,7)') ax[3].axis('off') plt.show() def main(): # read an image img = cv2.imread('../figures/flower.png') # compute median filtered image varying kernel size median1 = cv2.medianBlur(img,3) median2 = cv2.medianBlur(img,5) median3 = cv2.medianBlur(img,7) # Do plot plot_cv_img(img, median1, median2, median3) if __name__ == '__main__': main() Image filtering and image gradients These are more edge detectors or sharp changes in a photograph. Image gradients widely used in object detection and segmentation tasks. In this section, we will look at how to compute image gradients. First, the image derivative is applying the kernel matrix which computes the change in a direction. The Sobel filter is one such filter and kernel in the x-direction is given as follows: Here, in the y-direction: This is applied in a similar fashion to the linear box filter by computing values on a superimposed kernel with the photo. The filter is then shifted along the image to compute all values. Following is some example results, where X and Y denote the direction of the Sobel kernel: This is also termed as an image derivative with respect to given direction(here X or Y). 
The lighter resulting photographs (middle and right) are positive gradients, while the darker regions denote negative and gray is zero. While Sobel filters correspond to first order derivatives of a photo, the Laplacian filter gives a second-order derivative of a photo. The Laplacian filter is also applied in a similar way to Sobel: The code to get Sobel and Laplacian filters is as follows: # sobel x_sobel = cv2.Sobel(img,cv2.CV_64F,1,0,ksize=5) y_sobel = cv2.Sobel(img,cv2.CV_64F,0,1,ksize=5) # laplacian lapl = cv2.Laplacian(img,cv2.CV_64F, ksize=5) # gaussian blur blur = cv2.GaussianBlur(img,(5,5),0) # laplacian of gaussian log = cv2.Laplacian(blur,cv2.CV_64F, ksize=5) We learnt about types of filters and how to perform image filtering in OpenCV. To know more about image transformation and 3D computer vision check out this book Practical Computer Vision. Check out for more: Fingerprint detection using OpenCV 3 3 ways to deploy a QT and OpenCV application OpenCV 4.0 is on schedule for July release  
Read more
  • 0
  • 1
  • 33098

article-image-is-web-development-dying
Richard Gall
23 May 2018
7 min read
Save for later

Is web development dying?

Richard Gall
23 May 2018
7 min read
It's not hard to find people asking whether web development is dying. A quick search throws up questions on Quora, Reddit, and other forums. "Is web development a dying profession or does it just smell funny?" asks one Reddit user. The usual suspects in the world of content (Forbes et al) have responded with their own takes and think pieces on whether web development is dead. And why wouldn't they? I, for one, would never miss out on an opportunity to write something with an outlandish and provocative headline for clicks. So, is web development dying or simply very unwell? Why do people think web development is dying? The question might seem a bit overwrought, but there are good reasons for people to ask the question. One reason is that getting a website has never been easier or cheaper. Think about it: if you want to create a content site, it doesn't take much to set one up with WordPress. You barely need to be technically literate, let alone a developer. Similarly, if you want an eCommerce store there are plenty of off-the-shelf solutions that allow people to start running an online business with very little work at all. Even if you do want a custom solution, you can now do that pretty cheaply. On the Treehouse forums, one user comments that thanks to sites like SquareSpace, businesses can now purchase a website for less than £100 (about $135). The commenter remarks that whereas he'd typically charge around £3000 for a complete website build, potential clients are coming back puzzled as to why he would think they'd spend so much when they could get the same result for a fraction of the price. From a professional perspective, this sort of anecdotal evidence indicates that it's becoming more and more difficult to be successful in web development. For all the talk around 'learning to code' and the digital economy, maybe building websites isn't the best area to get into. Web development is getting easier When people say web development is dying, they might actually be saying that there isn't as much money in it any more. If freelancers are struggling to charge the rates that they used to, that's because there is someone out there who is going to do it for a lot less money. The reason for this isn't that there's a new generation of web developers able to subsist on a paltry sum of money. It's actually getting a lot easier. Aside from solutions like WordPress and Shopify, the task of building websites from scratch (sort of scratch) is now easier than it has ever been. Are templates killing web development? Templates make everything easy for web developers and designers. Why would you want to do much more than drag and drop templates if you could? If the result looks good and does the job, then why spend time doing more? The more you do yourself, the more you're likely to break things. And the more you break things the more you've got to fix. Of course, templates are lowering the barrier to entry into web development and design. And while we shouldn't be precious about new web developers entering the industry, it is understandable that many experienced web developers are anxious about what the future might hold. From this perspective, templates aren't killing web development, but they are changing what the profession looks like. And without wishing to sound euphemistic, this is both a challenge and an opportunity for everyone in web development. Whether you're experienced or new to the industry, these changes mean people are going to have to adapt. 
Web development isn't dying, it's fragmenting The way web developers are going to have to adapt is by choosing what path they want to take in their career. Web development as we've always known it is, perhaps well and truly dead. Instead, it's fragmenting into specialized areas; design on the one hand, and full-stack on the other. This means your skill set needs to be unique. In a world where building websites takes very little skill or technical knowledge, specific expertise is vital. This is something journalist Andrew Pierno noted in a blog post on Medium. Pierno writes:  ...we are in a scenario where the web developer no longer has the skill set to build that interesting differentiator anymore, particularly if the main value prop is around A.I, computer vision, machine learning, AR, VR, blockchain, etc. Building websites is no longer remarkable - as we've seen, people that can do it are ubiquitous. But building a native application; that's not quite so easy. Building a mobile app that uses computer vision to compare you to Renaissance paintings - that's even harder to do. These are the sorts of things that are going to be valuable - and these are the sorts of things that web developers are going to need to learn how to do. Full-stack development and the expansion of the developer skill set In his piece, Pierno argues that the scope of the web developers role is shrinking. However, I don't think that's quite right. Yes, it might be fragmenting, but the scope of, say, full-stack development, is huge. In fact, full-stack developers need to know a huge range of technologies and tools. If they're to differentiate themselves in the job market, as Pierno suggests they should, they need to know machine learning, they need to know mobile, databases, and maybe even Blockchain. From this perspective, it's not hard to see how the 'web' part of web development might be dying. To some extent, as the web becomes more ubiquitous and less of a rarefied 'space' in people's lives, the more we have to get into the detail of how we utilize the technologies around it. Web development's decline is design's gain If web development as a discipline is dying, that's only going to make design more important. If, as we saw earlier, building websites is going to become a free for all for just about anyone with an internet connection and enough confidence, standards and quality might start to slip. That means the value of someone who understands good design will be higher than ever. As a web developer you might disappear into the ether of everyone else out there. But if you market yourself as a designer, someone who understands the intricacies of UI and UX implicitly, you immediately start to look a little different. Think of it like a sandwich shop - anyone can start making sandwiches. But to make a great sandwich shop, the type that wins awards and the type that people want to Instagram, requires extra attention to detail. It demands more skill and more culinary awareness. Maybe web development is dying, but maybe it just needs to change Clearly, what we call web development is very different in 2018 than what it was 5 years ago. There are a huge number of reasons for this, but perhaps the most important is that it doesn't really make sense to talk about 'the web' any more. Because 'the web' is now an outdated concept, perhaps web development needs to die. Maybe we're holding on to something which is only going to play into the hands of poor design and poor quality software. 
It's might even damage the careers of talented engineers and designers. You could make a pretty good comparison between 'the web' and 'big data'. Even reading those words feels oddly outdated today, but they're still at the center of the tech landscape. Big data, for example, is everywhere - it's hard to imagine our lives outside of it, but it doesn't make sense to talk about it in the abstract. Instead, what's interesting is how it's applied, how engineers make data accessible, usable and secure. The same is true of the web. It's not dead, but it has certainly assumed a slightly different form. And web development might well be dying, but the world will always need developers and designers. It's simply time to adapt. Read next Why is everyone talking about JavaScript fatigue? Is novelty ruining web development?
Read more
  • 0
  • 4
  • 32965

article-image-drones-everything-you-wanted-know
Aarthi Kumaraswamy
11 Apr 2018
10 min read
Save for later

Drones: Everything you ever wanted to know!

Aarthi Kumaraswamy
11 Apr 2018
10 min read
When you were a kid, did you have fun with paper planes? They were so much fun. So, what is a gliding drone? Well, before answering this, let me be clear that there are other types of drones, too. We will know all common types of drones soon, but before doing that, let's find out what a drone first. Drones are commonly known as Unmanned Aerial Vehicles (UAV). A UAV is a flying thing without a human pilot on it. Here, by thing we mean the aircraft. For drones, there is the Unmanned Aircraft System (UAS), which allows you to communicate with the physical drone and the controller on the ground. Drones are usually controlled by a human pilot, but they can also be autonomously controlled by the system integrated on the drone itself. So what the UAS does, is it communicates between the UAS and UAV. Simply, the system that communicates between the drone and the controller, which is done by the commands of a person from the ground control station, is known as the UAS. Drones are basically used for doing something where humans cannot go or carrying out a mission that is impossible for humans. Drones have applications across a wide spectrum of industries from military, scientific research, agriculture, surveillance, product delivery, aerial photography, recreations, to traffic control. And of course, like any technology or tool it can do great harm when used for malicious purposes like for terrorist attacks and smuggling drugs. Types of drones Classifying drones based on their application Drones can be categorized into the following six types based on their mission: Combat: Combat drones are used for attacking in the high-risk missions. They are also known as Unmanned Combat Aerial Vehicles (UCAV). They carry missiles for the missions. Combat drones are much like planes. The following is a picture of a combat drone: Logistics: Logistics drones are used for delivering goods or cargo. There are a number of famous companies, such as Amazon and Domino's, which deliver goods and pizzas via drones. It is easier to ship cargo with drones when there is a lot of traffic on the streets, or the route is not easy to drive. The following diagram shows a logistic drone: Civil: Civil drones are for general usage, such as monitoring the agriculture fields, data collection, and aerial photography. The following picture is of an aerial photography drone: Reconnaissance: These kinds of drones are also known as mission-control drones. A drone is assigned to do a task and it does it automatically, and usually returns to the base by itself, so they are used to get information from the enemy on the battlefield. These kinds of drones are supposed to be small and easy to hide. The following diagram is a reconnaissance drone for your reference, they may vary depending on the usage: Target and decoy: These kinds of drones are like combat drones, but the difference is, the combat drone provides the attack capabilities for the high-risk mission and the target and decoy drones provide the ground and aerial gunnery with a target that simulates the missile or enemy aircrafts. You can look at the following figure to get an idea what a target and decoy drone looks like: Research and development: These types of drones are used for collecting data from the air. For example, some drones are used for collecting weather data or for providing internet. 
[box type="note" align="" class="" width=""]Also read this interesting news piece on Microsoft committing $5 billion to IoT projects.[/box] Classifying drones based on wing types We can also classify drones by their wing types. There are three types of drones depending on their wings or flying mechanism: Fixed wing: A fixed wing drone has a rigid wing. They look like airplanes. These types of drones have a very good battery life, as they use only one motor (or less than the multiwing). They can fly at a high altitude. They can carry more weight because they can float on air for the wings. There are also some disadvantages of fixed wing drones. They are expensive and require a good knowledge of aerodynamics. They break a lot and training is required to fly them. The launching of the drone is hard and the landing of these types of drones is difficult. The most important thing you should know about the fixed wing drones is they can only move forward. To change the directions to left or right, we need to create air pressure from the wing. We will build one fixed wing drone in this book. I hope you would like to fly one. Single rotor: Single rotor drones are simply like helicopter. They are strong and the propeller is designed in a way that it helps to both hover and change directions. Remember, the single rotor drones can only hover vertically in the air. They are good with battery power as they consume less power than a multirotor. The payload capacity of a single rotor is good. However, they are difficult to fly. Their wing or the propeller can be dangerous if it loosens. Multirotor: Multirotor drones are the most common among the drones. They are classified depending on the number of wings they have, such as tricopter (three propellers or rotors), quadcopter (four rotors), hexacopter (six rotors), and octocopter (eight rotors). The most common multirotor is the quadcopter. The multirotors are easy to control. They are good with payload delivery. They can take off and land vertically, almost anywhere. The flight is more stable than the single rotor and the fixed wing. One of the disadvantages of the multirotor is power consumption. As they have a number of motors, they consume a lot of power. Classifying drones based on body structure We can also classify multirotor drones by their body structure. They can be known by the number of propellers used on them. Some drones have three propellers. They are called tricopters. If there are four propellers or rotors, they are called quadcopters. There are hexacopters and octacopters with six and eight propellers, respectively. The gliding drones or fixed wings do not have a structure like copters. They look like the airplane. The shapes and sizes of the drones vary from purpose to purpose. If you need a spy drone, you will not make a big octacopter right? If you need to deliver a cargo to your friend's house, you can use a multirotor or a single rotor: The Ready to Fly (RTF) drones do not require any assembly of the parts after buying. You can fly them just after buying them. RTF drones are great for the beginners. They require no complex setup or programming knowledge. The Bind N Fly (BNF) drones do not come with a transmitter. This means, if you have bought a transmitter for yourother drone, you can bind it with this type of drone and fly. The problem is that an old model of transmitter might not work with them and the BNF drones are for experienced flyers who have already flown drones with safety, and had the transmitter to test with other drones. 
The Almost Ready to Fly (ARF) drones come with everything needed to fly, but a few parts might be missing that might keep it from flying properly. Just kidding! They come with all the parts, but you have to assemble them together before flying. You might lose one or two things while assembling. So be careful if you buy ARF drones. I always lose screws or spare small parts of the drones while I assemble. From the name of these types of drones, you can imagine why they are called by this name. The ARF drones require a lot of patience to assemble and bind to fly. Just be calm while assembling. Don't throw away the user manuals like me. You might end up with either pocket screws or lack of screws or parts. Key components for building a drone To build a drone, you will need a drone frame, motors, radio transmitter and reciever, battery, battery adapters/chargers, connectors and modules to make the drone smarter. Drone frames Basically, the drone frame is the most important component to build a drone. It helps to mount the motors, battery, and other parts on it. If you want to build a copter or a glide, you first need to decide what frame you will buy or build. For example, if you choose a tricopter, your drone will be smaller, the number of motors will be three, the number of propellers will be three, the number of ESC will be three, and so on. If you choose a quadcopter it will require four of each of the earlier specifications. For the gliding drone, the number of parts will vary. So, choosing a frame is important as the target of making the drone depends on the body of the drone. And a drone's body skeleton is the frame. In this book, we will build a quadcopter, as it is a medium size drone and we can implement all the things we want on it. If you want to buy the drone frame, there are lots of online shops who sell ready-made drone frames. Make sure you read the specification before buying the frames. While buying frames, always double check the motor mount and the other screw mountings. If you cannot mount your motors firmly, you will lose the stability of the drone in the air. About the aerodynamics of the drone flying, we will discuss them soon. The following figure shows a number of drone frames. All of them are pre-made and do not need any calculation to assemble. You will be given a manual which is really easy to follow: You should also choose a material which light but strong. My personal choice is carbon fiber. But if you want to save some money, you can buy strong plastic frames. You can also buy acrylic frames. When you buy the frame, you will get all the parts of the frame unassembled, as mentioned earlier. The following picture shows how the frame will be shipped to you, if you buy from the online shop: If you want to build your own frame, you will require a lot of calculations and knowledge about the materials. You need to focus on how the assembling will be done, if you build a frame by yourself. The thrust of the motor after mounting on the frame is really important. It will tell you whether your drone will float in the air or fall down or become imbalanced. To calculate the thrust of the motor, you can follow the equation that we will speak about next. If P is the payload capacity of your drone (how much your drone can lift. I'll explain how you can find it), M is the number of motors, W is the weight of the drone itself, and H is the hover throttle % (will be explained later). 
Then, the thrust T that each motor must produce will be as follows:

T = (W + P) / (M × H)

The drone's payload capacity can be found by rearranging the same relationship:

P = (T × M × H) − W

For example, a quadcopter (M = 4) that weighs W = 1.2 kg and hovers at H = 50% throttle (0.5), with motors rated at T = 0.8 kg of thrust each, can lift a payload of roughly P = (0.8 × 4 × 0.5) − 1.2 = 0.4 kg.

[box type="note" align="" class="" width=""]Remember to keep the frame balanced so that the center of gravity stays at the center of the drone.[/box]

Check out the book, Building Smart Drones with ESP8266 and Arduino by Syed Omar Faruk Towaha, to read about the other components that go into making a drone and then build some fun drone projects, from follow-me drones, to drones that take selfies, to those that race and glide.

Check out other posts on IoT:

How IoT is going to change tech teams
AWS Sydney Summit 2018 is all about IoT
25 Datasets for Deep Learning in IoT

Implementing autocompletion in a React Material UI application [Tutorial]

Bhagyashree R
16 May 2019
14 min read
Web applications typically provide autocomplete input fields when there are too many choices to select from. Autocomplete fields are like text input fields—as users start typing, they are given a smaller list of choices based on what they've typed. Once the user is ready to make a selection, the actual input is filled with components called Chips—especially relevant when the user needs to be able to make multiple selections. In this article, we will start by building an Autocomplete component. Then we will move on to implementing multi-value selection and see how to better serve the autocomplete data through an API. To help our users better understand the results we will also implement a feature that highlights the matched portion of the string value. This article is taken from the book React Material-UI Cookbook by Adam Boduch. This book will serve as your ultimate guide to building compelling user interfaces with React and Material Design. To follow along with the examples implemented in this article, you can download the code from the book’s GitHub repository. Building an Autocomplete component Material-UI doesn't actually come with an Autocomplete component. The reason is that, since there are so many different implementations of autocomplete selection components in the React ecosystem already, it doesn't make sense to provide another one. Instead, you can pick an existing implementation and augment it with Material-UI components so that it can integrate nicely with your Material-UI application. How to do it? You can use the Select component from the react-select package to provide the autocomplete functionality that you need. You can use Select properties to replace key autocomplete components with Material-UI components so that the autocomplete matches the look and feel of the rest of your app. Let's make a reusable Autocomplete component. The Select component allows you to replace certain aspects of the autocomplete experience. In particular, the following are the components that you'll be replacing: Control: The text input component to use Menu: A menu with suggestions, displayed when the user starts typing NoOptionsMessage: The message that's displayed when there aren't any suggestions to display Option: The component used for each suggestion in Menu Placeholder: The placeholder text component for the text input SingleValue: The component for showing a value once it's selected ValueContainer: The component that wraps SingleValue IndicatorSeparator: Separates buttons on the right side of the autocomplete ClearIndicator: The component used for the button that clears the current value DropdownIndicator: The component used for the button that shows Menu Each of these components is replaced with Material-UI components that change the look and feel of the autocomplete. Moreover, you'll have all of this as new Autocomplete components that you can reuse throughout your app. Let's look at the result before diving into the implementation of each replacement component. Following is what you'll see when the screen first loads: If you click on the down arrow, you'll see a menu with all the values, as follows: Try typing tor into the autocomplete text field, as follows: If you make a selection, the menu is closed and the text field is populated with the selected value, as follows: You can change your selection by opening the menu and selecting another value, or you can clear the selection by clicking on the clear button to the right of the text. How does it work? 
Let's break down the source by looking at the individual components that make up the Autocomplete component and replacing pieces of the Select component. Then, we'll look at the final Autocomplete component. Text input control Here's the source for the Control component: const inputComponent = ({ inputRef, ...props }) => ( <div ref={inputRef} {...props} /> ); const Control = props => ( <TextField fullWidth InputProps={{ inputComponent, inputProps: { className: props.selectProps.classes.input, inputRef: props.innerRef, children: props.children, ...props.innerProps } }} {...props.selectProps.textFieldProps} /> ); The inputComponent() function is a component that passes the inputRef value—a reference to the underlying input element—to the ref prop. Then, inputComponent is passed to InputProps to set the input component used by TextField. This component is a little bit confusing because it's passing references around and it uses a helper component for this purpose. The important thing to remember is that the job of Control is to set up the Select component to use a Material-UITextField component. Options menu Here's the component that displays the autocomplete options when the user starts typing or clicks on the down arrow: const Menu = props => ( <Paper square className={props.selectProps.classes.paper} {...props.innerProps} > {props.children} </Paper> ); The Menu component renders a Material-UI Paper component so that the element surrounding the options is themed accordingly. No options available Here's the NoOptionsMessage component. It is rendered when there aren't any autocomplete options to display, as follows: const NoOptionsMessage = props => ( <Typography color="textSecondary" className={props.selectProps.classes.noOptionsMessage} {...props.innerProps} > {props.children} </Typography> ); This renders a Typography component with textSecondary as the color property value. Individual option Individual options that are displayed in the autocomplete menu are rendered using the MenuItem component, as follows: const Option = props => ( <MenuItem buttonRef={props.innerRef} selected={props.isFocused} component="div" style={{ fontWeight: props.isSelected ? 500 : 400 }} {...props.innerProps} > {props.children} </MenuItem> ); The selected and style properties alter the way that the item is displayed, based on the isSelected and isFocused properties. The children property sets the value of the item. Placeholder text The Placeholder text of the Autocomplete component is shown before the user types anything or makes a selection, as follows: const Placeholder = props => ( <Typography color="textSecondary" className={props.selectProps.classes.placeholder} {...props.innerProps} > {props.children} </Typography> ); The Material-UI Typography component is used to theme the Placeholder text. SingleValue Once again, the Material-UI Typography component is used to render the selected value from the menu within the autocomplete input, as follows: const SingleValue = props => ( <Typography className={props.selectProps.classes.singleValue} {...props.innerProps} > {props.children} </Typography> ); ValueContainer The ValueContainer component is used to wrap the SingleValue component with a div and the valueContainer CSS class, as follows: const ValueContainer = props => ( <div className={props.selectProps.classes.valueContainer}> {props.children} </div> ); IndicatorSeparator By default, the Select component uses a pipe character as a separator between the buttons on the right side of the autocomplete menu. 
Since they're going to be replaced by Material-UI button components, this separator is no longer necessary, as follows: const IndicatorSeparator = () => null; By having the component return null, nothing is rendered. Clear option indicator This button is used to clear any selection made previously by the user, as follows: const ClearIndicator = props => ( <IconButton {...props.innerProps}> <CancelIcon /> </IconButton> ); The purpose of this component is to use the Material-UI IconButton component and to render a Material-UI icon. The click handler is passed in through innerProps. Show menu indicator Just like the ClearIndicator component, the DropdownIndicator component replaces the button used to show the autocomplete menu with an icon from Material-UI, as follows: const DropdownIndicator = props => ( <IconButton {...props.innerProps}> <ArrowDropDownIcon /> </IconButton> ); Styles Here are the styles used by the various sub-components of the autocomplete: const useStyles = makeStyles(theme => ({ root: { flexGrow: 1, height: 250 }, input: { display: 'flex', padding: 0 }, valueContainer: { display: 'flex', flexWrap: 'wrap', flex: 1, alignItems: 'center', overflow: 'hidden' }, noOptionsMessage: { padding: `${theme.spacing(1)}px ${theme.spacing(2)}px` }, singleValue: { fontSize: 16 }, placeholder: { position: 'absolute', left: 2, fontSize: 16 }, paper: { position: 'absolute', zIndex: 1, marginTop: theme.spacing(1), left: 0, right: 0 } })); The Autocomplete Finally, following is the Autocomplete component that you can reuse throughout your application: Autocomplete.defaultProps = { isClearable: true, components: { Control, Menu, NoOptionsMessage, Option, Placeholder, SingleValue, ValueContainer, IndicatorSeparator, ClearIndicator, DropdownIndicator }, options: [ { label: 'Boston Bruins', value: 'BOS' }, { label: 'Buffalo Sabres', value: 'BUF' }, { label: 'Detroit Red Wings', value: 'DET' }, { label: 'Florida Panthers', value: 'FLA' }, { label: 'Montreal Canadiens', value: 'MTL' }, { label: 'Ottawa Senators', value: 'OTT' }, { label: 'Tampa Bay Lightning', value: 'TBL' }, { label: 'Toronto Maple Leafs', value: 'TOR' }, { label: 'Carolina Hurricanes', value: 'CAR' }, { label: 'Columbus Blue Jackets', value: 'CBJ' }, { label: 'New Jersey Devils', value: 'NJD' }, { label: 'New York Islanders', value: 'NYI' }, { label: 'New York Rangers', value: 'NYR' }, { label: 'Philadelphia Flyers', value: 'PHI' }, { label: 'Pittsburgh Penguins', value: 'PIT' }, { label: 'Washington Capitals', value: 'WSH' }, { label: 'Chicago Blackhawks', value: 'CHI' }, { label: 'Colorado Avalanche', value: 'COL' }, { label: 'Dallas Stars', value: 'DAL' }, { label: 'Minnesota Wild', value: 'MIN' }, { label: 'Nashville Predators', value: 'NSH' }, { label: 'St. Louis Blues', value: 'STL' }, { label: 'Winnipeg Jets', value: 'WPG' }, { label: 'Anaheim Ducks', value: 'ANA' }, { label: 'Arizona Coyotes', value: 'ARI' }, { label: 'Calgary Flames', value: 'CGY' }, { label: 'Edmonton Oilers', value: 'EDM' }, { label: 'Los Angeles Kings', value: 'LAK' }, { label: 'San Jose Sharks', value: 'SJS' }, { label: 'Vancouver Canucks', value: 'VAN' }, { label: 'Vegas Golden Knights', value: 'VGK' } ] }; The piece that ties all of the previous components together is the components property that's passed to Select. This is actually set as a default property in Autocomplete, so it can be further overridden. The value passed to components is a simple object that maps the component name to its implementation. 
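To make the moving parts easier to picture, here is a minimal, hypothetical usage sketch of the finished Autocomplete component inside a screen. The import path and the TeamPicker name are assumptions for illustration, and the value/onChange/textFieldProps props mirror the ones passed to AsyncSelect later in this article rather than a confirmed API of the book's component:

import React, { useState } from 'react';
import Autocomplete from './Autocomplete'; // assumed path to the component built above

export default function TeamPicker() {
  // Holds the currently selected option ({ label, value }) or null
  const [team, setTeam] = useState(null);

  return (
    <Autocomplete
      value={team}
      onChange={value => setTeam(value)}
      textFieldProps={{ label: 'Team', InputLabelProps: { shrink: true } }}
    />
  );
}

Because the options list ships as a default prop, nothing else is needed to get the team list shown in the screenshots; passing an options prop of your own would override it.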
Selecting autocomplete suggestions In the previous section, you built an Autocomplete component capable of selecting a single value. Sometimes, you need the ability to select multiple values from an Autocomplete component. The good news is that, with a few small additions, the component that you created in the previous section already does most of the work. How to do it? Let's walk through the additions that need to be made in order to support multi-value selection in the Autocomplete component, starting with the new MultiValue component, as follows: const MultiValue = props => ( <Chip tabIndex={-1} label={props.children} className={clsx(props.selectProps.classes.chip, { [props.selectProps.classes.chipFocused]: props.isFocused })} onDelete={props.removeProps.onClick} deleteIcon={<CancelIcon {...props.removeProps} />} /> ); The MultiValue component uses the Material-UI Chip component to render a selected value. In order to pass MultiValue to Select, add it to the components object that's passed to Select: components: { Control, Menu, NoOptionsMessage, Option, Placeholder, SingleValue, MultiValue, ValueContainer, IndicatorSeparator, ClearIndicator, DropdownIndicator }, Now you can use your Autocomplete component for single value selection, or for multi-value selection. You can add the isMulti property with a default value of true to defaultProps, as follows: isMulti: true, Now, you should be able to select multiple values from the autocomplete. How does it work? Nothing looks different about the autocomplete when it's first rendered, or when you show the menu. When you make a selection, the Chip component is used to display the value. Chips are ideal for displaying small pieces of information like this. Furthermore, the close button integrates nicely with it, making it easy for the user to remove individual selections after they've been made. Here's what the autocomplete looks like after multiple selections have been made: API-driven Autocomplete You can't always have your autocomplete data ready to render on the initial page load. Imagine trying to load hundreds or thousands of items before the user can interact with anything. The better approach is to keep the data on the server and supply an API endpoint with the autocomplete text as the user types. Then you only need to load a smaller set of data returned by the API. How to do it? Let's rework the example from the previous section. We'll keep all of the same autocomplete functionality, except that, instead of passing an array to the options property, we'll pass in an API function that returns a Promise. Here's the API function that mocks an API call that resolves a Promise: const someAPI = searchText => new Promise(resolve => { setTimeout(() => { const teams = [ { label: 'Boston Bruins', value: 'BOS' }, { label: 'Buffalo Sabres', value: 'BUF' }, { label: 'Detroit Red Wings', value: 'DET' }, ... ]; resolve( teams.filter( team => searchText && team.label .toLowerCase() .includes(searchText.toLowerCase()) ) ); }, 1000); }); This function takes a search string argument and returns a Promise. The same data that would otherwise be passed to the Select component in the options property is filtered here instead. Think of anything that happens in this function as happening behind an API in a real app. The returned Promise is then resolved with an array of matching items following a simulated latency of one second. 
You also need to add a couple of components to the composition of the Select component (we're up to 13 now!), as follows: const LoadingIndicator = () => <CircularProgress size={20} />; const LoadingMessage = props => ( <Typography color="textSecondary" className={props.selectProps.classes.noOptionsMessage} {...props.innerProps} > {props.children} </Typography> ); The LoadingIndicator component is shown on the right the autocomplete text input. It's using the CircularProgress component from Material-UI to indicate that the autocomplete is doing something. The LoadingMessage component follows the same pattern as the other text replacement components used with Select in this example. The loading text is displayed when the menu is shown, but the Promise that resolves the options is still pending. Lastly, there's the Select component. Instead of using Select, you need to use the AsyncSelect version, as follows: import AsyncSelect from 'react-select/lib/Async'; Otherwise, AsyncSelect works the same as Select, as follows: <AsyncSelect value={value} onChange={value => setValue(value)} textFieldProps={{ label: 'Team', InputLabelProps: { shrink: true } }} {...{ ...props, classes }} /> How does it work? The only difference between a Select autocomplete and an AsyncSelect autocomplete is what happens while the request to the API is pending. Here is what the autocomplete looks like while this is happening: As the user types the CircularProgress component is rendered to the right, while the loading message is rendered in the menu using a Typography component. Highlighting search results When the user starts typing in an autocomplete and the results are displayed in the dropdown, it isn't always obvious how a given item matches the search criteria. You can help your users better understand the results by highlighting the matched portion of the string value. How to do it? You'll want to use two functions from the autosuggest-highlight package to help highlight the text presented in the autocomplete dropdown, as follows: import match from 'autosuggest-highlight/match'; import parse from 'autosuggest-highlight/parse'; Now, you can build a new component that will render the item text, highlighting as and when necessary, as follows: const ValueLabel = ({ label, search }) => { const matches = match(label, search); const parts = parse(label, matches); return parts.map((part, index) => part.highlight ? ( <span key={index} style={{ fontWeight: 500 }}> {part.text} </span> ) : ( <span key={index}>{part.text}</span> ) ); }; The end result is that ValueLabel renders an array of span elements, determined by the parse() and match() functions. One of the spans will be bolded if part.highlight is true. Now, you can use ValueLabel in the Option component, as follows: const Option = props => ( <MenuItem buttonRef={props.innerRef} selected={props.isFocused} component="div" style={{ fontWeight: props.isSelected ? 500 : 400 }} {...props.innerProps} > <ValueLabel label={props.children} search={props.selectProps.inputValue} /> </MenuItem> ); How does it work? Now, when you search for values in the autocomplete text input, the results will highlight the search criteria in each item, as follows: This article helped you implement autocompletion in your Material UI React application.  Then we implemented multi-value selection and saw how to better serve the autocomplete data through an API endpoint. If you found this post useful, do check out the book, React Material-UI Cookbook by Adam Boduch.  
This book will help you build modern-day applications by implementing Material Design principles in React applications using Material-UI. How to create a native mobile app with React Native [Tutorial] Reactive programming in Swift with RxSwift and RxCocoa [Tutorial] How to build a Relay React App [Tutorial]

Discovering network hosts with 'TCP SYN' and 'TCP ACK' ping scans in Nmap [Tutorial]

Savia Lobo
09 Nov 2018
8 min read
Ping scans are used for detecting live hosts in networks. Nmap's default ping scan (-sP) sends TCP SYN, TCP ACK, and ICMP packets to determine if a host is responding, but if a firewall is blocking these requests, it will be treated as offline. Fortunately, Nmap supports a scanning technique named the TCP SYN ping scan that is very handy to probe different ports in an attempt to determine if a host is online or at least has more permissive filtering rules. Similar to the TCP SYN ping scan, the TCP ACK ping scan is used to determine if a host is responding. It can be used to detect hosts that block SYN packets or ICMP echo requests, but it will most likely be blocked by modern firewalls that track connection states because it sends bogus TCP ACK packets associated with non-existing connections. This article is an excerpt taken from the book Nmap: Network Exploration and Security Auditing Cookbook - Second Edition written by Paulino Calderon. In this book, you will be introduced to the most powerful features of Nmap and related tools, common security auditing tasks for local and remote networks, web applications, databases, mail servers and much more. This post will talk about the TCP SYN and TCP ACK ping scans and its related options. Discovering network hosts with TCP SYN ping scans How to do it... Open your terminal and enter the following command: # nmap -sn -PS <target> You should see the list of hosts found in the target range using TCP SYN ping scanning: # nmap -sn -PS 192.1.1/24 Nmap scan report for 192.168.0.1 Host is up (0.060s latency). Nmap scan report for 192.168.0.2 Host is up (0.0059s latency). Nmap scan report for 192.168.0.3 Host is up (0.063s latency). Nmap scan report for 192.168.0.5 Host is up (0.062s latency). Nmap scan report for 192.168.0.7 Host is up (0.063s latency). Nmap scan report for 192.168.0.22 Host is up (0.039s latency). Nmap scan report for 192.168.0.59 Host is up (0.00056s latency). Nmap scan report for 192.168.0.60 Host is up (0.00014s latency). Nmap done: 256 IP addresses (8 hosts up) scanned in 8.51 seconds How it works... The -sn option tells Nmap to skip the port scanning phase and only perform host discovery. The -PS flag tells Nmap to use a TCP SYN ping scan. This type of ping scan works in the following way: Nmap sends a TCP SYN packet to port 80. If the port is closed, the host responds with an RST packet. If the port is open, the host responds with a TCP SYN/ACK packet indicating that a connection can be established. Afterward, an RST packet is sent to reset this connection. The CIDR /24 in 192.168.1.1/24 is used to indicate that we want to scan all of the 256 IPs in our local network. There's  more... TCP SYN ping scans can be very effective to determine if hosts are alive on networks. Although Nmap sends more probes by default, it is configurable. Now it is time to learn more about discovering hosts with TCP SYN ping scans. Privileged versus unprivileged TCP SYN ping scan Running a TCP SYN ping scan as an unprivileged user who can't send raw packets makes Nmap use the connect() system call to send the TCP SYN packet. In this case, Nmap distinguishes a SYN/ACK packet when the function returns successfully, and an RST packet when it receives an ECONNREFUSED error message. Firewalls and traffic filtering A lot of systems are protected by some kind of traffic filtering, so it is important to always try different ping scanning techniques. 
In the following example, we will scan a host online that gets marked as offline, but in fact, was just behind some traffic filtering system that did not allow TCP ACK or ICMP requests: # nmap -sn 0xdeadbeefcafe.com Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn Nmap done: 1 IP address (0 hosts up) scanned in 4.68 seconds # nmap -sn -PS 0xdeadbeefcafe.com Nmap scan report for 0xdeadbeefcafe.com (52.20.139.72) Host is up (0.062s latency). rDNS record for 52.20.139.72: ec2-52-20-139-72.compute- 1.amazonaws.com Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds During a TCP SYN ping scan, Nmap uses the SYN/ACK and RST responses to determine if the host is responding. It is important to note that there are firewalls configured to drop RST packets. In this case, the TCP SYN ping scan will fail unless we send the probes to an open port: # nmap -sn -PS80 <target> You can set the port list to be used with -PS (port list or range) as follows: # nmap -sn -PS80,21,53 <target> # nmap -sn -PS1-1000 <target> # nmap -sn -PS80,100-1000 <target> Discovering hosts with TCP ACK ping scans How to do it... Open your terminal and enter the following command: # nmap -sn -PA <target> The result is a list of hosts that responded to the TCP ACK packets sent, therefore, online: # nmap -sn -PA 192.168.0.1/24 Nmap scan report for 192.168.0.1 Host is up (0.060s latency). Nmap scan report for 192.168.0.60 Host is up (0.00014s latency). Nmap done: 256 IP addresses (2 hosts up) scanned in 6.11 seconds How it works... The -sn option tells Nmap to skip the port scan phase and only perform host discovery. And the -PA flag tells Nmap to use a TCP ACK ping scan. A TCP ACK ping scan works in the following way: Nmap sends an empty TCP packet with the ACK flag set to port 80 (the default port, but an alternate port list can be assigned). If the host is offline, it should not respond to this request. Otherwise, it will return an RST packet and will be treated as online. RST packets are sent because the TCP ACK packet sent is not associated with an existing valid connection. There's more... TCP ACK ping scans use port 80 by default, but this behavior can be configured. This scanning technique also requires privileges to create raw packets. Now we will learn more about the scan limitations and configuration options. Privileged versus unprivileged TCP ACK ping scans TCP ACK ping scans need to run as a privileged user. Otherwise a connect() system call is used to send an empty TCP SYN packet. Hence, TCP ACK ping scans will not use the TCP ACK technique, previously discussed, as an unprivileged user, and it will perform a TCP SYN ping scan instead. Selecting ports in TCP ACK ping scans In addition, you can select the ports to be probed using this technique, by listing them after the -PA flag: # nmap -sn -PA21,22,80 <target> # nmap -sn -PA80-150 <target> # nmap -sn -PA22,1000-65535 <target> Discovering hosts with UDP ping scans Ping scans are used to determine if a host is responding and can be considered online. UDP ping scans have the advantage of being capable of detecting systems behind firewalls with strict TCP filtering but that left UDP exposed. This next recipe describes how to perform a UDP ping scan with Nmap and its related options. How to do it... 
Open your terminal and enter the following command: # nmap -sn -PU <target> Nmap will determine if the target is reachable using a UDP ping scan: # nmap -sn -PU scanme.nmap.org Nmap scan report for scanme.nmap.org (45.33.32.156) Host is up (0.13s latency). Other addresses for scanme.nmap.org (not scanned): 2600:3c01::f03c:91ff:fe18:bb2f Nmap done: 1 IP address (1 host up) scanned in 7.92 seconds How it works... The -sn option tells Nmap to skip the port scan phase but perform host discovery. In combination with the -PU flag, Nmap uses UDP ping scanning. The technique used by a UDP ping scan works as follows: Nmap sends an empty UDP packet to port 40125. If the host is online, it should return an ICMP port unreachable error. If the host is offline, various ICMP error messages could be returned. There's more... Services that do not respond to empty UDP packets will generate false positives when probed. These services will simply ignore the UDP packets, and the host will be incorrectly marked as offline. Therefore, it is important that we select ports that are closed for better results. Selecting ports in UDP ping scans To specify the ports to be probed, add them after the -PU flag, as follows: # nmap -sn -PU1337,11111 scanme.nmap.org # nmap -sn -PU1337 scanme.nmap.org # nmap -sn -PU1337-1339 scanme.nmap.org This in this post we saw how network hosts can be discovered using TCP SYN and TCP ACK ping scans. If you've enjoyed reading this post and want to learn how to discover hosts using other ping scans such as ICMP, SCTP INIT, IP protocol, and others head over to our book, Nmap: Network Exploration and Security Auditing Cookbook - Second Edition. Docker Multi-Host Networking Experiments on Amazon AWS Hosting the service in IIS using the TCP protocol FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack

How to attack an infrastructure using VoIP exploitation [Tutorial]

Savia Lobo
03 Nov 2018
9 min read
Voice over IP (VoIP) is pushing business communications to a new level of efficiency and productivity. VoIP-based systems are facing security risks on a daily basis. Although a lot of companies are focusing on the VoIP quality of service, they ignore the security aspects of the VoIP infrastructure, which makes them vulnerable to dangerous attacks. This tutorial is an extract taken from the book Advanced Infrastructure Penetration Testing written by Chiheb Chebbi. In this book, you will explore exploitation abilities such as offensive PowerShell tools and techniques, CI servers, database exploitation, Active Directory delegation, and much more. In today's post, you will learn how to penetrate the VoIP infrastructure. Like any other penetration testing, to exploit the VoIP infrastructure, we need to follow a strategic operation based on a number of steps. Before attacking any infrastructure, we've learned that we need to perform footprinting, scanning, and enumeration before exploiting it, and that is exactly what we are going to do with VoIP. To perform VoIP information gathering, we need to collect as much useful information as possible about the target. As a start, you can do a simple search online. For example, job announcements could be a valuable source of information. For example, the following job description gives the attacker an idea about the VoIP: Later, an attacker could search for vulnerabilities out there to try exploiting that particular system. Searching for phone numbers could also be a smart move, to have an idea of the target based on its voicemail, because each vendor has a default one. If the administrator has not changed it, listening to the voicemail can let you know about your target. If you want to have a look at some of the default voicemails, check http://www.hackingvoip.com/voicemail.html. It is a great resource for learning a great deal about hacking VoIP. Google hacking is an amazing technique for searching for information and online portals. We discussed Google hacking using Dorks. The following demonstration is the output of this Google Dork—in  URL: Network Configuration Cisco: You can find connected VoIP devices using the Shodan.io search engine: VoIP devices are generally connected to the internet. Thus, they can be reached by an outsider. They can be exposed via their web interfaces; that is why, sometimes leaving installation files exposed could be dangerous, because using a search engine can lead to indexing the portal. The following screenshot is taken from an online Asterisk management portal: And this screenshot is taken from a configuration page of an exposed website, using a simple search engine query: After collecting juicy information about the target, from an attacker perspective, we usually should perform scanning. Using scanning techniques is necessary during this phase. Carrying out Host Discovery and Nmap scanning is a good way of scanning the infrastructure to search for VoIP devices. Scanning can lead us to discover VoIP services. For example, we saw the -sV option in Nmap to check services. In VoIP, if port 2000 is open, it is a Cisco CallManager because the SCCP protocol uses that port as default, or if there is a UDP 5060 port, it is SIP. The -O Nmap option could be useful for identifying the running operating system, as there are a lot of VoIP devices that are running on a specific operating system, such as Cisco embedded. You know what to do now. After footprinting and scanning, we need to enumerate the target. 
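Before moving on to enumeration, here is roughly what the scanning step described above could look like in practice. This is an illustrative sketch rather than a command from the original text: the target address is a placeholder, and the port list simply combines the SCCP (TCP 2000) and SIP (5060/5061) ports mentioned earlier:

# Combine a UDP scan (-sU) with a TCP SYN scan (-sS), detect service
# versions (-sV) and guess the operating system (-O) on typical VoIP ports
nmap -sU -sS -sV -O -p U:5060,T:2000,5060,5061 192.168.1.50

As noted above, an open TCP port 2000 usually points at Cisco CallManager (SCCP), while port 5060 indicates SIP.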
As you can see, when exploiting an infrastructure we generally follow the same methodological steps. Banner grabbing is a well-known technique in enumeration, and the first step to enumerate a VoIP infrastructure is by starting a banner grabbing move. In order to do that, using the Netcat utility would help you grab the banner easily, or you can simply use the Nmap script named banner: nmap -sV --script=banner <target> For a specific vendor, there are a lot of enumeration tools you can use; EnumIAX is one of them. It is a built-in enumeration tool in Kali Linux to brute force Inter-Asterisk Exchange protocol usernames: Automated Corporate Enumerator (ACE) is another built-in enumeration tool in Kali Linux: svmap is an open source built-in tool in Kali Linux for identifying SIP devices. Type svmap -h and you will get all the available options for this amazing tool: VoIP attacks By now, you have learned the required skills to perform VoIP footprinting, scanning, and enumeration. Let's discover the major VoIP attacks. VoIP is facing multiple threats from different attack vectors. Denial-of-Service Denial-of-Service (DoS) is a threat to the availability of a network. DoS could be dangerous too for VoIP, as ensuring the availability of calls is vital in modern organizations. Not only the availability but also the clearness of calls is a necessity nowadays. To monitor the QoS of VoIP, you can use many tools that are out there; one of them is CiscoWorks QoS Policy Manager 4.1: To measure the quality of VoIP, there are some scoring systems, such as the Mean Opinion Score (MOS)  or the R-value based on several parameters (jitter, latency, and packet loss). Scores of the mean opinion score range from 1 to 5 (bad to very clear) and scores of R-value range from 1 to 100 (bad to very clear). The following screenshot is taken from an analysis of an RTP packet downloaded from the Wireshark website: You can also analyze the RTP jitter graph: VoIP infrastructure can be attacked by the classic DoS attacks. We saw some of them previously: Smurf flooding attack TCP SYN flood attack UDP flooding attack One of the DoS attack tools is iaxflood. It is available in Kali Linux to perform DoS attacks. IAX stands for  Inter-Asterisk Exchange. Open a Kali terminal and type  iaxflood <Source IP> <Destination IP>  <Number of packets>: The VoIP infrastructure can not only be attacked by the previous attacks attackers can perform packet Fragmentation and Malformed Packets to attack the infrastructure, using fuzzing tools. Eavesdropping Eavesdropping is one of the most serious VoIP attacks. It lets attackers take over your privacy, including your calls. There are many eavesdropping techniques; for example, an attacker can sniff the network for TFTP configuration files while they contain a password. The following screenshot describes an analysis of a TFTP capture: Also, an attacker can harvest phone numbers and build a valid phone numbers databases, after recording all the outgoing and ongoing calls. Eavesdropping does not stop there, attackers can record your calls and even know what you are typing using the Dual-Tone Multi-Frequency (DTMF). You can use the DTMF decoder/encoder from this link http://www.polar-electric.com/DTMF/: Voice Over Misconfigured Internet Telephones (VOMIT) is a great utility to convert Cisco IP Phone conversations into WAV files. You can download it from its official website http://vomit.xtdnet.nl/: SIP attacks Another attacking technique is SIP rogues. We can perform two types of SIP rogues. 
From an attacker's perspective, we can implement the following: Rogue SIP B2BUA: In  this attacking technique, the attacker mimics SIP B2BUA: SIP rogue as a proxy: Here, the attacker mimics a SIP proxy:   SIP registration hijacking SIP registration hijacking is a serious VoIP security problem. Previously, we saw that before establishing a SIP session, there is a registration step. Registration can be hijacked by attackers. During a SIP registration hijacking attack, the attacker disables a normal user by a Denial of Service, for example, and simply sends a registration request with his own IP address instead of that users because, in SIP, messages are transferred clearly, so SIP does not ensure the integrity of signalling messages: If you are a Metasploit enthusiast, you can try many other SIP modules. Open a Metasploit console by typing msfconsole and search SIP modules using search SIP: To use a specific SIP module, simply type use <module >. The following interface is an example of SIP module usage: Spam over Internet Telephony Spam over Internet Telephony (SPIT), sometimes called Voice spam, is like email spam, but it affects VoIP. To perform a SPIT attack, you can use a generation tool called spitter. Embedding malware Malware is a major threat to VoIP infrastructure. Your insecure VoIP endpoints can be exploited by different types of malware, such as Worms and VoIP Botnets. Softphones are also a highly probable target for attackers. Compromising your softphone could be very dangerous because if an attacker exploits it, they can compromise your VoIP network. Malware is not the only threat against VoIP endpoints. VoIP firmware is a potential attack vector for hackers. Firmware hacking can lead to phones being compromised. Viproy – VoIP penetration testing kit Viproy VoIP penetration testing kit (v4)  is a VoIP and unified communications services pentesting tool presented at Black Hat Arsenal USA 2014 by Fatih Ozavci: To download this project, clone it from its official repository, https://github.com/fozavci/viproy-voipkit: # git clone https://github.com/fozavci/viproy-voipkit. The following project contains many modules to test SIP and Skinny protocols: To use them, copy the lib, modules, and data folders to a Metasploit folder in your system. Thus, in  this article, we demonstrated how to exploit the VoIP infrastructure. We explored the major VoIP attacks and how to defend against them, in addition to the tools and utilities most commonly used by penetration testers. If you've enjoyed reading this, do check out Advanced Infrastructure Penetration Testing to discover post-exploitation tips, tools, and methodologies to help your organization build an intelligent security system. Managing a VoIP Solution with Active Directory Depends On Your Needs Opus 1.3, a popular FOSS audio codec with machine learning and VR support, is now generally available Approaching a Penetration Test Using Metasploit

Extending OpenAI Gym environments with Wrappers and Monitors [Tutorial]

Packt Editorial Staff
17 Jul 2018
9 min read
In this article we are going to discuss two OpenAI Gym functionalities; Wrappers and Monitors. These functionalities are present in OpenAI to make your life easier and your codes cleaner. It provides you these convenient frameworks to extend the functionality of your existing environment in a modular way and get familiar with an agent's activity. So, let's take a quick overview of these classes. This article is an extract taken from the book, Deep Reinforcement Learning Hands-On, Second Edition written by, Maxim Lapan. What are Wrappers? Very frequently, you will want to extend the environment's functionality in some generic way. For example, an environment gives you some observations, but you want to accumulate them in some buffer and provide to the agent the N last observations, which is a common scenario for dynamic computer games, when one single frame is just not enough to get full information about the game state. Another example is when you want to be able to crop or preprocess an image's pixels to make it more convenient for the agent to digest, or if you want to normalize reward scores somehow. There are many such situations which have the same structure: you'd like to “wrap” the existing environment and add some extra logic doing something. Gym provides you with a convenient framework for these situations, called a Wrapper class. How does a wrapper work? The class structure is shown on the following diagram. The Wrapper class inherits the Env class. Its constructor accepts the only argument: the instance of the Env class to be “wrapped”. To add extra functionality, you need to redefine the methods you want to extend like step() or reset(). The only requirement is to call the original method of the superclass. Figure 1: The hierarchy of Wrapper classes in Gym. To handle more specific requirements, like a Wrapper which wants to process only observations from the environment, or only actions, there are subclasses of Wrapper which allow filtering of only a specific portion of information. They are: ObservationWrapper: You need to redefine its observation(obs) method. Argument obs is an observation from the wrapped environment, and this method should return the observation which will be given to the agent. RewardWrapper: Exposes the method reward(rew), which could modify the reward value given to the agent. ActionWrapper: You need to override the method action(act) which could tweak the action passed to the wrapped environment to the agent. Now let’s implement some wrappers To make it slightly more practical, let's imagine a situation where we want to intervene in the stream of actions sent by the agent and, with a probability of 10%, replace the current action with random one. By issuing the random actions, we make our agent explore the environment and from time to time drift away from the beaten track of its policy. This is an easy thing to do using the ActionWrapper class. import gym from typing import TypeVar import random Action = TypeVar('Action') class RandomActionWrapper(gym.ActionWrapper):     def __init__(self, env, epsilon=0.1):         super(RandomActionWrapper, self).__init__(env)         self.epsilon = epsilon Here we initialize our wrapper by calling a parent's __init__ method and saving epsilon (a probability of a random action). 
def action(self, action):         if random.random() < self.epsilon:             print("Random!")            return self.env.action_space.sample()        return action This is a method that we need to override from a parent's class to tweak the agent's actions. Every time we roll the die, with the probability of epsilon, we sample a random action from the action space and return it instead of the action the agent has sent to us. Please note, by using action_space and wrapper abstractions, we were able to write abstract code which will work with any environment from the Gym. Additionally, we print the message every time we replace the action, just to check that our wrapper is working. In production code, of course, this won't be necessary. if __name__ == "__main__":    env = RandomActionWrapper(gym.make("CartPole-v0")) Now it's time to apply our wrapper. We create a normal CartPole environment and pass it to our wrapper constructor. From here on we use our wrapper as a normal Env instance, instead of the original CartPole. As the Wrapper class inherits the Env class and exposes the same interface, we can nest our wrappers in any combination we want. This is a powerful, elegant and generic solution: obs = env.reset()    total_reward = 0.0    while True:        obs, reward, done, _ = env.step(0)        total_reward += reward        if done:            break    print("Reward got: %.2f" % total_reward) Here is almost the same code, except that every time we issue the same action: 0. Our agent is dull and always does the same thing. By running the code, you should see that the wrapper is indeed working: rl_book_samples/ch02$ python 03_random_actionwrapper.py WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. Random! Random! Random! Random! Reward got: 12.00 If you want, you can play with the epsilon parameter on the wrapper's creation and check that randomness improves the agent's score on average. We should move on and look at another interesting gem hidden inside Gym: Monitor. What is a Monitor? Another class you should be aware of is Monitor. It is implemented like Wrapper and can write information about your agent's performance in a file with optional video recording of your agent in action. Some time ago, it was possible to upload the result of Monitor class' recording to the https://gym.openai.com website and see your agent's position in comparison to other people's results (see thee following screenshot), but, unfortunately, at the end of August 2017, OpenAI decided to shut down this upload functionality and froze all the results. There are several activities to implement an alternative to the original website, but they are not ready yet. I hope this situation will be resolved soon, but at the time of writing it's not possible to check your result against those of others. Just to give you an idea of how the Gym web interface looked, here is the CartPole environment leaderboard: Figure 2: OpenAI Gym web interface with CartPole submissions Every submission in the web interface had details about training dynamics. For example, below is the author's solution for one of Doom's mini-games: Figure 3: Submission dynamics on the DoomDefendLine environment. Despite this, Monitor is still useful, as you can take a look at your agent's life inside the environment. How to add Monitor to your agent So, here is how we add Monitor to our random CartPole agent, which is the only difference (the whole code is in Chapter02/04_cartpole_random_monitor.py). 
if __name__ == "__main__":    env = gym.make("CartPole-v0")    env = gym.wrappers.Monitor(env, "recording") The second argument we're passing to Monitor is the name of the directory it will write the results to. This directory shouldn't exist, otherwise your program will fail with an exception (to overcome this, you could either remove the existing directory or pass the force=True argument to Monitor class' constructor). The Monitor class requires the FFmpeg utility to be present on the system, which is used to convert captured observations into an output video file. This utility must be available, otherwise Monitor will raise an exception. The easiest way to install FFmpeg is by using your system's package manager, which is OS distribution-specific. To start this example, one of three extra prerequisites should be met: The code should be run in an X11 session with the OpenGL extension (GLX) The code should be started in an Xvfb virtual display You can use X11 forwarding in ssh connection The cause of this is video recording, which is done by taking screenshots of the window drawn by the environment. Some of the environment uses OpenGL to draw its picture, so the graphical mode with OpenGL needs to be present. This could be a problem for a virtual machine in the cloud, which physically doesn't have a monitor and graphical interface running. To overcome this, there is a special “virtual” graphical display, called Xvfb (X11 virtual framebuffer), which basically starts a virtual graphical display on the server and forces the program to draw inside it. That would be enough to make Monitor happily create the desired videos. To start your program in the Xvbf environment, you need to have it installed on your machine (it usually requires installing the package xvfb) and run the special script xvfb-run: $ xvfb-run -s "-screen 0 640x480x24" python 04_cartpole_random_monitor.py [2017-09-22 12:22:23,446] Making new env: CartPole-v0 [2017-09-22 12:22:23,451] Creating monitor directory recording [2017-09-22 12:22:23,570] Starting new video recorder writing to recording/openaigym.video.0.31179.video000000.mp4 Episode done in 14 steps, total reward 14.00 [2017-09-22 12:22:26,290] Finished writing results. You can upload them to the scoreboard via gym.upload('recording') As you may see from the log above, video has been written successfully, so you can peek inside one of your agent's sections by playing it. Another way to record your agent's actions is using ssh X11 forwarding, which uses ssh ability to tunnel X11 communications between the X11 client (Python code which wants to display some graphical information) and X11 server (software which knows how to display this information and has access to your physical display). In X11 architecture, the client and the server are separated and can work on different machines. To use this approach, you need the following: X11 server running on your local machine. Linux comes with X11 server as a standard component (all desktop environments are using X11). On a Windows machine you can set up third-party X11 implementations like open source VcXsrv (available in https://sourceforge.net/projects/vcxsrv/). The ability to log into your remote machine via ssh, passing –X command line option: ssh –X servername. This enables X11 tunneling and allows all processes started in this session to use your local display for graphics output. Then you can start a program which uses Monitor class and it will display the agent's actions, capturing the images into a video file. 
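Earlier, ObservationWrapper and RewardWrapper were listed alongside ActionWrapper, but only the latter was implemented. Before summarizing, here is a short sketch of the other two; the class names and the scaling/clipping values are invented for illustration, but the observation() and reward() overrides follow the Gym API described above:

import gym

class ScaledObservationWrapper(gym.ObservationWrapper):
    """Scales every observation by a constant factor before the agent sees it."""
    def __init__(self, env, scale=0.5):
        super(ScaledObservationWrapper, self).__init__(env)
        self.scale = scale

    def observation(self, obs):
        # obs comes from the wrapped environment; the scaled copy goes to the agent
        return obs * self.scale

class ClippedRewardWrapper(gym.RewardWrapper):
    """Clips every reward into the [-1, 1] range."""
    def reward(self, rew):
        return max(-1.0, min(1.0, rew))

if __name__ == "__main__":
    # Wrappers nest in any combination, just like the RandomActionWrapper example
    env = ClippedRewardWrapper(ScaledObservationWrapper(gym.make("CartPole-v0")))
    obs = env.reset()
    obs, reward, done, _ = env.step(env.action_space.sample())
    print("clipped reward: %.2f" % reward)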
To summarize, we discussed the two extra functionalities in an OpenAI Gym; Wrappers and Monitors. To solve complex real world problems in Deep Learning, grab this practical guide Deep Reinforcement Learning Hands-On, Second Edition today. How Reinforcement Learning works How to implement Reinforcement Learning with TensorFlow Top 5 tools for reinforcement learning

Mixing ASP.NET Webforms and ASP.NET MVC

Packt
12 Oct 2009
6 min read
Ever since Microsoft started working on the ASP.NET MVC framework, one of the primary concerns was the framework's ability to re-use as many features as possible from ASP.NET Webforms. In this article by Maarten Balliauw, we will see how we can mix ASP.NET Webforms and ASP.NET MVC in one application and how data is shared between both these technologies. (For more resources on .NET, see here.) Not every ASP.NET MVC web application will be built from scratch. Several projects will probably end up migrating from classic ASP.NET to ASP.NET MVC. The question of how to combine both technologies in one application arises—is it possible to combine both ASP.NET Webforms and ASP.NET MVC in one web application? Luckily, the answer is yes. Combining ASP.NET Webforms and ASP.NET MVC in one application is possible—in fact, it is quite easy. The reason for this is that the ASP.NET MVC framework has been built on top of ASP.NET. There's actually only one crucial difference: ASP.NET lives in System.Web, whereas ASP.NET MVC lives in System.Web, System.Web.Routing, System.Web.Abstractions, and System.Web.Mvc. This means that adding these assemblies as a reference in an existing ASP.NET application should give you a good start on combining the two technologies. Another advantage of the fact that ASP.NET MVC is built on top of ASP.NET is that data can be easily shared between both of these technologies. For example, the Session state object is available in both the technologies, effectively enabling data to be shared via the Session state. Plugging ASP.NET MVC into an existing ASP.NET application An ASP.NET Webforms application can become ASP.NET MVC enabled by following some simple steps. First of all, add a reference to the following three assemblies to your existing ASP.NET application: System.Web.Routing System.Web.Abstractions System.Web.Mvc After adding these assembly references, the ASP.NET MVC folder structure should be created. Because the ASP.NET MVC framework is based on some conventions (for example, controllers are located in Controllers), these conventions should be respected. Add the folder Controllers, Views, and Views | Shared to your existing ASP.NET application. The next step in enabling ASP.NET MVC in an ASP.NET Webforms application is to update the web.config file, with the following code: < ?xml version="1.0"?> <configuration> <system.web> <compilation debug="false"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </assemblies> </compilation> <pages> <namespaces> <add namespace="System.Web.Mvc"/> <add namespace="System.Web.Mvc.Ajax"/> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing"/> <add namespace="System.Linq"/> <add namespace="System.Collections.Generic"/> </namespaces> </pages> <httpModules> <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </httpModules> </system.web> </configuration> Note that your existing ASP.NET Webforms web.config should not be replaced by the above web.config! 
The configured sections should be inserted into an existing web.config file in order to enable ASP.NET MVC. There's one thing left to do: configure routing. This can easily be done by adding the default ASP.NET MVC's global application class contents into an existing (or new) global application class, Global.asax. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Routing; namespace MixingBothWorldsExample { public class Global : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.IgnoreRoute("{resource}.aspx/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults ); } protected void Application_Start() { RegisterRoutes(RouteTable.Routes); } } } This code registers a default ASP.NET MVC route, which will map any URL of the form /Controller/Action/Idinto a controller instance and action method. There's one difference with an ASP.NET MVC application that needs to be noted—a catch-all route is defined in order to prevent a request for ASP.NET Webforms to be routed into ASP.NET MVC. This catch-all route looks like this: routes.IgnoreRoute("{resource}.aspx/{*pathInfo}"); This is basically triggered on every request ending in .aspx. It tells the routing engine to ignore this request and leave it to ASP.NET Webforms to handle things. With the ASP.NET MVC assemblies referenced, the folder structure created, and the necessary configurations in place, we can now start adding controllers and views. Add a new controller in the Controllers folder, for example, the following simpleHomeController: using System.Web.Mvc; namespace MixingBothWorldsExample.Controllers { public class HomeController : Controller { public ActionResult Index() { ViewData["Message"] = "This is ASP.NET MVC!"; return View(); } } } The above controller will simply render a view, and pass it a message through the ViewData dictionary. This view, located in Views | Home | Index.aspx, would look like this: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Index.aspx.cs" Inherits="MixingBothWorldsExample.Views.Home.Index" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <head id="Head1" runat="server"> <title></title> </head> <body> <div> <h1><%=Html.Encode(ViewData["Message"]) %></h1> </div> </body> </html> The above view renders a simple HTML page and renders the ViewData dictionary's message as the page title.
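Since ASP.NET MVC runs on top of ASP.NET, the Session state mentioned at the start of this article is one of the easiest ways to pass data between the two halves of a mixed application. The following sketch is illustrative only — the page, key, and controller names are invented — but it shows a Webforms code-behind writing a value that an MVC controller then reads:

using System;
using System.Web.Mvc;

namespace MixingBothWorldsExample
{
    // Webforms side: code-behind of a hypothetical Default.aspx page
    public partial class Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Store a value in the shared Session state
            Session["LastVisitedPage"] = "Default.aspx";
        }
    }
}

namespace MixingBothWorldsExample.Controllers
{
    // MVC side: a controller action reading the same Session entry
    public class SessionDemoController : Controller
    {
        public ActionResult Index()
        {
            var lastPage = Session["LastVisitedPage"] as string ?? "unknown";
            ViewData["Message"] = "Came from: " + lastPage;
            return View();
        }
    }
}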

Installing jQuery

Packt
04 Jun 2015
25 min read
 In this article by Alex Libby, author of the book Mastering jQuery, we will examine some of the options available to help develop your skills even further. (For more resources related to this topic, see here.) Local or CDN, I wonder…? Which version…? Do I support old IE…? Installing jQuery is a thankless task that has to be done countless times by any developer—it is easy to imagine that person asking some of the questions. It is easy to imagine why most people go with the option of using a Content Delivery Network (CDN) link, but there is more to installing jQuery than taking the easy route! There are more options available, where we can be really specific about what we need to use—throughout this article, we will. We'll cover a number of topics, which include: Downloading and installing jQuery Customizing jQuery downloads Building from Git Using other sources to install jQuery Adding source map support Working with Modernizr as a fallback Intrigued? Let's get started. Downloading and installing jQuery As with all projects that require the use of jQuery, we must start somewhere—no doubt you've downloaded and installed jQuery a thousand times; let's just quickly recap to bring ourselves up to speed. If we browse to http://www.jquery.com/download, we can download jQuery using one of the two methods: downloading the compressed production version or the uncompressed development version. If we don't need to support old IE (IE6, 7, and 8), then we can choose the 2.x branch. If, however, you still have some diehards who can't (or don't want to) upgrade, then the 1.x branch must be used instead. To include jQuery, we just need to add this link to our page: <script src="http://code.jquery.com/jquery-X.X.X.js"></script> Here, X.X.X marks the version number of jQuery or the Migrate plugin that is being used in the page. Conventional wisdom states that the jQuery plugin (and this includes the Migrate plugin too) should be added to the <head> tag, although there are valid arguments to add it as the last statement before the closing <body> tag; placing it here may help speed up loading times to your site. This argument is not set in stone; there may be instances where placing it in the <head> tag is necessary and this choice should be left to the developer's requirements. My personal preference is to place it in the <head> tag as it provides a clean separation of the script (and the CSS) code from the main markup in the body of the page, particularly on lighter sites. I have even seen some developers argue that there is little perceived difference if jQuery is added at the top, rather than at the bottom; some systems, such as WordPress, include jQuery in the <head> section too, so either will work. The key here though is if you are perceiving slowness, then move your scripts to just before the <body> tag, which is considered a better practice. Using jQuery in a development capacity A useful point to note at this stage is that best practice recommends that CDN links should not be used within a development capacity; instead, the uncompressed files should be downloaded and referenced locally. Once the site is complete and is ready to be uploaded, then CDN links can be used. Adding the jQuery Migrate plugin If you've used any version of jQuery prior to 1.9, then it is worth adding the jQuery Migrate plugin to your pages. 
The jQuery Core team made some significant changes to jQuery from this version; the Migrate plugin will temporarily restore the functionality until such time that the old code can be updated or replaced. The plugin adds three properties and a method to the jQuery object, which we can use to control its behavior:

jQuery.migrateWarnings: This is an array of string warning messages that have been generated by the code on the page, in the order in which they were generated. Messages appear in the array only once, even if the condition has occurred multiple times, unless jQuery.migrateReset() is called.

jQuery.migrateMute: Set this property to true in order to prevent console warnings from being generated in the debugging version. If this property is set, the jQuery.migrateWarnings array is still maintained, which allows programmatic inspection without console output.

jQuery.migrateTrace: Set this property to false if you want warnings but don't want traces to appear on the console.

jQuery.migrateReset(): This method clears the jQuery.migrateWarnings array and "forgets" the list of messages that have been seen already.

Adding the plugin is equally simple—all you need to do is add a link similar to this, where X represents the version number of the plugin that is used:

<script src="http://code.jquery.com/jquery-migrate-X.X.X.js"></script>

If you want to learn more about the plugin and obtain the source code, then it is available for download from https://github.com/jquery/jquery-migrate.

Using a CDN

We can equally use a CDN link to provide our jQuery library—the principal link is provided by MaxCDN for the jQuery team, with the current version available at http://code.jquery.com. We can, of course, use CDN links from some alternative sources, if preferred—a reminder of these is as follows:

Google (https://developers.google.com/speed/libraries/devguide#jquery)
Microsoft (http://www.asp.net/ajaxlibrary/cdn.ashx#jQuery_Releases_on_the_CDN_0)
CDNJS (http://cdnjs.com/libraries/jquery/)
jsDelivr (http://www.jsdelivr.com/#%!jquery)

Don't forget, though, that if we need to, we can always save a copy of the file provided on the CDN locally and reference that instead. The jQuery CDN will always have the latest version, although it may take a couple of days for updates to appear via the other links.

Using other sources to install jQuery

Right. Okay, let's move on and develop some code! "What's next?" I hear you ask. Aha! If you thought downloading and installing jQuery from the main site was the only way to do this, then you are wrong! After all, this is about mastering jQuery, so you didn't think I would only talk about something that I am sure you are already familiar with, right? Yes, there are more options available to us to install jQuery than simply using the CDN or main download page. Let's begin by taking a look at using Node. Each demo is based on Windows, as this is the author's preferred platform; alternatives are given, where possible, for other platforms.

Using Node.js to install jQuery

So far, we've seen how to download and reference jQuery, which is to use the download from the main jQuery site or via a CDN. The downside of this method is the manual work required to keep our versions of jQuery up to date! Instead, we can use a package manager to help manage our assets. Node.js is one such system.
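If you are not sure whether Node.js is already present on your machine, a quick check from a command prompt or terminal will confirm it—both commands below are standard and simply print the installed versions (they assume Node.js and npm are on your PATH):

node -v
npm -v

If either command is not recognized, work through the installation steps that follow.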
Let's take a look at the steps that need to be performed in order to get jQuery installed:

We first need to install Node.js—head over to http://www.nodejs.org in order to download the package for your chosen platform; accept all the defaults when working through the wizard (for Mac and PC).

Next, fire up a Node Command Prompt and then change to your project folder. In the prompt, enter this command:

npm install jquery

Node will fetch and install jQuery—it displays a confirmation message when the installation is complete.

You can then reference jQuery by using this link: <name of drive>:\website\node_modules\jquery\dist\jquery.min.js.

Node is now installed and ready for use—although we've installed it in a folder locally, in reality, we will most likely install it within a subfolder of our local web server. For example, if we're running WampServer, we can install it, then copy it into the /wamp/www/js folder, and reference it using http://localhost/js/jquery.min.js.

If you want to take a look at the source of the jQuery Node Package Manager (NPM) package, then check out https://www.npmjs.org/package/jquery.

Using Node to install jQuery makes our work simpler, but at a cost. Node.js (and its package manager, NPM) is primarily aimed at installing and managing JavaScript components and expects packages to follow the CommonJS standard. The downside of this is that there is no scope to manage any of the other assets that are often used within websites, such as fonts, images, CSS files, or even HTML pages. "Why would this be an issue?", I hear you ask. Simple: why make life hard for ourselves when we can manage all of these assets automatically and still use Node?

Installing jQuery using Bower

A relatively new addition to the library is the support for installation using Bower—based on Node, it's a package manager that takes care of the fetching and installing of packages from over the Internet. It is designed to be far more flexible about managing the handling of multiple types of assets (such as images, fonts, and CSS files) and does not interfere with how these components are used within a page (unlike Node). For the purpose of this demo, I will assume that you have already installed it; if not, you will need to revisit it before continuing with the following steps:

Bring up the Node Command Prompt, change to the drive where you want to install jQuery, and enter this command:

bower install jquery

This will download and install the script, displaying confirmation of the version installed when it has completed.

The library is installed in the bower_components folder on your PC. It will look similar to this example, where I've navigated to the jquery subfolder underneath.

By default, Bower will install jQuery in its bower_components folder. Within bower_components/jquery/dist/, we will find an uncompressed version, a compressed release, and a source map file. We can then reference jQuery in our script using this line:

<script src="/bower_components/jquery/jquery.js"></script>

We can take this further though. If we don't want to install the extra files that come with a Bower installation by default, we can simply enter this in a Command Prompt instead to just install the minified version 2.1 of jQuery:

bower install http://code.jquery.com/jquery-2.1.0.min.js

Now, we can be really clever at this point; as Bower uses Node's JSON files to control what should be installed, we can use this to be really selective and set Bower to install additional components at the same time.
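To give a flavour of what "additional components" might look like in practice, here is a minimal bower.json sketch that pulls in jQuery alongside another front-end asset. The package names shown (jquery, modernizr) exist in the Bower registry, but the version ranges are illustrative only:

{
  "name": "my-project",
  "dependencies": {
    "jquery": "~2.1.0",
    "modernizr": "~2.8.0"
  }
}

Running bower install in the folder containing this file will fetch both packages into bower_components in one go.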
Let's take a look and see how this will work—in the following example, we'll use Bower to install both jQuery 2.x and 1.x (the latter to provide support for IE6-8).

In the Node Command Prompt, enter the following command:

bower init

This will prompt you for answers to a series of questions, at which point you can either fill out information or press Enter to accept the defaults.

Look in the project folder; you should find a bower.json file within. Open it in your favorite text editor and then alter the code as shown here:

{
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "dependencies": {
    "jquery-legacy": "jquery#1.11.1",
    "jquery-modern": "jquery#2.1.0"
  }
}

At this point, you have a bower.json file that is ready for use. Bower is built on top of Git, so in order to install jQuery using your file, you would normally need to publish it to the Bower repository. Instead, you can install an additional Bower package, which will allow you to install your custom package without the need to publish it to the Bower repository:

In the Node Command Prompt window, enter the following at the prompt:

npm install -g bower-installer

When the installation is complete, change to your project folder and then enter this command line:

bower-installer

The bower-installer command will now download and install both versions of jQuery. At this stage, you now have jQuery installed using Bower. You're free to upgrade or remove jQuery using the normal Bower process at some point in the future.

If you want to learn more about how to use Bower, there are plenty of references online; https://www.openshift.com/blogs/day-1-bower-manage-your-client-side-dependencies is a good example of a tutorial that will help you get accustomed to using Bower. In addition, there is a useful article that discusses both Bower and Node, available at http://tech.pro/tutorial/1190/package-managers-an-introductory-guide-for-the-uninitiated-front-end-developer.

Bower isn't the only way to install jQuery though—while we can use it to install multiple versions of jQuery, for example, we're still limited to installing the entire jQuery library. We can improve on this by referencing only the elements we need within the library. Thanks to some extensive work undertaken by the jQuery Core team, we can use the Asynchronous Module Definition (AMD) approach to reference only those modules that are needed within our website or online application.

Using the AMD approach to load jQuery

In most instances, when using jQuery, developers are likely to simply include a reference to the main library in their code. There is nothing wrong with this per se, but it loads a lot of extra code that is surplus to our requirements. A more efficient method, although one that takes a little effort to get used to, is to use the AMD approach. In a nutshell, the jQuery team has made the library more modular; this allows you to use a loader such as require.js to load individual modules when needed. It's not suitable for every project, particularly if you are a heavy user of different parts of the library. However, for those instances where you only need a limited number of modules, this is a perfect route to take. Let's work through a simple example to see what it looks like in practice. Before we start, we need one additional item—the code uses the Fira Sans regular custom font, which is available from Font Squirrel at http://www.fontsquirrel.com/fonts/fira-sans.
Let's make a start using the following steps:

The Fira Sans font doesn't come with a web format by default, so we need to convert the font to use the web font format. Go ahead and upload the FiraSans-Regular.otf file to Font Squirrel's web font generator at http://www.fontsquirrel.com/tools/webfont-generator. When prompted, save the converted file to your project folder in a subfolder called fonts.

We need to install jQuery and RequireJS into our project folder, so fire up a Node.js Command Prompt and change to the project folder. Next, enter these commands one by one, pressing Enter after each:

bower install jquery
bower install requirejs

We need to extract a copy of the amd.html and amd.css files—the HTML file contains some simple markup along with a link to require.js; the amd.css file contains some basic styling that we will use in our demo.

We now need to add in this code block, immediately below the link for require.js—this handles the calls to jQuery and RequireJS, where we're calling in both jQuery and Sizzle, the selector engine for jQuery:

<script>
  require.config({
    paths: {
      "jquery": "bower_components/jquery/src",
      "sizzle": "bower_components/jquery/src/sizzle/dist/sizzle"
    }
  });
  require(["js/app"]);
</script>

Now that jQuery has been defined, we need to call in the relevant modules. In a new file, go ahead and add the following code, saving it as app.js in a subfolder marked js within our project folder:

define(["jquery/core/init", "jquery/attributes/classes"], function($) {
  $("div").addClass("decoration");
});

We used app.js as the filename to tie in with the require(["js/app"]); reference in the code. If all went well, you will see the results of our work when previewing the page in a browser.

Although we've only worked with a simple example here, it's enough to demonstrate how easy it is to call only those modules we need to use in our code rather than calling the entire jQuery library. True, we still have to provide a link to the library, but this is only to tell our code where to find it; our module code weighs in at 29 KB (10 KB when gzipped), against 242 KB for the uncompressed version of the full library!

Now, there may be instances where simply referencing modules using this method isn't the right approach; this may apply if you need to reference lots of different modules regularly. A better alternative is to build a custom version of the jQuery library that only contains the modules that we need to use, with the rest removed during the build. It's a little more involved but worth the effort—let's take a look at what is involved in the process.

Customizing the downloads of jQuery from Git

If we feel so inclined, we can really push the boat out and build a custom version of jQuery using the JavaScript task runner, Grunt. The process is relatively straightforward but involves a few steps; it will certainly help if you have some prior familiarity with Git! The demo assumes that you have already installed Node.js—if you haven't, then you will need to do this first before continuing with the exercise.

Okay, let's make a start by performing the following steps:

You first need to install Grunt if it isn't already present on your system—bring up the Node.js Command Prompt and enter this command:

npm install -g grunt-cli

Next, install Git—for this, browse to http://msysgit.github.io/ in order to download the package. Double-click on the setup file to launch the wizard; accepting all the defaults is sufficient for our needs.
If you want more information on how to install Git, head over and take a look at https://github.com/msysgit/msysgit/wiki/InstallMSysGit for more details.

Once Git is installed, clone the jQuery source (the repository lives at https://github.com/jquery/jquery), change to the resulting jquery folder from within the Command Prompt, and enter this command to download and install the dependencies needed to build jQuery:

npm install

The final stage of the build process is to build the library into the file we all know and love; from the same Command Prompt, enter this command:

grunt

Browse to the jquery folder—within this will be a folder called dist, which contains our custom build of jQuery, ready for use.

If there are modules within the library that we don't need, we can run a custom build. We can set the Grunt task to remove these when building the library, leaving in those that are needed for our project. For a complete list of all the modules that we can exclude, see https://github.com/jquery/jquery#modules. For example, to remove AJAX support from our build, we can run this command in place of the plain grunt command shown previously:

grunt custom:-ajax

This results in a saving of around 30 KB on the original raw version, as shown in the following screenshot:

The JavaScript and map files can now be incorporated into our projects in the usual way. For a detailed tutorial on the build process, this article by Dan Wellman is worth a read (https://www.packtpub.com/books/content/building-custom-version-jquery).

Using a GUI as an alternative

There is an online GUI available, which performs much the same tasks, without the need to install Git or Grunt. It's available at http://projects.jga.me/jquery-builder/, although it is worth noting that it hasn't been updated for a while!

Okay, so we have jQuery installed; let's take a look at one more useful feature that will help in the event of debugging errors in our code. Support for source maps has been made available within jQuery since version 1.9. Let's take a look at how they work and see a simple example in action.

Adding source map support

Imagine a scenario, if you will, where you've created a killer site, which is running well, until you start getting complaints about problems with some of the jQuery-based functionality that is used on the site. Sounds familiar? Using an uncompressed version of jQuery on a production site is not an option; instead, we can use source maps. Simply put, these map a compressed version of jQuery against the relevant lines in the original source. Historically, source maps have given developers a lot of heartache when implementing them, to the extent that the jQuery team had to revert to disabling the automatic use of maps!

For best effect, it is recommended that you use a local web server, such as WAMP (PC) or MAMP (Mac), to view this demo, and that you use Chrome as your browser. Source maps are not difficult to implement; let's run through how you can use them:

Extract a copy of the sourcemap folder and save it to your project area locally.

Press Ctrl + Shift + I to bring up the Developer Tools in Chrome.

Click on Sources, then double-click on the sourcemap.html file; in the code window, click on line 17 to set a breakpoint.

Now, run the demo in Chrome—we will see it pause; return to the developer toolbar, where line 17 is highlighted.
The relevant calls to the jQuery library are shown on the right-hand side of the screen. If we double-click on the n.event.dispatch entry on the right, Chrome refreshes the toolbar and displays the original source line (highlighted) from the jQuery library, as shown here:

It is well worth spending the time to get to know source maps—all the latest browsers support them, including IE11. Even though we've only used a simple example here, it doesn't matter, as the principle is exactly the same no matter how much code is used in the site. For a more in-depth tutorial that covers all the browsers, it is worth heading over to http://blogs.msdn.com/b/davrous/archive/2014/08/22/enhance-your-javascript-debugging-life-thanks-to-the-source-map-support-available-in-ie11-chrome-opera-amp-firefox.aspx—it is worth a read!

Adding support for source maps

In the demo we've just previewed, source map support had already been added to the library. It is worth noting, though, that source maps are not included with the current versions of jQuery by default. If you need to download a more recent version or add support for the first time, then follow these steps:

Source maps can be downloaded from the main site using http://code.jquery.com/jquery-X.X.X.min.map, where X represents the version number of jQuery being used.

Open a copy of the minified version of the library and then add this line at the end of the file:

//# sourceMappingURL=jquery.min.map

Save it and then store it in the JavaScript folder of your project. Make sure you have copies of both the compressed and uncompressed versions of the library within the same folder.

Let's move on and look at one more critical part of loading jQuery: if, for some unknown reason, jQuery becomes completely unavailable, then we can add a fallback position to our site that allows graceful degradation. It's a small but crucial part of any site and presents a better user experience than your site simply falling over!

Working with Modernizr as a fallback

A best practice when working with jQuery is to ensure that a fallback is provided for the library, should the primary version not be available. (Yes, it's irritating when it happens, but it can happen!) Typically, we might use a little JavaScript, such as the window.jQuery check covered in the best practice suggestions that follow. This would work perfectly well but doesn't provide a graceful fallback. Instead, we can use Modernizr to perform the check for us and provide graceful degradation if all fails. Modernizr is a feature detection library for HTML5/CSS3, which can be used to provide a standardized fallback mechanism in the event of functionality not being available. You can learn more at http://www.modernizr.com.

As an example, the code might look like this at the end of our website page. We first try to load jQuery using the CDN link, falling back to a local copy if that hasn't worked, or an alternative if both fail:

<body>
  <script src="js/modernizr.js"></script>
  <script type="text/javascript">
    Modernizr.load([
      {
        load: 'http://code.jquery.com/jquery-2.1.1.min.js',
        complete: function () {
          // Confirm whether jQuery was loaded using the CDN link;
          // if not, fall back to the local version
          if ( !window.jQuery ) {
            Modernizr.load('js/jquery-latest.min.js');
          }
        }
      },
      // This script would wait until the fallback is loaded before loading
      { load: 'jquery-example.js' }
    ]);
  </script>
</body>

In this way, we can ensure that jQuery either loads locally or from the CDN link—if all else fails, then we can at least make a graceful exit.
Best practices for loading jQuery

So far, we've examined several ways of loading jQuery into our pages, over and above the usual route of downloading the library locally or using a CDN link in our code. Now that we have it installed, it's a good opportunity to cover some of the best practices we should try to incorporate into our pages when loading jQuery:

Always try to use a CDN to include jQuery on your production site. We can take advantage of the high availability and low latency offered by CDN services; the library may already be precached too, avoiding the need to download it again.

Try to implement a fallback to a locally hosted copy of the same version. If the CDN link becomes unavailable (and CDNs are not 100 percent infallible), then the local version will kick in automatically, until the CDN link becomes available again:

<script type="text/javascript" src="//code.jquery.com/jquery-1.11.1.min.js"></script>
<script>window.jQuery || document.write('<script src="js/jquery-1.11.1.min.js"><\/script>')</script>

Note that although this works just as well as using Modernizr, it doesn't provide a graceful fallback if both versions of jQuery should become unavailable. Although one hopes never to be in this position, at least we can use CSS to provide a graceful exit!

Use protocol-relative/protocol-independent URLs; the browser will automatically determine which protocol to use. If HTTPS is not available, then it will fall back to HTTP. If you look carefully at the code in the previous point, it shows a perfect example of a protocol-independent URL, with the call to jQuery from the main jQuery Core site.

If possible, keep all your JavaScript and jQuery inclusions at the bottom of your page—scripts block the rendering of the rest of the page until they have been fully loaded and executed.

Use the jQuery 2.x branch, unless you need to support IE6-8; in this case, use jQuery 1.x instead—do not load multiple jQuery versions.

If you load jQuery using a CDN link, always specify the complete version number you want to load, such as jquery-1.11.1.min.js.

If you are using other libraries, such as Prototype, MooTools, Zepto, and so on, that use the $ sign as well, try not to use $ to call jQuery functions and simply use jQuery instead. You can return control of $ back to the other library with a call to the $.noConflict() function.

For advanced browser feature detection, use Modernizr.

It is worth noting that there may be instances where it isn't always possible to follow best practices; circumstances may dictate that we need to make allowances for requirements where best practices can't be used. However, this should be kept to a minimum where possible; one might argue that there are flaws in our design if most of the code doesn't follow best practices!
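Before we summarize, here is a brief sketch of the $.noConflict() pattern mentioned in the best practices above; the file paths and the $j variable name are purely illustrative:

<script src="js/prototype.js"></script>
<script src="js/jquery-1.11.1.min.js"></script>
<script>
  // Return control of the $ alias to Prototype; keep jQuery under $j instead
  var $j = jQuery.noConflict();

  // From here on, use $j (or the full name, jQuery) for jQuery calls
  $j(document).ready(function () {
    $j("div").addClass("decoration");
  });
</script>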
Summary

If you thought that the only methods to include jQuery were via a manual download or using a CDN link, then hopefully this article has opened your eyes to some alternatives—let's take a moment to recap what we have learned.

We kicked off with a customary look at how most developers are likely to include jQuery before quickly moving on to look at other sources. We started with a look at how to use Node, before turning our attention to using the Bower package manager. Next, we had a look at how we can reference individual modules within jQuery using the AMD approach. We then moved on and turned our attention to creating custom builds of the library using Git. We then covered how we can use source maps to debug our code, with a look at enabling support for them within Google's Chrome browser. To round out our journey of loading jQuery, we saw what might happen if we can't load jQuery at all and how we can get around this by using Modernizr to allow our pages to degrade gracefully. We then finished the article with some of the best practices that we can follow when referencing jQuery.

Resources for Article:

Further resources on this subject:
Using different jQuery event listeners for responsive interaction [Article]
Building a Custom Version of jQuery [Article]
Learning jQuery [Article]

Getting started with Digital forensics using Autopsy

Savia Lobo
24 May 2018
10 min read
Digital forensics involves the preservation, acquisition, documentation, analysis, and interpretation of evidence from various storage media types. It is not only limited to laptops, desktops, tablets, and mobile devices, but also extends to data in transit that is transmitted across public or private networks. In this tutorial, we will cover how one can carry out digital forensics with Autopsy. Autopsy is a digital forensics platform and graphical interface to The Sleuth Kit and other digital forensics tools. This article is an excerpt taken from the book 'Digital Forensics with Kali Linux', written by Shiva V.N. Parasram. Let's proceed with the analysis using the Autopsy browser by first getting acquainted with the different ways to start Autopsy.

Starting Autopsy

Autopsy can be started in two ways. The first uses the Applications menu by clicking on Applications | 11 - Forensics | autopsy. Alternatively, we can click on the Show applications icon (last item in the side menu), type autopsy into the search bar at the top-middle of the screen, and then click on the autopsy icon.

Once the autopsy icon is clicked, a new terminal is opened showing the program information along with connection details for opening the Autopsy Forensic Browser. In the following screenshot, we can see that the version number is listed as 2.24, with the path to the Evidence Locker folder as /var/lib/autopsy. To open the Autopsy browser, position the mouse over the link in the terminal, then right-click and choose Open Link, as seen in the following screenshot.

Creating a new case

To create a new case, follow the given steps:

When the Autopsy Forensic Browser opens, investigators are presented with three options. Click on NEW CASE.

Enter details for the Case Name, Description, and Investigator Names. For the Case Name, I've entered SP-8-dftt, as it closely matches the image name (8-jpeg-search.dd), which we will be using for this investigation. Once all information is entered, click NEW CASE. Several investigator name fields are available, as there may be instances where several investigators may be working together.

The locations of the Case directory and Configuration file are displayed and shown as created. It's important to take note of the case directory location, as seen in the screenshot: Case directory (/var/lib/autopsy/SP-8-dftt/) created. Click ADD HOST to continue.

Enter the details for the Host Name (name of the computer being investigated) and the Description of the host. Optional settings:

Time zone: Defaults to local settings, if not specified
Timeskew Adjustment: Adds a value in seconds to compensate for time differences
Path of Alert Hash Database: Specifies the path of a created database of known bad hashes
Path of Ignore Hash Database: Specifies the path of a created database of known good hashes, similar to the NIST NSRL

Click on the ADD HOST button to continue.

Once the host is added and directories are created, we add the forensic image we want to analyze by clicking the ADD IMAGE button, then click on the ADD IMAGE FILE button to add the image file.

To import the image for analysis, the full path must be specified. On my machine, I've saved the image file (8-jpeg-search.dd) to the Desktop folder. As such, the location of the file would be /root/Desktop/8-jpeg-search.dd. For the Import Method, we choose Symlink. This way the image file can be imported from its current location (Desktop) to the Evidence Locker without the risks associated with moving or copying the image file.
If you are presented with the following error message, ensure that the specified image location is correct and that the forward slash (/) is used.

Upon clicking Next, the Image File Details are displayed. To verify the integrity of the file, select the radio button for Calculate the hash value for this image, and select the checkbox next to Verify hash after importing? The File System Details section also shows that the image is of an NTFS partition. Click on the ADD button to continue.

After clicking the ADD button in the previous screenshot, Autopsy calculates the MD5 hash and links the image into the evidence locker. Press OK to continue.

At this point, we're just about ready to analyze the image file. If there are multiple cases listed in the gallery area from any previous investigations you may have worked on, be sure to choose the 8-jpeg-search.dd file and case.

Before proceeding, we can click on the IMAGE DETAILS option. This screen gives details such as the image name, volume ID, file format, and file system, and also allows for the extraction of ASCII, Unicode, and unallocated data to enhance and provide faster keyword searches. Click on the back button in the browser to return to the previous menu and continue with the analysis.

Before clicking on the ANALYZE button to start our investigation and analysis, we can also verify the integrity of the image by creating an MD5 hash, by clicking on the IMAGE INTEGRITY button. Several other options exist, such as FILE ACTIVITY TIMELINES, HASH DATABASES, and so on. We can return to these at any point in the investigation.

After clicking on the IMAGE INTEGRITY button, the image name and hash are displayed. Click on the VALIDATE button to validate the MD5 hash. The validation results are displayed in the lower-left corner of the Autopsy browser window. We can see that our validation was successful, with matching MD5 hashes displayed in the results. Click on the CLOSE button to continue. To begin our analysis, we click on the ANALYZE button.

Analysis using Autopsy

Now that we've created our case, added host information with appropriate directories, and added our acquired image, we get to the analysis stage. After clicking on the ANALYZE button (see the previous screenshot), we're presented with several options in the form of tabs with which to begin our investigation.

Let's look at the details of the image by clicking on the IMAGE DETAILS tab. In the following snippet, we can see the Volume Serial Number and the operating system (Version) listed as Windows XP. Next, we click on the FILE ANALYSIS tab. This mode opens into File Browsing Mode, which allows the examination of directories and files within the image. Directories within the image are listed by default in the main view area.

In File Browsing Mode, directories are listed with the Current Directory specified as C:/. For each directory and file, there are fields showing when the item was WRITTEN, ACCESSED, CHANGED, and CREATED, along with its size and META data:

WRITTEN: The date and time the file was last written to
ACCESSED: The date and time the file was last accessed (only the date is accurate)
CHANGED: The date and time the descriptive data of the file was modified
CREATED: The date and time the file was created
META: Metadata describing the file and information about the file

For integrity purposes, MD5 hashes of all files can be made by clicking on the GENERATE MD5 LIST OF FILES button.
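If you prefer to double-check the image hash outside of Autopsy, the standard Linux hashing utilities can be run against the same file; the path below assumes the image is still on the Desktop, as in the earlier steps:

md5sum /root/Desktop/8-jpeg-search.dd
sha1sum /root/Desktop/8-jpeg-search.dd

The MD5 value printed here should match the hash that Autopsy calculated when the image was added, giving an independent confirmation of integrity.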
Investigators can also make notes about files, times, anomalies, and so on, by clicking on the ADD NOTE button.

The left pane contains four main features that we will be using:

Directory Seek: Allows for the searching of directories
File Name Search: Allows for the searching of files by Perl expressions or filenames
ALL DELETED FILES: Searches the image for deleted files
EXPAND DIRECTORIES: Expands all directories for easier viewing of contents

By clicking on EXPAND DIRECTORIES, all contents are easily viewable and accessible within the left pane and main window. The + next to a directory indicates that it can be further expanded to view subdirectories (++) and their contents.

To view deleted files, we click on the ALL DELETED FILES button in the left pane. Deleted files are marked in red and also adhere to the same format of WRITTEN, ACCESSED, CHANGED, and CREATED times. From the following screenshot, we can see that the image contains two deleted files.

We can view more information about a file by clicking on its META entry. By viewing the metadata entries of a file (last column to the right), we can also view the hexadecimal entries for the file, which may give the true file extension, even if the extension was changed. In the preceding screenshot, the second deleted file (file7.hmm) has a peculiar file extension of .hmm. Click on the META entry (31-128-3) to view the metadata. Under the Attributes section, click on the first cluster, labelled 1066, to view the header information of the file. We can see that the first entry is JFIF, which is an abbreviation for JPEG File Interchange Format. This means that the file7.hmm file is an image file but had its extension changed to .hmm.

Sorting files

Inspecting the metadata of each file may not be practical with large evidence files. For such an instance, the FILE TYPE feature can be used. This feature allows for the examination of existing (allocated), deleted (unallocated), and hidden files. Click on the FILE TYPE tab to continue, then click Sort files into categories by type (leave the default-checked options as they are) and click OK to begin the sorting process.

Once sorting is complete, a results summary is displayed. In the following snippet, we can see that there are five Extension Mismatches. To view the sorted files, we must manually browse to the location of the output folder, as Autopsy 2.4 does not support viewing of sorted files. To reveal this location, click on View Sorted Files in the left pane. The output folder locations will vary depending on the information specified by the user when first creating the case, but can usually be found at /var/lib/autopsy/<case name>/<host name>/output/sorter-vol#/index.html. Once the index.html file has been opened, click on the Extension Mismatch link. The five listed files with mismatched extensions should be further examined by viewing their metadata content, with notes added by the investigator.

Reopening cases in Autopsy

Cases are usually ongoing and can easily be restarted by starting Autopsy and clicking on OPEN CASE. In the CASE GALLERY, be sure to choose the correct case name and, from there, continue your examination.

To recap, we looked at forensics using the Autopsy Forensic Browser with The Sleuth Kit. Compared to individual tools, Autopsy has case management features and supports various types of file analysis, searching, and sorting of allocated, unallocated, and hidden files. Autopsy can also perform hashing at file and directory levels to maintain evidence integrity.
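Because Autopsy is a front end to The Sleuth Kit, the same checks can be repeated from the command line if you want to script them or document them in a report. The commands below are a sketch only—the inode number 31 mirrors the META entry (31-128-3) discussed above, and the image path assumes the earlier Desktop location:

# List allocated and deleted files in the NTFS image, recursively
fls -r /root/Desktop/8-jpeg-search.dd

# Show the metadata (MFT entry) for the renamed file, including its clusters
istat /root/Desktop/8-jpeg-search.dd 31

The istat output should show the same attribute and cluster details that Autopsy displayed for file7.hmm.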
If you enjoyed reading this article, do check out 'Digital Forensics with Kali Linux' to take your forensic abilities and investigations to a professional level, catering to all aspects of a digital forensic investigation, from hashing to reporting.

What is Digital Forensics?
IoT Forensics: Security in an always connected world where things talk
Working with Forensic Evidence Container Recipes

How to extract SIM card data from Android devices [Tutorial]

Sugandha Lahoti
03 Feb 2019
9 min read
This tutorial discusses logical data extraction and one of its subtopics, Android SIM card extraction. This article is taken from the book Learning Android Forensics by Oleg Skulkin, Donnie Tindall, and Rohit Tamma. The book explores open source and commercial forensic tools and teaches readers the basic skills of Android malware identification and analysis.

Logical extraction overview

In digital forensics, the term logical extraction is typically used to refer to extractions that don't recover deleted data or do not include a full bit-by-bit copy of the evidence. However, a more correct definition of logical extraction is any method that requires communication with the base operating system. Because of this interaction with the operating system, a forensic examiner cannot be sure that they have recovered all of the data possible; the operating system is choosing which data it allows the examiner to access.

In traditional computer forensics, logical extraction is analogous to copying and pasting a folder in order to extract data from a system; this process will only copy files that the user can access and see. If any hidden or deleted files are present in the folder being copied, they won't be in the pasted version of the folder. As you'll see, however, the line between logical and physical extractions in mobile forensics is somewhat blurrier than in traditional computer forensics. For example, deleted data can routinely be recovered from logical extractions on mobile devices due to the prevalence of SQLite databases being used to store data. Furthermore, almost every mobile extraction will require some form of interaction with the Android OS; there's no simple equivalent to pulling a hard drive and imaging it without booting the drive.

What data can be recovered logically?

For the most part, any and all user data may be recovered logically:

Contacts
Call logs
SMS/MMS
Application data
System logs and information

The bulk of this data is stored in SQLite databases, so it's even possible to recover large amounts of deleted data through a logical extraction.

Root access

When forensically analyzing an Android device, the limiting factor is often not the type of data being sought, but rather whether or not the examiner has the ability to access the data. All of the data listed previously, when stored on the internal flash memory, is protected and requires root access to read. The exception to this is application data that is stored on the SD card, which will be discussed later in this book. Without root access, a forensic examiner cannot simply copy information from the /data partition. The examiner will have to find some method of escalating privileges in order to gain access to the contacts, call logs, SMS/MMS, and application data. These methods often carry many risks, such as the potential to destroy or brick the device (making it unable to boot), and may alter data on the device in order to gain permanence. The methods commonly vary from device to device, and there is no universal, one-click method to gain root access to every device. Commercial mobile forensic tools such as Oxygen Forensic Detective and Cellebrite UFED have built-in capabilities to temporarily and safely root many devices but do not cover the wide range of all Android devices. The decision to root a device should be in accordance with your local operating procedures and court opinions in your jurisdiction. The legal acceptance of evidence obtained by rooting varies by jurisdiction.
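As a small, hedged illustration of why root access matters for logical extraction, the Android Debug Bridge (adb) can be used to check privileges and pull data once a device is connected with USB debugging enabled. The database path below is the standard AOSP location for the contacts database and is shown for illustration only; it is readable only if the adb shell is running as root:

adb devices                      # confirm the device is connected
adb shell id                     # uid=0(root) indicates a rooted shell
adb pull /data/data/com.android.providers.contacts/databases/contacts2.db

On a non-rooted device, the pull of contacts2.db will simply fail with a permission error, which is exactly the limitation described above.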
Android SIM card extractions

Traditionally, SIM cards were used for transferring data between devices. SIM cards in the past were used to store many different types of data, such as the following:

User data
  Contacts
  SMS messages
  Dialed calls
Network data
  Integrated Circuit Card Identifier (ICCID): Serial number of the SIM
  International Mobile Subscriber Identity (IMSI): Identifier that ties the SIM to a specific user account
  MSISDN: Phone number assigned to the SIM
  Location Area Identity (LAI): Identifies the cell that a user is in
  Authentication Key (Ki): Used to authenticate the SIM to the mobile network
  Various other network-specific information

With the rise in capacity of device storage, SD cards, and cloud backups, the necessity for storing data on a SIM card has decreased. As such, most modern smartphones typically do not store much, if any, user data on the SIM card. All network data listed previously does still reside on the SIM, as a SIM is necessary to connect to all modern (4G) cellular networks. As with all Android devices, though, there is no concrete stipulation that user data can't be stored on a SIM; it simply doesn't happen by default. Individual device manufacturers can easily decide to write user data to the SIM, and individual users can download applications to provide that functionality. This means that a device's SIM card should always be examined during a forensic examination. It is a very quick process, and should never be overlooked.

Acquiring SIM card data

The SIM card should always be removed from the device and examined separately. While some tools claim to read the SIM card through the device interface, this may not recover deleted data or all data on the SIM; the only way for an examiner to be certain all data was acquired is to read the SIM through a standalone SIM card reader with a tool that has been tested and verified. The location of the SIM will vary by device, but it is typically either stored beneath the battery or in a tray located on the side of the device. Once the SIM is removed, it should be placed in a SIM card reader. There are hundreds of SIM card readers available in the marketplace, and all major mobile forensics tools come with an included reader that will work with their software. Oftentimes, the forensic tools will support third-party SIM readers as well.

There is a surprising lack of thorough, free SIM card reading software available. Any software used should always be tested and validated on a SIM card that has been populated with known data prior to being used in an actual forensic investigation. Also, keep in mind that much of the free software available works for older 2G/3G SIMs but may not work properly on a modern 4G SIM. We used Mobiledit! Lite, a free version of Mobiledit!, for the following screenshots. It is available at http://www.mobiledit.com/downloads.
The following is a sample 4G SIM card extraction from an Android phone running version 4.4.4; note that nothing that could be considered user data was acquired despite the SIM being used actively for over a year, though fields such as the ICCID, IMSI, and MSISDN (own phone number) could be useful for subpoenas/warrants or other aspects of an investigation:

SIM card extraction overview

The following screenshots highlight the SMS messages on the SIM card, the phonebook of the SIM card, and the phone number of the SIM card (also called the MSISDN).

SIM Security

Due to the fact that SIM cards conform to established, international standards, all SIM cards provide the same security functionality: a 4- to 8-digit PIN. Generally, this PIN must be set through a menu on the device. On Android devices, this setting is found at Settings | Security | Set up SIM card lock. The SIM PIN is completely independent of any lock screen security settings and only has to be entered when the device boots. The SIM PIN only protects user data on the SIM; all network information is still recoverable even if the SIM is PIN locked.

The SIM card will allow three attempts to enter the PIN; if one of these attempts is correct, the counter will reset. On the other hand, if all of these attempts are incorrect, the SIM will enter Personal Unblocking Key (PUK) mode. The PUK is an 8-digit number assigned by the carrier and is frequently found on documentation when the SIM is purchased. Bypassing a PUK is not possible with any commercial forensic software; because of this, an examiner should never attempt to enter the PIN on the device, as the device will not indicate how many attempts remain before the PUK is activated. An examiner could unwittingly PUK lock the SIM and be unable to access the device. Forensic tools, however, will show how many attempts remain before the PUK is activated, as seen in the previous screenshots.

Common carrier defaults for SIM PINs are 0000 and 1234. If three tries remain before activating the PUK, an examiner may successfully unlock the SIM with one of these defaults. Carriers frequently retain PUK keys when a SIM is issued. These may be available through a subpoena or warrant issued to the carrier.

SIM cloning

The SIM PIN itself provides almost no additional security, and can easily be bypassed through SIM cloning. SIM cloning is a feature provided in almost all commercial mobile forensic software, although the term cloning is somewhat misleading. SIM cloning, in the case of mobile forensics, is the process of copying the network data from a locked SIM onto a forensically sterile SIM that does not have the PIN activated. The phone will identify the cloned SIM based on this network data (typically the ICCID and IMSI) and think that it is the same SIM that was inserted previously, but this time there will be no SIM PIN. The cloned SIM will also be unable to access the cellular network, which makes it an effective solution similar to Airplane Mode. Therefore, SIM cloning will allow an examiner to access the device, but the user data on the original SIM is still inaccessible, as it remains protected by the PIN. We are unaware of any free software that performs forensic SIM cloning. It is supported by almost all commercial mobile forensic kits, however. These kits will typically include a SIM card reader, software to perform the clone, as well as multiple blank SIM cards for the cloning process.
This article has covered SIM card extraction, which is a subtopic of logical extraction on Android devices. To learn more about the other methods of logical extraction on Android devices, read our book Learning Android Forensics.

What role does Linux play in securing Android devices?
How the Titan M chip will improve Android security
Getting your Android app ready for the Play Store [Tutorial]
Brute forcing HTTP applications and web applications using Nmap [Tutorial]

Savia Lobo
11 Nov 2018
6 min read
Many home routers, IP webcams, and web applications still rely on HTTP authentication these days, and we, as system administrators or penetration testers, need to make sure that the system or user accounts are not using weak credentials. Now, thanks to the NSE script http-brute, we can perform robust dictionary attacks against HTTP basic, digest, and NTLM authentication. This article is an excerpt taken from the book Nmap: Network Exploration and Security Auditing Cookbook - Second Edition, written by Paulino Calderon. The book covers the basic usage of Nmap and related tools like Ncat, Ncrack, Ndiff, and Zenmap, and much more. In this article, we will learn how to perform brute force password auditing against web servers that are using HTTP authentication, and also against popular and custom web applications, with Nmap.

Brute forcing HTTP applications

How to do it...

Use the following Nmap command to perform brute force password auditing against a resource protected by HTTP's basic authentication:

$ nmap -p80 --script http-brute <target>

The results will return all the valid accounts that were found (if any):

PORT   STATE SERVICE REASON
80/tcp open  http    syn-ack
| http-brute:
|   Accounts
|     admin:secret => Valid credentials
|   Statistics
|_    Performed 603 guesses in 7 seconds, average tps: 86

How it works...

The Nmap options -p80 --script http-brute tell Nmap to launch the http-brute script against the web server running on port 80. This script was originally committed by Patrik Karlsson, and it was created to launch dictionary attacks against URIs protected by HTTP authentication. The http-brute script uses, by default, the database files usernames.lst and passwords.lst located at /nselib/data/ to try each password, for every user, to hopefully find a valid account.

There's more...

The script http-brute depends on the NSE libraries unpwdb and brute. Read Appendix B, Brute Force Password Auditing Options, for more information.

To use different username and password lists, set the arguments userdb and passdb:

$ nmap -p80 --script http-brute --script-args userdb=/var/usernames.txt,passdb=/var/passwords.txt <target>

To quit after finding one valid account, use the argument brute.firstOnly:

$ nmap -p80 --script http-brute --script-args brute.firstOnly <target>

By default, http-brute uses Nmap's timing template to set the following timeout limits:

-T3, T2, T1: 10 minutes
-T4: 5 minutes
-T5: 3 minutes

For setting a different timeout limit, use the argument unpwdb.timelimit. To run it indefinitely, set it to 0:

$ nmap -p80 --script http-brute --script-args unpwdb.timelimit=0 <target>
$ nmap -p80 --script http-brute --script-args unpwdb.timelimit=60m <target>

Brute modes

The brute library supports different modes that alter the combinations used in the attack. The available modes are:

user: In this mode, for each user listed in userdb, every password in passdb will be tried:

$ nmap --script http-brute --script-args brute.mode=user <target>

pass: In this mode, for each password listed in passdb, every user in userdb will be tried:

$ nmap --script http-brute --script-args brute.mode=pass <target>

creds: This mode requires the additional argument brute.credfile:

$ nmap --script http-brute --script-args brute.mode=creds,brute.credfile=./creds.txt <target>
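Before moving on to web forms, it is worth noting what the userdb and passdb files referenced above actually contain: they are plain text lists with one entry per line, in the same style as Nmap's bundled usernames.lst and passwords.lst. The file names and entries below are purely illustrative:

# /var/usernames.txt
admin
root
operator

# /var/passwords.txt
secret
admin123
changeme

Supplying shorter, targeted lists like these is an easy way to keep a brute force run within a sensible time window.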
Brute forcing web applications

Performing brute force password auditing against web applications is an essential step to evaluate the password strength of system accounts. There are powerful tools such as THC Hydra, but Nmap offers great flexibility, as it is fully configurable and contains a database of popular web applications, such as WordPress, Joomla!, Django, Drupal, MediaWiki, and WebSphere.

How to do it...

Use the following Nmap command to perform brute force password auditing against web applications using forms:

$ nmap --script http-form-brute -p 80 <target>

If credentials are found, they will be shown in the results:

PORT   STATE SERVICE REASON
80/tcp open  http    syn-ack
| http-form-brute:
|   Accounts
|     user:secret - Valid credentials
|   Statistics
|_    Performed 60023 guesses in 467 seconds, average tps: 138

How it works...

The Nmap options -p80 --script http-form-brute tell Nmap to launch the http-form-brute script against the web server running on port 80. This script was originally committed by Patrik Karlsson, and it was created to launch dictionary attacks against authentication systems based on web forms. The script automatically attempts to detect the form fields required to authenticate, and it internally uses a database of popular web applications to help during the form detection phase.

There's more...

The script http-form-brute depends on the correct detection of the form fields. Often you will be required to manually set, via script arguments, the names of the fields holding the username and password variables. If the script argument http-form-brute.passvar is set, form detection will not be performed:

$ nmap -p80 --script http-form-brute --script-args http-form-brute.passvar=contrasenia,http-form-brute.uservar=usuario <target>

In a similar way, you will often need to set the script arguments http-form-brute.onsuccess or http-form-brute.onfailure to set the success/error messages returned when attempting to authenticate:

$ nmap -p80 --script http-form-brute --script-args http-form-brute.onsuccess=Exito <target>

Brute forcing WordPress installations

If you are targeting a popular application, remember to check whether there are any NSE scripts specialized in attacking it. For example, WordPress installations can be audited with the script http-wordpress-brute:

$ nmap -p80 --script http-wordpress-brute <target>

To set the number of threads, use the script argument http-wordpress-brute.threads:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.threads=5 <target>

If the server has virtual hosting, set the host field using the argument http-wordpress-brute.hostname:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.hostname="ahostname.wordpress.com" <target>

To set a different login URI, use the argument http-wordpress-brute.uri:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.uri="/hidden-wp-login.php" <target>

To change the names of the POST variables that store the usernames and passwords, set the arguments http-wordpress-brute.uservar and http-wordpress-brute.passvar:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.uservar=usuario,http-wordpress-brute.passvar=pasguord <target>

Brute forcing Joomla! installations

Another good example of a specialized NSE brute force script is http-joomla-brute. This script is designed to perform brute force password auditing against Joomla! installations. By default, our generic brute force script for HTTP will fail against the Joomla! CMS since the application dynamically generates a security token, but this NSE script will automatically fetch it and include it in the login requests.
Use the following Nmap command to launch the script:

$ nmap -p80 --script http-joomla-brute <target>

To set the number of threads, use the script argument http-joomla-brute.threads:

$ nmap -p80 --script http-joomla-brute --script-args http-joomla-brute.threads=5 <target>

To change the names of the POST variables that store the login information, set the arguments http-joomla-brute.uservar and http-joomla-brute.passvar:

$ nmap -p80 --script http-joomla-brute --script-args http-joomla-brute.uservar=usuario,http-joomla-brute.passvar=pasguord <target>

To summarize, we learned how to perform brute force password auditing against web servers and custom web applications with Nmap. If you've enjoyed reading this post, do check out our book, Nmap: Network Exploration and Security Auditing Cookbook - Second Edition, to learn about Lua programming and NSE script development, which will allow you to further extend the power of Nmap.

Discovering network hosts with 'TCP SYN' and 'TCP ACK' ping scans in Nmap [Tutorial]
Introduction to the Nmap Scripting Engine
Exploring the Nmap Scripting Engine API and Libraries

Squid Proxy Server: Fine Tuning to Achieve Better Performance

Packt
25 Apr 2011
12 min read
Whether you only run one site or are in charge of a whole network, Squid is an invaluable tool which improves performance immeasurably. Caching and performance optimization usually require a lot of work on the developer's part, but Squid does all that for you. In this article, we will learn to fine-tune our cache to achieve a better HIT ratio, to save bandwidth and reduce the average page load time.

In this article by Kulbir Saini, author of Squid Proxy Server 3.1: Beginner's Guide, we will take a look at the following:

Cache peers or neighbors
Caching the web documents in the main memory and hard disk
Tuning Squid to enhance bandwidth savings and reduce latency

(For more resources on Proxy Servers, see here.)

Cache peers or neighbors

Cache peers or neighbors are the other proxy servers with which our Squid proxy server can:

Share its cache, to reduce bandwidth usage and access time
Use them as parent or sibling proxy servers to satisfy its clients' requests

We normally deploy more than one proxy server in the same network to share the load of a single server for better performance. The proxy servers can use each other's caches to retrieve cached web documents locally to improve performance. Let's have a brief look at the directives provided by Squid for communication among different cache peers.

Declaring cache peers

The directive cache_peer is used to tell Squid about proxy servers in our neighborhood. Let's have a quick look at the syntax for this directive:

cache_peer HOSTNAME_OR_IP_ADDRESS TYPE PROXY_PORT ICP_PORT [OPTIONS]

In this code, HOSTNAME_OR_IP_ADDRESS is the hostname or IP address of the target proxy server or cache peer. TYPE specifies the type of the proxy server, which, in turn, determines how that proxy server will be used by our proxy server. The other proxy servers can be used as a parent, a sibling, or a member of a multicast group.

Time for action – adding a cache peer

Let's add a proxy server (parent.example.com) that will act as a parent proxy to our proxy server:

cache_peer parent.example.com parent 3128 3130 default proxy-only

3130 is the standard ICP port. If the other proxy server is not using the standard ICP port, we should change the code accordingly. This configuration will direct Squid to use parent.example.com as a proxy server to satisfy client requests in case it's not able to do so itself. The option default specifies that this cache peer should be used as a last resort in the scenario where other peers can't be contacted. The option proxy-only specifies that the content fetched using this peer should not be cached locally. This is helpful when we don't want to replicate cached web documents, especially when the two peers are connected with a high bandwidth backbone.

What just happened?

We added parent.example.com as a cache peer or parent proxy to our Squid proxy server. We also used the option proxy-only, which means the content fetched using this cache peer will not be cached on our proxy server. There are several other options with which we can add cache peers for various purposes, such as building a hierarchy.
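Following the same cache_peer syntax, a neighboring proxy on the same network could be declared as a sibling rather than a parent; the hostname below is illustrative:

cache_peer sibling.example.com sibling 3128 3130 proxy-only

With a sibling, Squid will only ask the peer (via ICP on port 3130) for objects the peer already has in its cache and will never forward cache misses through it, which is the key difference from the parent relationship configured above.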
Quickly restricting access to domains using peers
If we have added a few proxy servers as cache peers to our Squid server, we may want some control over the requests being forwarded to the peers. The directive cache_peer_domain is a quick way to achieve this control. The syntax of this directive is quite simple:

cache_peer_domain CACHE_PEER_HOSTNAME [!]DOMAIN1 [[!]DOMAIN2 ...]

Here, CACHE_PEER_HOSTNAME is the hostname or IP address of the cache peer, as used when declaring it with the cache_peer directive. We can specify any number of domains that may be fetched through this cache peer. Adding a bang (!) as a prefix to a domain name will prevent the use of this cache peer for that particular domain.

Let's say we want to use the videoproxy.example.com cache peer for browsing video portals such as Youtube, Netflix, Metacafe, and so on:

cache_peer_domain videoproxy.example.com .youtube.com .netflix.com
cache_peer_domain videoproxy.example.com .metacafe.com

These two lines will configure Squid to use the videoproxy.example.com cache peer only for requests to the domains youtube.com, netflix.com, and metacafe.com. Requests to other domains will not be forwarded using this peer.

Advanced control on access using peers
We just learned about cache_peer_domain, which provides a way to control access using cache peers. However, it's not really flexible in granting or revoking access. That's where cache_peer_access comes into the picture; it provides a very flexible way to control access using cache peers by means of ACLs. The syntax and implications are similar to other access directives such as http_access:

cache_peer_access CACHE_PEER_HOSTNAME allow|deny [!]ACL_NAME

Let's write the following configuration lines, which will allow only the clients on the network 192.0.2.0/24 to use the cache peer acadproxy.example.com for accessing Youtube, Netflix, and Metacafe:

acl my_network src 192.0.2.0/24
acl video_sites dstdomain .youtube.com .netflix.com .metacafe.com
cache_peer_access acadproxy.example.com allow my_network video_sites
cache_peer_access acadproxy.example.com deny all

In the same way, we can use other ACL types to achieve better control over access to various websites using cache peers.
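As one example of combining other ACL types with cache_peer_access, the sketch below restricts the parent.example.com peer declared earlier so that it is only consulted during office hours. The office_hours ACL name and the 09:00-18:00 window are illustrative assumptions, not part of the original configuration:

# Use the parent proxy only Monday to Friday, 9 am to 6 pm
acl office_hours time MTWHF 09:00-18:00
cache_peer_access parent.example.com allow office_hours
cache_peer_access parent.example.com deny all

Outside the allowed window, Squid will simply not consider this peer and will try to satisfy requests directly or through other available peers.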
Caching web documents
All this time, we have been talking about the caching of web documents and how it helps in saving bandwidth and improving the end user experience. Now it's time to learn how and where Squid actually keeps these cached documents so that they can be served on demand. Squid uses the main memory (RAM) and hard disks for storing or caching web documents. Caching is a complex process, but Squid handles it beautifully and exposes directives in squid.conf so that we can control how much should be cached and what should be given the highest priority while caching. Let's have a brief look at the caching-related directives provided by Squid.

Using main memory (RAM) for caching
The web documents cached in the main memory or RAM can be served very quickly, as the data read/write speeds of RAM are very high compared to hard disks with mechanical parts. However, as the amount of space available in RAM for caching is very low compared to the cache space available on hard disks, only very popular objects, or documents with a very high probability of being requested again, are stored there. As the cache space in memory is precious, documents are stored on a priority basis. Let's have a look at the different types of objects that can be cached.

In-transit objects or current requests
These are the objects related to the current requests, and they have the highest priority to be kept in the cache space in RAM. These objects must be kept in RAM, and if there is a situation where the incoming request rate is quite high and we are about to overflow the cache space in RAM, Squid will try to move the served part (the part which has already been sent to the client) to disk to create free space in RAM.

Hot or popular objects
These objects or web documents are popular and are requested quite frequently compared to others. They are stored in the cache space left after storing the in-transit objects, as they have a lower priority than in-transit objects. These objects are generally pushed to disk when there is a need to create more free cache space in RAM for storing in-transit objects.

Negatively cached objects
Negatively cached objects are error messages which Squid has encountered while fetching a page or web document on behalf of a client. For example, if a request for a web page has resulted in an HTTP error 404 (page not found) and Squid receives a subsequent request for the same web page, it will check whether the cached response is still fresh and, if so, will return the reply from the cache itself. If there is a request for the same page after the negatively cached object corresponding to that page has expired, Squid will check again whether the page is available. Negatively cached objects have the same priority as hot or popular objects, and they can be pushed to disk at any time in favor of in-transit objects.
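Squid also exposes a negative_ttl directive that bounds how long such error replies are treated as fresh. The line below is a minimal sketch; the 2-minute value is an arbitrary illustration, and on recent Squid versions this setting only applies to error responses that carry no explicit expiry information:

# Keep negatively cached (error) replies fresh for at most 2 minutes
negative_ttl 2 minutes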
Specifying cache space in RAM
So far, we have learned how the available cache space is utilized for storing or caching different types of objects with different priorities. Now it's time to learn about specifying the amount of RAM space we want to dedicate to caching. While deciding the RAM space for caching, we should be neither greedy nor paranoid. If we specify a large percentage of RAM for caching, the overall system performance will suffer, as the system will start swapping processes when there is no free RAM left for other processes. If we use a very low percentage of RAM for caching, then we'll not be able to take full advantage of Squid's caching mechanism. The default size of the memory cache is 256 MB.

Time for action – specifying space for memory caching
We can use the extra RAM available on a running system, after sparing a chunk of memory that the running processes may need under heavy load. To find out the amount of free RAM available on our system, we can use either the top or the free command. To find the free RAM in megabytes, we can use the free command as follows:

$ free -m

For more details, please check the top(1) and free(1) man pages. Now, let's say we have 4 GB of total RAM on the server and all the processes run comfortably in 1 GB of RAM. After securing another 512 MB for emergency situations where running processes may need extra memory, we can safely allocate about 2.5 GB of RAM for caching.

To specify the cache size in the main memory, we use the directive cache_mem. It has a very simple format. As we have learned before, we can specify the memory size in bytes, KB, MB, or GB. Let's specify the cache memory size for the previous example:

cache_mem 2500 MB

The value specified with cache_mem above is in megabytes.

What just happened?
We learned how to calculate the approximate space in the main memory that can be used to cache web documents and therefore enhance the performance of the Squid server by a significant margin.

Have a go hero – calculating cache_mem for your machine
Note down the total RAM on your machine and calculate the approximate space in megabytes that you can allocate for memory caching.

Maximum object size in memory
As we have limited space in memory available for caching objects, we need to use the space in an optimized way. We should plan to set this limit a bit low, as setting it too large will mean that fewer objects can be cached in memory and the HIT rate (how often an object is found in the cache) will suffer significantly. The default maximum size used by Squid is 512 KB, but we can change it depending on our value for cache_mem. So, if we want to set it to 1 MB, as we have a lot of RAM available for caching (as in the previous example), we can use the maximum_object_size_in_memory directive as follows:

maximum_object_size_in_memory 1 MB

This line will set the allowed maximum object size in the memory cache to 1 MB.

Memory cache mode
With newer versions of Squid, we can control which objects we want to keep in the memory cache in order to optimize performance. Squid offers the directive memory_cache_mode to set the mode that Squid should use to utilize the space available in the memory cache. There are three different modes available:

always: keeps all the most recently fetched objects that can fit in the available space. This is the default mode used by Squid.
disk: only the objects which are already cached on a hard disk and have received a HIT (meaning they were requested again after being cached) will be stored in the memory cache.
network: only the objects which have been fetched from the network (including neighbors) are kept in the memory cache.

The mode can be set using the memory_cache_mode directive as shown:

memory_cache_mode always

This configuration line will set the memory cache mode to always, which means that the most recently fetched objects will be kept in memory.
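To recap, the memory-cache tuning directives covered in this article can sit together in squid.conf. The following is a minimal sketch using the illustrative values from the examples above; adjust them to the RAM actually available on your server:

# Memory cache tuning (illustrative values from this article)
cache_mem 2500 MB
maximum_object_size_in_memory 1 MB
memory_cache_mode always

After editing squid.conf, the configuration can be syntax-checked with squid -k parse and applied to a running server with squid -k reconfigure.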