Brute forcing HTTP applications and web applications using Nmap [Tutorial]

Savia Lobo
11 Nov 2018
6 min read
Many home routers, IP webcams, and web applications still rely on HTTP authentication these days, and we, as system administrators or penetration testers, need to make sure that the system or user accounts are not using weak credentials. Now, thanks to the NSE script http-brute, we can perform robust dictionary attacks against HTTP basic, digest, and NTLM authentication. This article is an excerpt taken from the book Nmap: Network Exploration and Security Auditing Cookbook - Second Edition, written by Paulino Calderon. The book covers the basic usage of Nmap and related tools such as Ncat, Ncrack, Ndiff, and Zenmap, and much more. In this article, we will learn how to perform brute force password auditing against web servers that are using HTTP authentication, and also against popular and custom web applications, with Nmap.

Brute forcing HTTP applications

How to do it...

Use the following Nmap command to perform brute force password auditing against a resource protected by HTTP's basic authentication:

$ nmap -p80 --script http-brute <target>

The results will return all the valid accounts that were found (if any):

PORT STATE SERVICE REASON
80/tcp open http syn-ack
| http-brute:
| Accounts
| admin:secret => Valid credentials
| Statistics
|_ Perfomed 603 guesses in 7 seconds, average tps: 86

How it works...

The Nmap options -p80 --script http-brute tell Nmap to launch the http-brute script against the web server running on port 80. This script was originally committed by Patrik Karlsson, and it was created to launch dictionary attacks against URIs protected by HTTP authentication. The http-brute script uses, by default, the database files usernames.lst and passwords.lst located at /nselib/data/ to try each password, for every user, to hopefully find a valid account.

There's more...

The script http-brute depends on the NSE libraries unpwdb and brute. Read Appendix B, Brute Force Password Auditing Options, for more information.

To use different username and password lists, set the arguments userdb and passdb:

$ nmap -p80 --script http-brute --script-args userdb=/var/usernames.txt,passdb=/var/passwords.txt <target>

To quit after finding one valid account, use the argument brute.firstOnly:

$ nmap -p80 --script http-brute --script-args brute.firstOnly <target>

By default, http-brute uses Nmap's timing template to set the following timeout limits:

-T3, -T2, -T1: 10 minutes
-T4: 5 minutes
-T5: 3 minutes

To set a different timeout limit, use the argument unpwdb.timelimit. To run it indefinitely, set it to 0:

$ nmap -p80 --script http-brute --script-args unpwdb.timelimit=0 <target>
$ nmap -p80 --script http-brute --script-args unpwdb.timelimit=60m <target>

Brute modes

The brute library supports different modes that alter the combinations used in the attack. The available modes are:

user: In this mode, for each user listed in userdb, every password in passdb will be tried:
$ nmap --script http-brute --script-args brute.mode=user <target>
pass: In this mode, for each password listed in passdb, every user in userdb will be tried:
$ nmap --script http-brute --script-args brute.mode=pass <target>
creds: This mode requires the additional argument brute.credfile:
$ nmap --script http-brute --script-args brute.mode=creds,brute.credfile=./creds.txt <target>
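The creds mode replays known username and password pairs instead of generating combinations from the two word lists. As a minimal illustration (the file name creds.txt matches the command above, and the accounts shown are placeholders, not defaults shipped with Nmap), the brute library expects one pair per line, with the username and password separated by a forward slash:

admin/secret
root/toor
backup/backup2018

If any pair in the file works, it is reported in the Accounts section of the script output, just as in the earlier examples.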
Brute forcing web applications

Performing brute force password auditing against web applications is an essential step to evaluate the password strength of system accounts. There are powerful tools such as THC Hydra, but Nmap offers great flexibility as it is fully configurable and contains a database of popular web applications, such as WordPress, Joomla!, Django, Drupal, MediaWiki, and WebSphere.

How to do it...

Use the following Nmap command to perform brute force password auditing against web applications using forms:

$ nmap --script http-form-brute -p 80 <target>

If credentials are found, they will be shown in the results:

PORT STATE SERVICE REASON
80/tcp open http syn-ack
| http-form-brute:
| Accounts
| user:secret - Valid credentials
| Statistics
|_ Perfomed 60023 guesses in 467 seconds, average tps: 138

How it works...

The Nmap options -p80 --script http-form-brute tell Nmap to launch the http-form-brute script against the web server running on port 80. This script was originally committed by Patrik Karlsson, and it was created to launch dictionary attacks against authentication systems based on web forms. The script automatically attempts to detect the form fields required to authenticate, and it internally uses a database of popular web applications to help during the form detection phase.

There's more...

The script http-form-brute depends on the correct detection of the form fields. Often you will need to manually set, via script arguments, the names of the fields holding the username and password variables. If the script argument http-form-brute.passvar is set, form detection will not be performed:

$ nmap -p80 --script http-form-brute --script-args http-form-brute.passvar=contrasenia,http-form-brute.uservar=usuario <target>

Similarly, you will often need to set the script arguments http-form-brute.onsuccess or http-form-brute.onfailure to define the success/error messages returned when attempting to authenticate:

$ nmap -p80 --script http-form-brute --script-args http-form-brute.onsuccess=Exito <target>
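When automatic form detection fails, a quick way to find the right values for http-form-brute.uservar and http-form-brute.passvar is to fetch the login page and inspect its input elements. The following is a rough sketch (the URL, path, and field names shown are placeholders for whatever your target actually uses):

$ curl -s http://<target>/login.php | grep -i "<input"
<input type="text" name="usuario">
<input type="password" name="contrasenia">

The name attributes of the text and password inputs are the values to pass as the uservar and passvar script arguments.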
Brute forcing WordPress installations

If you are targeting a popular application, remember to check whether there are any NSE scripts specialized in attacking it. For example, WordPress installations can be audited with the script http-wordpress-brute:

$ nmap -p80 --script http-wordpress-brute <target>

To set the number of threads, use the script argument http-wordpress-brute.threads:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.threads=5 <target>

If the server has virtual hosting, set the host field using the argument http-wordpress-brute.hostname:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.hostname="ahostname.wordpress.com" <target>

To set a different login URI, use the argument http-wordpress-brute.uri:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.uri="/hidden-wp-login.php" <target>

To change the names of the POST variables that store the usernames and passwords, set the arguments http-wordpress-brute.uservar and http-wordpress-brute.passvar:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.uservar=usuario,http-wordpress-brute.passvar=pasguord <target>

Brute forcing Joomla! installations

Another good example of a specialized NSE brute force script is http-joomla-brute. This script is designed to perform brute force password auditing against Joomla! installations. By default, our generic brute force script for HTTP will fail against the Joomla! CMS since the application dynamically generates a security token, but this NSE script will automatically fetch it and include it in the login requests.

Use the following Nmap command to launch the script:

$ nmap -p80 --script http-joomla-brute <target>

To set the number of threads, use the script argument http-joomla-brute.threads:

$ nmap -p80 --script http-joomla-brute --script-args http-joomla-brute.threads=5 <target>

To change the names of the POST variables that store the login information, set the arguments http-joomla-brute.uservar and http-joomla-brute.passvar:

$ nmap -p80 --script http-joomla-brute --script-args http-joomla-brute.uservar=usuario,http-joomla-brute.passvar=pasguord <target>

To summarize, we learned how to perform brute force password auditing against web servers and custom web applications with Nmap. If you've enjoyed reading this post, do check out our book, Nmap: Network Exploration and Security Auditing Cookbook - Second Edition, to learn about Lua programming and NSE script development, which will allow you to further extend the power of Nmap.

Discovering network hosts with 'TCP SYN' and 'TCP ACK' ping scans in Nmap [Tutorial]
Introduction to the Nmap Scripting Engine
Exploring the Nmap Scripting Engine API and Libraries

Build your first Reinforcement learning agent in Keras [Tutorial]

Amey Varangaonkar
20 Aug 2018
6 min read
Today there are a variety of tools available at your disposal to develop and train your own Reinforcement learning agent. In this tutorial, we are going to learn about a Keras-RL agent called CartPole. We will go through this example because it won't consume your GPU or your cloud budget to run. Also, this logic can be easily extended to other Atari problems. This article is an excerpt taken from the book Deep Learning Quick Reference, written by Mike Bernico.

Let's talk quickly about the CartPole environment first:

CartPole: The CartPole environment consists of a pole, balanced on a cart. The agent has to learn how to balance the pole vertically, while the cart underneath it moves. The agent is given the position of the cart, the velocity of the cart, the angle of the pole, and the rotational rate of the pole as inputs. The agent can apply a force on either side of the cart. If the pole falls more than 15 degrees from vertical, it's game over for our agent.

The CartPole agent will use a fairly modest neural network that you should be able to train fairly quickly even without a GPU. We will start by looking at the model architecture. Then we will define the network's memory, exploration policy, and finally, train the agent.

CartPole neural network architecture

Three hidden layers with 16 neurons each are more than enough to solve this simple problem. We will use the following code to define the model:

def build_model(state_size, num_actions):
    input = Input(shape=(1, state_size))
    x = Flatten()(input)
    x = Dense(16, activation='relu')(x)
    x = Dense(16, activation='relu')(x)
    x = Dense(16, activation='relu')(x)
    output = Dense(num_actions, activation='linear')(x)
    model = Model(inputs=input, outputs=output)
    print(model.summary())
    return model

The input will be a 1 x state space vector and there will be an output neuron for each possible action that will predict the Q value of that action for each step. By taking the argmax of the outputs, we can choose the action with the highest Q value, but we don't have to do that ourselves as Keras-RL will do it for us.

Keras-RL Memory

Keras-RL provides us with a class called rl.memory.SequentialMemory that provides a fast and efficient data structure that we can store the agent's experiences in:

memory = SequentialMemory(limit=50000, window_length=1)

We need to specify a maximum size for this memory object, which is a hyperparameter. As new experiences are added to this memory and it becomes full, old experiences are forgotten.

Keras-RL Policy

Keras-RL provides an ε-greedy Q policy called rl.policy.EpsGreedyQPolicy that we can use to balance exploration and exploitation. We can use rl.policy.LinearAnnealedPolicy to decay our ε as the agent steps forward in the world, as shown in the following code:

policy = LinearAnnealedPolicy(EpsGreedyQPolicy(), attr='eps', value_max=1., value_min=.1, value_test=.05, nb_steps=10000)

Here we're saying that we want to start with a value of 1 for ε and go no smaller than 0.1, while testing whether our random number is less than 0.05. We set the number of steps between 1 and .1 to 10,000 and Keras-RL handles the decay math for us.

Agent

With a model, memory, and policy defined, we're now ready to create a deep Q network Agent and send that agent those objects.
Keras-RL provides an agent class called rl.agents.dqn.DQNAgent that we can use for this, as shown in the following code:

dqn = DQNAgent(model=model, nb_actions=num_actions, memory=memory, nb_steps_warmup=10, target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

Two of these parameters are probably unfamiliar at this point, target_model_update and nb_steps_warmup:

nb_steps_warmup: Determines how long we wait before we start doing experience replay, which, if you recall, is when we actually start training the network. This lets us build up enough experience to build a proper minibatch. If you choose a value for this parameter that's smaller than your batch size, Keras-RL will sample with replacement.

target_model_update: The Q function is recursive, and when the agent updates its network for Q(s,a), that update also impacts the prediction it will make for Q(s', a). This can make for a very unstable network. The way most deep Q network implementations address this limitation is by using a target network, which is a copy of the deep Q network that isn't trained, but rather replaced with a fresh copy every so often. The target_model_update parameter controls how often this happens.

Keras-RL Training

Keras-RL provides several Keras-like callbacks that allow for convenient model checkpointing and logging. We will use both of those callbacks below. If you would like to see more of the callbacks Keras-RL provides, they can be found here: https://github.com/matthiasplappert/keras-rl/blob/master/rl/callbacks.py. You can also find a Callback class that you can use to create your own Keras-RL callbacks. We will use the following code to train our model:

def build_callbacks(env_name):
    checkpoint_weights_filename = 'dqn_' + env_name + '_weights_{step}.h5f'
    log_filename = 'dqn_{}_log.json'.format(env_name)
    callbacks = [ModelIntervalCheckpoint(checkpoint_weights_filename, interval=5000)]
    callbacks += [FileLogger(log_filename, interval=100)]
    return callbacks

callbacks = build_callbacks(ENV_NAME)
dqn.fit(env, nb_steps=50000, visualize=False, verbose=2, callbacks=callbacks)

Once the agent's callbacks are built, we can fit the DQNAgent by using the .fit() method. Take note of the visualize parameter in this example. If visualize were set to True, we would be able to watch the agent interact with the environment as we went. However, this significantly slows down the training.

Results

After the first 250 episodes, we will see that the total rewards for the episode approach 200 and the episode steps also approach 200. This means that the agent has learned to balance the pole on the cart until the environment ends at a maximum of 200 steps.

It's of course fun to watch our success, so we can use the DQNAgent .test() method to evaluate for some number of episodes. The following code is used to define this method:

dqn.test(env, nb_episodes=5, visualize=True)

Here we've set visualize=True so we can watch our agent balance the pole, as shown in the following image:

There we go, that's one balanced pole! Alright, I know, I'll admit that balancing a pole on a cart isn't all that cool, but it's a good enough demonstration of the process! Hopefully, you have now understood the dynamics behind the process, and as we discussed earlier, the solution to this problem can be applied to other similar game-based problems.
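For convenience, here is one way the pieces above can be assembled into a single script. This is a minimal sketch rather than the book's listing: it assumes gym and keras-rl are installed, and it reuses the build_model() and build_callbacks() functions defined earlier in this article.

import gym
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import LinearAnnealedPolicy, EpsGreedyQPolicy

ENV_NAME = 'CartPole-v0'
env = gym.make(ENV_NAME)

# Derive the input and output sizes from the environment itself
state_size = env.observation_space.shape[0]
num_actions = env.action_space.n

model = build_model(state_size, num_actions)      # defined earlier in this article
memory = SequentialMemory(limit=50000, window_length=1)
policy = LinearAnnealedPolicy(EpsGreedyQPolicy(), attr='eps', value_max=1.,
                              value_min=.1, value_test=.05, nb_steps=10000)

dqn = DQNAgent(model=model, nb_actions=num_actions, memory=memory,
               nb_steps_warmup=10, target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

callbacks = build_callbacks(ENV_NAME)             # defined earlier in this article
dqn.fit(env, nb_steps=50000, visualize=False, verbose=2, callbacks=callbacks)
dqn.test(env, nb_episodes=5, visualize=True)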
If you found this article to be useful, make sure you check out the book Deep Learning Quick Reference to understand the other different types of reinforcement models you can build using Keras. Top 5 tools for reinforcement learning DeepCube: A new deep reinforcement learning approach solves the Rubik’s cube with no human help OpenAI builds reinforcement learning based system giving robots human like dexterity

9 Useful R Packages for NLP & Text Mining

Amey Varangaonkar
18 Dec 2017
6 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book Mastering Text Mining with R, co-authored by Ashish Kumar and Avinash Paul. This book lists various techniques to extract useful and high-quality information from your textual data.[/box] There is a wide range of packages available in R for natural language processing and text mining. In the article below, we present some of the popular and widely used R packages for NLP: OpenNLP OpenNLP is an R package which provides an interface, Apache OpenNLP, which is a  machine-learning-based toolkit written in Java for natural language processing activities. Apache OpenNLP is widely used for most common tasks in NLP, such as tokenization, POS tagging, named entity recognition (NER), chunking, parsing, and so on. It provides wrappers for Maxent entropy models using the Maxent Java package. It provides functions for sentence annotation, word annotation, POS tag annotation, and annotation parsing using the Apache OpenNLP chunking parser. The Maxent Chunk annotator function computes the chunk annotation using the Maxent chunker provided by OpenNLP. The Maxent entity annotator function in R package utilizes the Apache OpenNLP Maxent name finder for entity annotation. Model files can be downloaded from http://opennlp.sourceforge.net/models-1.5/. These language models can be effectively used in R packages by installing the OpenNLPmodels.language package from the repository at http://datacube.wu.ac.at. Get the OpenNLP package here. Rweka The RWeka package in R provides an interface to Weka. Weka is an open source software developed by a machine learning group at the University of Wakaito, which provides a wide range of machine learning algorithms which can either be directly applied to a dataset or it can be called from a Java code. Different data-mining activities, such as data processing, supervised and unsupervised learning, association mining, and so on, can be performed using the RWeka package. For natural language processing, RWeka provides tokenization and stemming functions. RWeka packages provide an interface to Alphabetic, NGramTokenizers, and wordTokenizer functions, which can efficiently perform tokenization for contiguous alphabetic sequence, string-split to n-grams, or simple word tokenization, respectively. Get started with Rweka here. RcmdrPlugin.temis The RcmdrPlugin.temis package in R provides a graphical integrated text-mining solution. This package can be leveraged for many text-mining tasks, such as importing and cleaning a corpus, terms and documents count, term co-occurrences, correspondence analysis, and so on. Corpora can be imported from different sources and analysed using the importCorpusDlg function. The package provides flexible data source options to import corpora from different sources, such as text files, spreadsheet files, XML, HTML files, Alceste format and Twitter search. The Import function in this package processes the corpus and generates a term-document matrix. The package provides different functions to summarize and visualize the corpus statistics. Correspondence analysis and hierarchical clustering can be performed on the corpus. The corpusDissimilarity function helps analyse and create a crossdissimilarity table between term-documents present in the corpus. This package provides many functions to help the users explore the corpus. 
For example, frequentTerms lists the most frequent terms of a corpus, specificTerms lists the terms most associated with each document, and subsetCorpusByTermsDlg creates a subset of the corpus. Term frequency, term co-occurrence, term dictionary, temporal evolution of occurrences or term time series, term metadata variables, and corpus temporal evolution are among the other very useful functions available in this package for text mining. Download the package from its CRAN page.

tm

The tm package is a text-mining framework which provides some powerful functions which will aid in text-processing steps. It has methods for importing data, handling corpora, metadata management, creation of term-document matrices, and preprocessing methods. For managing documents using the tm package, we create a corpus, which is a collection of text documents. There are two types of implementation, volatile corpus (VCorpus) and permanent corpus (PCorpus). VCorpus is completely held in memory, and when the R object is destroyed, the corpus is gone. PCorpus is stored in the filesystem and is present even after the R object is destroyed; these corpora can be created by using the VCorpus and PCorpus functions respectively. This package provides a few predefined sources which can be used to import text, such as DirSource, VectorSource, or DataframeSource. The getSources method lists available sources, and users can create their own sources. The tm package ships with several reader options: readPlain, readPDF, and readDOC. We can execute the getReaders method for an up-to-date list of available readers. To write a corpus to the filesystem, we can use writeCorpus. For inspecting a corpus, there are methods such as inspect and print. For transformation of text, such as stop-word removal, stemming, whitespace removal, and so on, we can use the tm_map, content_transformer, tolower, and stopwords("english") functions. For metadata management, meta comes in handy. The tm package provides various quantitative functions for text analysis, such as DocumentTermMatrix, findFreqTerms, findAssocs, and removeSparseTerms. Download the tm package here.

languageR

languageR provides data sets and functions for statistical analysis on text data. This package contains functions for vocabulary richness, vocabulary growth, frequency spectrum, mixed-effects models, and so on. There are simulation functions available: simple regression, quasi-F factor, and Latin-square designs. Apart from that, this package can also be used for correlation, collinearity diagnostics, diagnostic visualization of logistic models, and so on.

koRpus

The koRpus package is a versatile tool for text mining which implements many functions for text readability and lexical variation. Apart from that, it can also be used for basic-level functions such as tokenization and POS tagging. You can find more information about its current version and dependencies here.

RKEA

The RKEA package provides an interface to KEA, which is a tool for keyword extraction from texts. RKEA requires a keyword extraction model, which can be created by manually indexing a small set of texts, using which it extracts keywords from the document.

maxent

The maxent package in R provides tools for a low-memory implementation of multinomial logistic regression, which is also called the maximum entropy model. This package is quite helpful for classification processes involving sparse term-document matrices, and low memory consumption on huge datasets. Download and get started with maxent.
lsa

Truncated singular value decomposition can help overcome the variability in a term-document matrix by deriving the latent features statistically. The lsa package in R provides an implementation of latent semantic analysis.

The ease of use and efficiency of R packages can be very handy when carrying out even the trickiest of text mining tasks. As a result, they have grown to become very popular in the community. If you found this post useful, you should definitely refer to our book Mastering Text Mining with R. It will give you ample techniques for effective text mining and analytics using the above mentioned packages.
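As a closing illustration of how a couple of these packages are used in practice, the following minimal sketch builds and cleans a tiny corpus with tm and then lists its frequent terms. It assumes the tm package is installed; the three documents are throwaway examples:

library(tm)

# A tiny in-memory corpus; replace these placeholder sentences with real documents
docs <- c("Text mining in R is fun.",
          "The tm package builds term-document matrices.",
          "R has many packages for NLP and text mining.")
corpus <- VCorpus(VectorSource(docs))

# Basic cleaning with tm_map
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, stripWhitespace)

# Term-document matrix and frequent terms
tdm <- TermDocumentMatrix(corpus)
findFreqTerms(tdm, lowfreq = 2)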

Extending OpenAI Gym environments with Wrappers and Monitors [Tutorial]

Packt Editorial Staff
17 Jul 2018
9 min read
In this article we are going to discuss two OpenAI Gym functionalities; Wrappers and Monitors. These functionalities are present in OpenAI to make your life easier and your codes cleaner. It provides you these convenient frameworks to extend the functionality of your existing environment in a modular way and get familiar with an agent's activity. So, let's take a quick overview of these classes. This article is an extract taken from the book, Deep Reinforcement Learning Hands-On, Second Edition written by, Maxim Lapan. What are Wrappers? Very frequently, you will want to extend the environment's functionality in some generic way. For example, an environment gives you some observations, but you want to accumulate them in some buffer and provide to the agent the N last observations, which is a common scenario for dynamic computer games, when one single frame is just not enough to get full information about the game state. Another example is when you want to be able to crop or preprocess an image's pixels to make it more convenient for the agent to digest, or if you want to normalize reward scores somehow. There are many such situations which have the same structure: you'd like to “wrap” the existing environment and add some extra logic doing something. Gym provides you with a convenient framework for these situations, called a Wrapper class. How does a wrapper work? The class structure is shown on the following diagram. The Wrapper class inherits the Env class. Its constructor accepts the only argument: the instance of the Env class to be “wrapped”. To add extra functionality, you need to redefine the methods you want to extend like step() or reset(). The only requirement is to call the original method of the superclass. Figure 1: The hierarchy of Wrapper classes in Gym. To handle more specific requirements, like a Wrapper which wants to process only observations from the environment, or only actions, there are subclasses of Wrapper which allow filtering of only a specific portion of information. They are: ObservationWrapper: You need to redefine its observation(obs) method. Argument obs is an observation from the wrapped environment, and this method should return the observation which will be given to the agent. RewardWrapper: Exposes the method reward(rew), which could modify the reward value given to the agent. ActionWrapper: You need to override the method action(act) which could tweak the action passed to the wrapped environment to the agent. Now let’s implement some wrappers To make it slightly more practical, let's imagine a situation where we want to intervene in the stream of actions sent by the agent and, with a probability of 10%, replace the current action with random one. By issuing the random actions, we make our agent explore the environment and from time to time drift away from the beaten track of its policy. This is an easy thing to do using the ActionWrapper class. import gym from typing import TypeVar import random Action = TypeVar('Action') class RandomActionWrapper(gym.ActionWrapper):     def __init__(self, env, epsilon=0.1):         super(RandomActionWrapper, self).__init__(env)         self.epsilon = epsilon Here we initialize our wrapper by calling a parent's __init__ method and saving epsilon (a probability of a random action). 
def action(self, action):
    if random.random() < self.epsilon:
        print("Random!")
        return self.env.action_space.sample()
    return action

This is the method that we need to override from the parent's class to tweak the agent's actions. Every time we roll the die, with the probability of epsilon, we sample a random action from the action space and return it instead of the action the agent has sent to us. Please note, by using action_space and wrapper abstractions, we were able to write abstract code which will work with any environment from the Gym. Additionally, we print the message every time we replace the action, just to check that our wrapper is working. In production code, of course, this won't be necessary.

if __name__ == "__main__":
    env = RandomActionWrapper(gym.make("CartPole-v0"))

Now it's time to apply our wrapper. We create a normal CartPole environment and pass it to our wrapper constructor. From here on we use our wrapper as a normal Env instance, instead of the original CartPole. As the Wrapper class inherits the Env class and exposes the same interface, we can nest our wrappers in any combination we want. This is a powerful, elegant and generic solution:

    obs = env.reset()
    total_reward = 0.0
    while True:
        obs, reward, done, _ = env.step(0)
        total_reward += reward
        if done:
            break
    print("Reward got: %.2f" % total_reward)

Here is almost the same code, except that every time we issue the same action: 0. Our agent is dull and always does the same thing. By running the code, you should see that the wrapper is indeed working:

rl_book_samples/ch02$ python 03_random_actionwrapper.py
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Random!
Random!
Random!
Random!
Reward got: 12.00

If you want, you can play with the epsilon parameter on the wrapper's creation and check that randomness improves the agent's score on average. We should move on and look at another interesting gem hidden inside Gym: Monitor.

What is a Monitor?

Another class you should be aware of is Monitor. It is implemented like Wrapper and can write information about your agent's performance in a file, with optional video recording of your agent in action. Some time ago, it was possible to upload the result of the Monitor class' recording to the https://gym.openai.com website and see your agent's position in comparison to other people's results (see the following screenshot), but, unfortunately, at the end of August 2017, OpenAI decided to shut down this upload functionality and froze all the results. There are several activities to implement an alternative to the original website, but they are not ready yet. I hope this situation will be resolved soon, but at the time of writing it's not possible to check your result against those of others. Just to give you an idea of how the Gym web interface looked, here is the CartPole environment leaderboard:

Figure 2: OpenAI Gym web interface with CartPole submissions

Every submission in the web interface had details about training dynamics. For example, below is the author's solution for one of Doom's mini-games:

Figure 3: Submission dynamics on the DoomDefendLine environment.

Despite this, Monitor is still useful, as you can take a look at your agent's life inside the environment.

How to add Monitor to your agent

So, here is how we add Monitor to our random CartPole agent, which is the only difference (the whole code is in Chapter02/04_cartpole_random_monitor.py).
if __name__ == "__main__":    env = gym.make("CartPole-v0")    env = gym.wrappers.Monitor(env, "recording") The second argument we're passing to Monitor is the name of the directory it will write the results to. This directory shouldn't exist, otherwise your program will fail with an exception (to overcome this, you could either remove the existing directory or pass the force=True argument to Monitor class' constructor). The Monitor class requires the FFmpeg utility to be present on the system, which is used to convert captured observations into an output video file. This utility must be available, otherwise Monitor will raise an exception. The easiest way to install FFmpeg is by using your system's package manager, which is OS distribution-specific. To start this example, one of three extra prerequisites should be met: The code should be run in an X11 session with the OpenGL extension (GLX) The code should be started in an Xvfb virtual display You can use X11 forwarding in ssh connection The cause of this is video recording, which is done by taking screenshots of the window drawn by the environment. Some of the environment uses OpenGL to draw its picture, so the graphical mode with OpenGL needs to be present. This could be a problem for a virtual machine in the cloud, which physically doesn't have a monitor and graphical interface running. To overcome this, there is a special “virtual” graphical display, called Xvfb (X11 virtual framebuffer), which basically starts a virtual graphical display on the server and forces the program to draw inside it. That would be enough to make Monitor happily create the desired videos. To start your program in the Xvbf environment, you need to have it installed on your machine (it usually requires installing the package xvfb) and run the special script xvfb-run: $ xvfb-run -s "-screen 0 640x480x24" python 04_cartpole_random_monitor.py [2017-09-22 12:22:23,446] Making new env: CartPole-v0 [2017-09-22 12:22:23,451] Creating monitor directory recording [2017-09-22 12:22:23,570] Starting new video recorder writing to recording/openaigym.video.0.31179.video000000.mp4 Episode done in 14 steps, total reward 14.00 [2017-09-22 12:22:26,290] Finished writing results. You can upload them to the scoreboard via gym.upload('recording') As you may see from the log above, video has been written successfully, so you can peek inside one of your agent's sections by playing it. Another way to record your agent's actions is using ssh X11 forwarding, which uses ssh ability to tunnel X11 communications between the X11 client (Python code which wants to display some graphical information) and X11 server (software which knows how to display this information and has access to your physical display). In X11 architecture, the client and the server are separated and can work on different machines. To use this approach, you need the following: X11 server running on your local machine. Linux comes with X11 server as a standard component (all desktop environments are using X11). On a Windows machine you can set up third-party X11 implementations like open source VcXsrv (available in https://sourceforge.net/projects/vcxsrv/). The ability to log into your remote machine via ssh, passing –X command line option: ssh –X servername. This enables X11 tunneling and allows all processes started in this session to use your local display for graphics output. Then you can start a program which uses Monitor class and it will display the agent's actions, capturing the images into a video file. 
To summarize, we discussed the two extra functionalities in an OpenAI Gym; Wrappers and Monitors. To solve complex real world problems in Deep Learning, grab this practical guide Deep Reinforcement Learning Hands-On, Second Edition today. How Reinforcement Learning works How to implement Reinforcement Learning with TensorFlow Top 5 tools for reinforcement learning

Building your own Basic Behavior tree in Unity [Tutorial]

Natasha Mathur
11 Oct 2018
12 min read
Behavior trees (BTs) have been gaining popularity among game developers very steadily.  Games such as Halo and Gears of War are among the more famous franchises to make extensive use of BTs. An abundance of computing power in PCs, gaming consoles, and mobile devices has made them a good option for implementing AI in games of all types and scopes. In this tutorial, we will look at the basics of a behavior tree and its implementation.  Over the last decade, BTs have become the pattern of choice for many developers when it comes to implementing behavioral rules for their AI agents. This tutorial is an excerpt taken from the book 'Unity 2017 Game AI programming - Third Edition' written by Raymundo Barrera, Aung Sithu Kyaw, and Thet Naing Swe. Note: You need to have Unity 2017 installed on a system that has either Windows 7 SP1+, 8, 10, 64-bit versions or Mac OS X 10.9+. Let's first have a look at the basics of behavior trees. Learning the basics of behavior trees Behavior trees got their name from their hierarchical, branching system of nodes with a common parent, known as the root. Behavior trees mimic the real thing they are named after—in this case, trees, and their branching structure. If we were to visualize a behavior tree, it would look something like the following figure: A basic tree structure Of course, behavior trees can be made up of any number of nodes and child nodes. The nodes at the very end of the hierarchy are referred to as leaf nodes, just like a tree. Nodes can represent behaviors or tests. Unlike state machines, which rely on transition rules to traverse through them, a BT's flow is defined strictly by each node's order within the larger hierarchy. A BT begins evaluating from the top of the tree (based on the preceding visualization), then continues through each child, which, in turn, runs through each of its children until a condition is met or the leaf node is reached. BTs always begin evaluating from the root node. Evaluating the existing solutions - Unity Asset store and others The Unity asset store is an excellent resource for developers. Not only are you able to purchase art, audio, and other kinds of assets, but it is also populated with a large number of plugins and frameworks. Most relevant to our purposes, there are a number of behavior tree plugins available on the asset store, ranging from free to a few hundred dollars. Most, if not all, provide some sort of GUI to make visualizing and arranging a fairly painless experience. There are many advantages of going with an off-the-shelf solution from the asset store. Many of the frameworks include advanced functionality such as runtime (and often visual) debugging, robust APIs, serialization, and data-oriented tree support. Many even include sample leaf logic nodes to use in your game, minimizing the amount of coding you have to do to get up and running. Some other alternatives are Behavior Machine and Behavior Designer, which offer different pricing tiers (Behavior Machine even offers a free edition) and a wide array of useful features. Many other options can be found for free around the web as both generic C# and Unity-specific implementations. Ultimately, as with any other system, the choice of rolling your own or using an existing solution will depend on your time, budget, and project. Implementing a basic behavior tree framework Our example focuses on simple logic to highlight the functionality of the tree, rather than muddy up the example with complex game logic. 
The goal of our example is to make you feel comfortable with what can seem like an intimidating concept in game AI, and give you the necessary tools to build your own tree and expand upon the provided code if you do so. Implementing a base Node class There is a base functionality that needs to go into every node. Our simple framework will have all the nodes derived from a base abstract Node.cs class. This class will provide said base functionality or at least the signature to expand upon that functionality: using UnityEngine; using System.Collections; [System.Serializable] public abstract class Node { /* Delegate that returns the state of the node.*/ public delegate NodeStates NodeReturn(); /* The current state of the node */ protected NodeStates m_nodeState; public NodeStates nodeState { get { return m_nodeState; } } /* The constructor for the node */ public Node() {} /* Implementing classes use this method to evaluate the desired set of conditions */ public abstract NodeStates Evaluate(); } The class is fairly simple. Think of Node.cs as a blueprint for all the other node types to be built upon. We begin with the NodeReturn delegate, which is not implemented in our example, but the next two fields are. However, m_nodeState is the state of a node at any given point. As we learned earlier, it will be either FAILURE, SUCCESS, or RUNNING. The nodeState value is simply a getter for m_nodeState since it is protected and we don't want any other area of the code directly setting m_nodeState inadvertently. Next, we have an empty constructor, for the sake of being explicit, even though it is not being used. Lastly, we have the meat and potatoes of our Node.cs class—the Evaluate() method. As we'll see in the classes that implement Node.cs, Evaluate() is where the magic happens. It runs the code that determines the state of the node. Extending nodes to selectors To create a selector, we simply expand upon the functionality that we described in the Node.cs class: using UnityEngine; using System.Collections; using System.Collections.Generic; public class Selector : Node { /** The child nodes for this selector */ protected List<Node> m_nodes = new List<Node>(); /** The constructor requires a lsit of child nodes to be * passed in*/ public Selector(List<Node> nodes) { m_nodes = nodes; } /* If any of the children reports a success, the selector will * immediately report a success upwards. If all children fail, * it will report a failure instead.*/ public override NodeStates Evaluate() { foreach (Node node in m_nodes) { switch (node.Evaluate()) { case NodeStates.FAILURE: continue; case NodeStates.SUCCESS: m_nodeState = NodeStates.SUCCESS; return m_nodeState; case NodeStates.RUNNING: m_nodeState = NodeStates.RUNNING; return m_nodeState; default: continue; } } m_nodeState = NodeStates.FAILURE; return m_nodeState; } } As we learned earlier, selectors are composite nodes: this means that they have one or more child nodes. These child nodes are stored in the m_nodes List<Node> variable. Although it's conceivable that one could extend the functionality of this class to allow adding more child nodes after the class has been instantiated, we initially provide this list via the constructor. The next portion of the code is a bit more interesting as it shows us a real implementation of the concepts we learned earlier. The Evaluate() method runs through all of its child nodes and evaluates each one individually. 
As a failure doesn't necessarily mean a failure for the entire selector, if one of the children returns FAILURE, we simply continue on to the next one. Inversely, if any child returns SUCCESS, then we're all set; we can set this node's state accordingly and return that value. If we make it through the entire list of child nodes and none of them have returned SUCCESS, then we can essentially determine that the entire selector has failed and we assign and return a FAILURE state.

Moving on to sequences

Sequences are very similar in their implementation, but as you might have guessed by now, the Evaluate() method behaves differently: using UnityEngine; using System.Collections; using System.Collections.Generic; public class Sequence : Node { /** Children nodes that belong to this sequence */ private List<Node> m_nodes = new List<Node>(); /** Must provide an initial set of children nodes to work */ public Sequence(List<Node> nodes) { m_nodes = nodes; } /* If any child node returns a failure, the entire node fails. When all * nodes return a success, the node reports a success. */ public override NodeStates Evaluate() { bool anyChildRunning = false; foreach(Node node in m_nodes) { switch (node.Evaluate()) { case NodeStates.FAILURE: m_nodeState = NodeStates.FAILURE; return m_nodeState; case NodeStates.SUCCESS: continue; case NodeStates.RUNNING: anyChildRunning = true; continue; default: m_nodeState = NodeStates.SUCCESS; return m_nodeState; } } m_nodeState = anyChildRunning ? NodeStates.RUNNING : NodeStates.SUCCESS; return m_nodeState; } } The Evaluate() method in a sequence will need to return true for all the child nodes, and if any one of them fails during the process, the entire sequence fails, which is why we check for FAILURE first and set and report it accordingly. A SUCCESS state simply means we get to live to fight another day, and we continue on to the next child node. If any of the child nodes are determined to be in the RUNNING state, we report that as the state for the node, and then the parent node or the logic driving the entire tree can evaluate it again.

Implementing a decorator as an inverter

The structure of Inverter.cs is a bit different, but it derives from Node, just like the rest of the nodes. Let's take a look at the code and spot the differences: using UnityEngine; using System.Collections; public class Inverter : Node { /* Child node to evaluate */ private Node m_node; public Node node { get { return m_node; } } /* The constructor requires the child node that this inverter decorator * wraps*/ public Inverter(Node node) { m_node = node; } /* Reports a success if the child fails and * a failure if the child succeeds. Running will report * as running */ public override NodeStates Evaluate() { switch (m_node.Evaluate()) { case NodeStates.FAILURE: m_nodeState = NodeStates.SUCCESS; return m_nodeState; case NodeStates.SUCCESS: m_nodeState = NodeStates.FAILURE; return m_nodeState; case NodeStates.RUNNING: m_nodeState = NodeStates.RUNNING; return m_nodeState; } m_nodeState = NodeStates.SUCCESS; return m_nodeState; } } As you can see, since a decorator only has one child, we don't have List<Node>, but rather a single node variable, m_node. We pass this node in via the constructor (essentially requiring it), but there is no reason you couldn't modify this code to provide an empty constructor and a method to assign the child node after instantiation. The Evaluate() implementation implements the behavior of an inverter.
When the child evaluates as SUCCESS, the inverter reports a FAILURE, and when the child evaluates as FAILURE, the inverter reports a SUCCESS. The RUNNING state is reported normally. Creating a generic action node Now we arrive at ActionNode.cs, which is a generic leaf node to pass in some logic via a delegate. You are free to implement leaf nodes in any way that fits your logic, as long as it derives from Node. This particular example is equal parts flexible and restrictive. It's flexible in the sense that it allows you to pass in any method matching the delegate signature, but is restrictive for this very reason—it only provides one delegate signature that doesn't take in any arguments: using System; using UnityEngine; using System.Collections; public class ActionNode : Node { /* Method signature for the action. */ public delegate NodeStates ActionNodeDelegate(); /* The delegate that is called to evaluate this node */ private ActionNodeDelegate m_action; /* Because this node contains no logic itself, * the logic must be passed in in the form of * a delegate. As the signature states, the action * needs to return a NodeStates enum */ public ActionNode(ActionNodeDelegate action) { m_action = action; } /* Evaluates the node using the passed in delegate and * reports the resulting state as appropriate */ public override NodeStates Evaluate() { switch (m_action()) { case NodeStates.SUCCESS: m_nodeState = NodeStates.SUCCESS; return m_nodeState; case NodeStates.FAILURE: m_nodeState = NodeStates.FAILURE; return m_nodeState; case NodeStates.RUNNING: m_nodeState = NodeStates.RUNNING; return m_nodeState; default: m_nodeState = NodeStates.FAILURE; return m_nodeState; } } } The key to making this node work is the m_action delegate. For those familiar with C++, a delegate in C# can be thought of as a function pointer of sorts. You can also think of a delegate as a variable containing (or more accurately, pointing to) a function. This allows you to set the function to be called at runtime. The constructor requires you to pass in a method matching its signature and is expecting that method to return a NodeStates enum. That method can implement any logic you want, as long as these conditions are met. Unlike other nodes we've implemented, this one doesn't fall through to any state outside of the switch itself, so it defaults to a FAILURE state. You may choose to default to a SUCCESS or RUNNING state, if you so wish, by modifying the default return. You can easily expand on this class by deriving from it or simply making the changes to it that you need. You can also skip this generic action node altogether and implement one-off versions of specific leaf nodes, but it's good practice to reuse as much code as possible. Just remember to derive from Node and implement the required code! We learned basics of how a behavior tree works, then we created a sample behavior tree using our framework. If you found this post useful and want to learn other concepts in Behavior trees, be sure to check out the book 'Unity 2017 Game AI programming - Third Edition'. AI for game developers: 7 ways AI can take your game to the next level Techniques and Practices of Game AI

Working with a Webcam and Pi Camera

Packt
09 Feb 2016
13 min read
In this article by Ashwin Pajankar and Arush Kakkar, the author of the book Raspberry Pi By Example we will learn how to use different types and uses of cameras with our Pi. Let's take a look at the topics we will study and implement in this article: Working with a webcam Crontab Timelapse using a webcam Webcam video recording and playback Pi Camera and Pi NOIR comparison Timelapse using Pi Camera The PiCamera module in Python (For more resources related to this topic, see here.) Working with webcams USB webcams are a great way to capture images and videos. Raspberry Pi supports common USB webcams. To be on the safe side, here is a list of the webcams supported by Pi: http://elinux.org/RPi_USB_Webcams. I am using a Logitech HD c310 USB Webcam. You can purchase it online, and you can find the product details and the specifications at http://www.logitech.com/en-in/product/hd-webcam-c310. Attach your USB webcam to Raspberry Pi through the USB port on Pi and run the lsusb command in the terminal. This command lists all the USB devices connected to the computer. The output should be similar to the following output depending on which port is used to connect the USB webcam:   pi@raspberrypi ~/book/chapter04 $ lsusb Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. Bus 001 Device 004: ID 148f:2070 Ralink Technology, Corp. RT2070 Wireless Adapter Bus 001 Device 007: ID 046d:081b Logitech, Inc. Webcam C310 Bus 001 Device 006: ID 1c4f:0003 SiGma Micro HID controller Bus 001 Device 005: ID 1c4f:0002 SiGma Micro Keyboard TRACER Gamma Ivory Then, install the fswebcam utility by running the following command: sudo apt-get install fswebcam The fswebcam is a simple command-line utility that captures images with webcams for Linux computers. Once the installation is done, you can use the following command to create a directory for output images: mkdir /home/pi/book/output Then, run the following command to capture the image: fswebcam -r 1280x960 --no-banner ~/book/output/camtest.jpg This will capture an image with a resolution of 1280 x 960. You might want to try another resolution for your learning. The --no-banner command will disable the timestamp banner. The image will be saved with the filename mentioned. If you run this command multiple times with the same filename, the image file will be overwritten each time. So, make sure that you change the filename if you want to save previously captured images. The text output of the command should be similar to the following: --- Opening /dev/video0... Trying source module v4l2... /dev/video0 opened. No input was specified, using the first. --- Capturing frame... Corrupt JPEG data: 2 extraneous bytes before marker 0xd5 Captured frame in 0.00 seconds. --- Processing captured image... Disabling banner. Writing JPEG image to '/home/pi/book/output/camtest.jpg'. Crontab A cron is a time-based job scheduler in Unix-like computer operating systems. It is driven by a crontab (cron table) file, which is a configuration file that specifies shell commands to be run periodically on a given schedule. It is used to schedule commands or shell scripts to run periodically at a fixed time, date, or interval. 
The syntax for crontab in order to schedule a command or script is as follows:

1 2 3 4 5 /location/command

Here, the following are the definitions:

1: Minutes (0-59)
2: Hours (0-23)
3: Days (0-31)
4: Months [0-12 (1 for January)]
5: Days of the week [0-7 (7 or 0 for Sunday)]
/location/command: The script or command name to be scheduled

The crontab entry to run any script or command every minute is as follows:

* * * * * /location/command 2>&1

In the next section, we will learn how to use crontab to schedule a script to capture images periodically in order to create the timelapse sequence. You can refer to this URL for more details on crontab: http://www.adminschoice.com/crontab-quick-reference.

Creating a timelapse sequence using fswebcam

Timelapse photography means capturing photographs at regular intervals and playing the images back at a higher frequency than the one at which they were shot. For example, if you capture images with a frequency of one image per minute for 10 hours, you will get 600 images. If you combine all these images in a video with 30 images per second, you will get the 10 hours compressed into a 20-second timelapse video. You can use your USB webcam with Raspberry Pi to achieve this. We already know how to use the Raspberry Pi with a webcam and the fswebcam utility to capture an image. The trick is to write a script that captures images with different names and then add this script to crontab and make it run at regular intervals.

Begin by creating a directory for captured images:

mkdir /home/pi/book/output/timelapse

Open an editor of your choice, write the following code, and save it as timelapse.sh:

#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
fswebcam -r 1280x960 --no-banner /home/pi/book/output/timelapse/garden_$DATE.jpg

Make the script executable using:

chmod +x timelapse.sh

This shell script captures the image and saves it with the current timestamp in its name. Thus, we get an image with a new filename every time, as the filename contains the timestamp. The second line in the script creates the timestamp that we're using in the filename. Run this script manually once, and make sure that the image is saved in the /home/pi/book/output/timelapse directory with the garden_<timestamp>.jpg name.

To run this script at regular intervals, we need to schedule it in crontab. The crontab entry to run our script every minute is as follows:

* * * * * /home/pi/book/chapter04/timelapse.sh 2>&1

Open the crontab of the Pi user with crontab -e. It will open crontab with nano as the editor. Add the preceding line to crontab, save it, and exit it. Once you exit crontab, it will show the following message:

no crontab for pi - using an empty one
crontab: installing new crontab

Our timelapse webcam setup is now live. If you want to change the image capture frequency, then you have to change the crontab settings. To set it to every 5 minutes, change it to */5 * * * *. To set it for every 2 hours, use 0 */2 * * *. Make sure that your MicroSD card has enough free space to store all the images for the time duration for which you need to keep your timelapse setup.

Once you capture all the images, the next part is to encode them all in a fast playing video, preferably 20 to 30 frames per second. For this part, the mencoder utility is recommended. The following are the steps to create a timelapse video with mencoder on a Raspberry Pi or any Debian/Ubuntu machine: Install mencoder using sudo apt-get install mencoder.
Navigate to the output directory by issuing: cd /home/pi/book/output/timelapse Create a list of your timelapse sequence images using: ls garden_*.jpg > timelapse.txt Use the following command to create a video: mencoder -nosound -ovc lavc -lavcopts vcodec=mpeg4:aspect=16/9:vbitrate=8000000 -vf scale=1280:960 -o timelapse.avi -mf type=jpeg:fps=30 mf://@timelapse.txt This will create a video with name timelapse.avi in the current directory with all the images listed in timelapse.txt with a 30 fps frame rate. The statement contains the details of the video codec, aspect ratio, bit rate, and scale. For more information, you can run man mencoder on Command Prompt. We will cover how to play a video in the next section. Webcam video recording and playback We can use a webcam to record live videos using avconv. Install avconv using sudo apt-get install libav-tools. Use the following command to record a video: avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi It will show following output on the screen. pi@raspberrypi ~ $ avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi avconv version 9.14-6:9.14-1rpi1rpi1, Copyright (c) 2000-2014 the Libav developers built on Jul 22 2014 15:08:12 with gcc 4.6 (Debian 4.6.3-14+rpi1) [video4linux2 @ 0x5d6720] The driver changed the time per frame from 1/25 to 2/15 [video4linux2 @ 0x5d6720] Estimating duration from bitrate, this may be inaccurate Input #0, video4linux2, from '/dev/video0': Duration: N/A, start: 629.030244, bitrate: 147456 kb/s Stream #0.0: Video: rawvideo, yuyv422, 1280x960, 147456 kb/s, 1000k tbn, 7.50 tbc Output #0, avi, to '/home/pi/book/output/VideoStream.avi': Metadata: ISFT : Lavf54.20.4 Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 25 tbn, 25 tbc Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> mpeg4) Press ctrl-c to stop encoding frame= 182 fps= 7 q=31.0 Lsize= 802kB time=7.28 bitrate= 902.4kbits/s video:792kB audio:0kB global headers:0kB muxing overhead 1.249878% Received signal 2: terminating. You can terminate the recording sequence by pressing Ctrl + C. We can play the video using omxplayer. It comes with the latest raspbian, so there is no need to install it. To play a file with the name vid.mjpg, use the following command: omxplayer ~/book/output/VideoStream.avi It will play the video and display some output similar to the one here: pi@raspberrypi ~ $ omxplayer ~/book/output/VideoStream.avi Video codec omx-mpeg4 width 1280 height 960 profile 0 fps 25.000000 Subtitle count: 0, state: off, index: 1, delay: 0 V:PortSettingsChanged: 1280x960@25.00 interlace:0 deinterlace:0 anaglyph:0 par:1.00 layer:0 have a nice day ;) Try playing timelapse and record videos using omxplayer. Working with the Pi Camera and NoIR Camera Modules These camera modules are specially manufactured for Raspberry Pi and work with all the available models. You will need to connect the camera module to the CSI port, located behind the Ethernet port, and activate the camera using the raspi-config utility if you haven't already. You can find the video instructions to connect the camera module to Raspberry Pi at http://www.raspberrypi.org/help/camera-module-setup/. This page lists the types of camera modules available: http://www.raspberrypi.org/products/. Two types of camera modules are available for the Pi. 
Working with the Pi Camera and NoIR Camera Modules

These camera modules are specially manufactured for Raspberry Pi and work with all the available models. You will need to connect the camera module to the CSI port, located behind the Ethernet port, and activate the camera using the raspi-config utility if you haven't already. You can find the video instructions to connect the camera module to Raspberry Pi at http://www.raspberrypi.org/help/camera-module-setup/. This page lists the types of camera modules available: http://www.raspberrypi.org/products/. Two types of camera modules are available for the Pi. These are Pi Camera and Pi NoIR camera, and they can be found at https://www.raspberrypi.org/products/camera-module/ and https://www.raspberrypi.org/products/pi-noir-camera/, respectively.

The following image shows Pi Camera and Pi NoIR Camera boards side by side: The following image shows the Pi Camera board connected to the Pi: The following is an image of the Pi camera board placed in the camera case:

The main difference between Pi Camera and Pi NoIR Camera is that Pi Camera gives better results in good lighting conditions, whereas Pi NoIR (NoIR stands for No-Infra Red) is used for low light photography. To use the NoIR Camera in complete darkness, we need to flood the object to be photographed with infrared light. This is a good time to take a look at the various enclosures for Raspberry Pi models. You can find various cases available online at https://www.adafruit.com/categories/289. An example of a Raspberry Pi case is as follows:

Using raspistill and raspivid

To capture images and videos using the Raspberry Pi camera module, we need to use the raspistill and raspivid utilities. To capture an image, run the following command:

raspistill -o cam_module_pic.jpg

This will capture and save the image with the name cam_module_pic.jpg in the current directory. To capture a 20-second video with the camera module, run the following command:

raspivid -o test.avi -t 20000

This will capture and save the video with the name test.avi in the current directory. Unlike fswebcam and avconv, raspistill and raspivid do not write anything to the console. So, you need to check the current directory for the output. Also, one can run the echo $? command to check whether these commands executed successfully. We can also mention the complete location of the file to be saved in these commands, as shown in the following example:

raspistill -o /home/pi/book/output/cam_module_pic.jpg

Just like fswebcam, raspistill can be used to record the timelapse sequence. In our timelapse shell script, replace the line that contains fswebcam with the appropriate raspistill command to capture the timelapse sequence and use mencoder again to create the video. This is left as an exercise for the readers.

Now, let's take a look at the images taken with the Pi camera under different lighting conditions. The following is the image with normal lighting and the backlight: The following is the image with only the backlight: The following is the image with normal lighting and no backlight:

For NoIR camera usage in the night under low light conditions, use an IR illuminator light for better results. You can get it online. A typical off-the-shelf LED IR illuminator suitable for our purpose will look like the one shown here:

Using picamera in Python with the Pi Camera module

picamera is a Python package that provides a programming interface to the Pi Camera module. The most recent version of Raspbian has picamera preinstalled. If you do not have it installed, you can install it using:

sudo apt-get install python-picamera

The following program quickly demonstrates the basic usage of the picamera module to capture an image:

import picamera
import time

with picamera.PiCamera() as cam:
    cam.resolution = (1024, 768)
    cam.start_preview()
    time.sleep(5)
    cam.capture('/home/pi/book/output/still.jpg')

We have to import the time and picamera modules first. cam.start_preview() will start the preview, and time.sleep(5) will wait for 5 seconds before cam.capture() captures and saves the image to the specified file. There is a built-in function in picamera for timelapse photography.
The following program demonstrates its usage:

import picamera
import time

with picamera.PiCamera() as cam:
    cam.resolution = (1024, 768)
    cam.start_preview()
    time.sleep(3)
    for count, imagefile in enumerate(cam.capture_continuous('/home/pi/book/output/image{counter:02d}.jpg')):
        print 'Capturing and saving ' + imagefile
        time.sleep(1)
        if count == 10:
            break

In the preceding code, cam.capture_continuous() is used to capture the timelapse sequence using the Pi camera module. Check out more examples and API references for the picamera module at http://picamera.readthedocs.org/.

The Pi camera versus the webcam

Now, after using the webcam and the Pi camera, it's a good time to understand the differences, the pros, and the cons of using these. The Pi camera board does not use a USB port and is directly interfaced to the Pi. So, it provides better performance than a webcam in terms of frame rate and resolution. We can directly use the picamera module in Python to work on images and videos. However, the Pi camera cannot be used with any other computer. A webcam uses a USB port as its interface, and because of that, it can be used with any computer. However, compared to the Pi camera, its performance is lower in terms of frame rate and resolution.

Summary

In this article, we learned how to use a webcam and the Pi camera. We also learned how to use utilities such as fswebcam, avconv, raspistill, raspivid, mencoder, and omxplayer. We covered how to use crontab. We used the Python picamera module to programmatically work with the Pi camera board. Finally, we compared the Pi camera and the webcam. We will be reusing all the code examples and concepts for some real-life projects soon.

Resources for Article:
Further resources on this subject:
Introduction to the Raspberry Pi's Architecture and Setup [article]
Raspberry Pi LED Blueprints [article]
Hacking a Raspberry Pi project? Understand electronics first! [article]
Installing jQuery

Packt
04 Jun 2015
25 min read
In this article by Alex Libby, author of the book Mastering jQuery, we will examine some of the options available to help develop your skills even further. (For more resources related to this topic, see here.)

Local or CDN, I wonder…? Which version…? Do I support old IE…? Installing jQuery is a thankless task that has to be done countless times by any developer—it is easy to imagine that person asking some of those questions. It is easy to imagine why most people go with the option of using a Content Delivery Network (CDN) link, but there is more to installing jQuery than taking the easy route! There are more options available, where we can be really specific about what we need to use—and throughout this article, we will explore some of them. We'll cover a number of topics, which include:

Downloading and installing jQuery
Customizing jQuery downloads
Building from Git
Using other sources to install jQuery
Adding source map support
Working with Modernizr as a fallback

Intrigued? Let's get started.

Downloading and installing jQuery

As with all projects that require the use of jQuery, we must start somewhere—no doubt you've downloaded and installed jQuery a thousand times; let's just quickly recap to bring ourselves up to speed. If we browse to http://www.jquery.com/download, we can download jQuery using one of the two methods: downloading the compressed production version or the uncompressed development version. If we don't need to support old IE (IE6, 7, and 8), then we can choose the 2.x branch. If, however, you still have some diehards who can't (or don't want to) upgrade, then the 1.x branch must be used instead. To include jQuery, we just need to add this link to our page:

<script src="http://code.jquery.com/jquery-X.X.X.js"></script>

Here, X.X.X marks the version number of jQuery or the Migrate plugin that is being used in the page.

Conventional wisdom states that the jQuery script (and this includes the Migrate plugin too) should be added to the <head> tag, although there are valid arguments to add it as the last statement before the closing </body> tag; placing it here may help speed up loading times to your site. This argument is not set in stone; there may be instances where placing it in the <head> tag is necessary and this choice should be left to the developer's requirements. My personal preference is to place it in the <head> tag as it provides a clean separation of the script (and the CSS) code from the main markup in the body of the page, particularly on lighter sites. I have even seen some developers argue that there is little perceived difference if jQuery is added at the top, rather than at the bottom; some systems, such as WordPress, include jQuery in the <head> section too, so either will work. The key here though is that if you are perceiving slowness, then move your scripts to just before the closing </body> tag, which is considered a better practice.

Using jQuery in a development capacity

A useful point to note at this stage is that best practice recommends that CDN links should not be used within a development capacity; instead, the uncompressed files should be downloaded and referenced locally. Once the site is complete and is ready to be uploaded, then CDN links can be used.

Adding the jQuery Migrate plugin

If you've used any version of jQuery prior to 1.9, then it is worth adding the jQuery Migrate plugin to your pages.
The jQuery Core team made some significant changes to jQuery from this version; the Migrate plugin will temporarily restore the functionality until such time that the old code can be updated or replaced. The plugin adds three properties and a method to the jQuery object, which we can use to control its behavior:

jQuery.migrateWarnings: This is an array of string warning messages that have been generated by the code on the page, in the order in which they were generated. Messages appear in the array only once, even if the condition has occurred multiple times, unless jQuery.migrateReset() is called.
jQuery.migrateMute: Set this property to true in order to prevent console warnings from being generated in the debugging version. If this property is set, the jQuery.migrateWarnings array is still maintained, which allows programmatic inspection without console output.
jQuery.migrateTrace: Set this property to false if you want warnings but don't want traces to appear on the console.
jQuery.migrateReset(): This method clears the jQuery.migrateWarnings array and "forgets" the list of messages that have been seen already.

Adding the plugin is equally simple—all you need to do is add a link similar to this, where X.X.X represents the version number of the plugin that is used:

<script src="http://code.jquery.com/jquery-migrate-X.X.X.js"></script>

If you want to learn more about the plugin and obtain the source code, then it is available for download from https://github.com/jquery/jquery-migrate.

Using a CDN

We can equally use a CDN link to provide our jQuery library—the principal link is provided by MaxCDN for the jQuery team, with the current version available at http://code.jquery.com. We can, of course, use CDN links from some alternative sources, if preferred—a reminder of these is as follows:

Google (https://developers.google.com/speed/libraries/devguide#jquery)
Microsoft (http://www.asp.net/ajaxlibrary/cdn.ashx#jQuery_Releases_on_the_CDN_0)
CDNJS (http://cdnjs.com/libraries/jquery/)
jsDelivr (http://www.jsdelivr.com/#%!jquery)

Don't forget though that, if needed, we can always save a copy of the file provided on the CDN locally and reference this instead. The jQuery CDN will always have the latest version, although it may take a couple of days for updates to appear via the other links.

Using other sources to install jQuery

Right. Okay, let's move on and develop some code! "What's next?" I hear you ask. Aha! If you thought downloading and installing jQuery from the main site was the only way to do this, then you are wrong! After all, this is about mastering jQuery, so you didn't think I would only talk about something that I am sure you are already familiar with, right? Yes, there are more options available to us to install jQuery than simply using the CDN or main download page. Let's begin by taking a look at using Node. Each demo is based on Windows, as this is the author's preferred platform; alternatives are given, where possible, for other platforms.

Using Node JS to install jQuery

So far, we've seen how to download and reference jQuery, which is to use the download from the main jQuery site or via a CDN. The downside of this method is the manual work required to keep our versions of jQuery up to date! Instead, we can use a package manager to help manage our assets. Node.js is one such system.
Let's take a look at the steps that need to be performed in order to get jQuery installed:

We first need to install Node.js—head over to http://www.nodejs.org in order to download the package for your chosen platform; accept all the defaults when working through the wizard (for Mac and PC). Next, fire up a Node Command Prompt and then change to your project folder. In the prompt, enter this command:

npm install jquery

Node will fetch and install jQuery—it displays a confirmation message when the installation is complete. You can then reference jQuery by using this link: <name of drive>:\website\node_modules\jquery\dist\jquery.min.js.

Node is now installed and ready for use—although we've installed it in a folder locally, in reality, we will most likely install it within a subfolder of our local web server. For example, if we're running WampServer, we can install it, then copy it into the /wamp/www/js folder, and reference it using http://localhost/js/jquery.min.js. If you want to take a look at the source of the jQuery Node Package Manager (NPM) package, then check out https://www.npmjs.org/package/jquery.

Using Node to install jQuery makes our work simpler, but at a cost. Node.js (and its package manager, NPM) is primarily aimed at installing and managing JavaScript components and expects packages to follow the CommonJS standard. The downside of this is that there is no scope to manage any of the other assets that are often used within websites, such as fonts, images, CSS files, or even HTML pages. "Why will this be an issue?" I hear you ask. Simple—why make life hard for ourselves when we can manage all of these assets automatically and still use Node?

Installing jQuery using Bower

A relatively new addition to the library is the support for installation using Bower—based on Node, it's a package manager that takes care of the fetching and installing of packages from over the Internet. It is designed to be far more flexible about managing the handling of multiple types of assets (such as images, fonts, and CSS files) and does not interfere with how these components are used within a page (unlike Node). For the purpose of this demo, I will assume that you have already installed it; if not, you will need to revisit it before continuing with the following steps:

Bring up the Node Command Prompt, change to the drive where you want to install jQuery, and enter this command:

bower install jquery

This will download and install the script, displaying the confirmation of the version installed when it has completed. The library is installed in the bower_components folder on your PC. It will look similar to this example, where I've navigated to the jquery subfolder underneath. By default, Bower will install jQuery in its bower_components folder. Within bower_components/jquery/dist/, we will find an uncompressed version, compressed release, and source map file. We can then reference jQuery in our script using this line:

<script src="/bower_components/jquery/jquery.js"></script>

We can take this further though. If we don't want to install the extra files that come with a Bower installation by default, we can simply enter this in a Command Prompt instead to just install the minified version 2.1 of jQuery:

bower install http://code.jquery.com/jquery-2.1.0.min.js

Now, we can be really clever at this point; as Bower uses Node's JSON files to control what should be installed, we can use this to be really selective and set Bower to install additional components at the same time.
Let's take a look and see how this will work—in the following example, we'll use Bower to install jQuery 2.1 and 1.10 (the latter to provide support for IE6-8). In the Node Command Prompt, enter the following command: bower init This will prompt you for answers to a series of questions, at which point you can either fill out information or press Enter to accept the defaults. Look in the project folder; you should find a bower.json file within. Open it in your favorite text editor and then alter the code as shown here: {"ignore": [ "**/.*", "node_modules", "bower_components","test", "tests" ] ,"dependencies": {"jquery-legacy": "jquery#1.11.1","jquery-modern": "jquery#2.10"}} At this point, you have a bower.json file that is ready for use. Bower is built on top of Git, so in order to install jQuery using your file, you will normally need to publish it to the Bower repository. Instead, you can install an additional Bower package, which will allow you to install your custom package without the need to publish it to the Bower repository: In the Node Command Prompt window, enter the following at the prompt: npm install -g bower-installer When the installation is complete, change to your project folder and then enter this command line: bower-installer The bower-installer command will now download and install both the versions of jQuery. At this stage, you now have jQuery installed using Bower. You're free to upgrade or remove jQuery using the normal Bower process at some point in the future. If you want to learn more about how to use Bower, there are plenty of references online; https://www.openshift.com/blogs/day-1-bower-manage-your-client-side-dependencies is a good example of a tutorial that will help you get accustomed to using Bower. In addition, there is a useful article that discusses both Bower and Node, available at http://tech.pro/tutorial/1190/package-managers-an-introductory-guide-for-the-uninitiated-front-end-developer. Bower isn't the only way to install jQuery though—while we can use it to install multiple versions of jQuery, for example, we're still limited to installing the entire jQuery library. We can improve on this by referencing only the elements we need within the library. Thanks to some extensive work undertaken by the jQuery Core team, we can use the Asynchronous Module Definition (AMD) approach to reference only those modules that are needed within our website or online application. Using the AMD approach to load jQuery In most instances, when using jQuery, developers are likely to simply include a reference to the main library in their code. There is nothing wrong with it per se, but it loads a lot of extra code that is surplus to our requirements. A more efficient method, although one that takes a little effort in getting used to, is to use the AMD approach. In a nutshell, the jQuery team has made the library more modular; this allows you to use a loader such as require.js to load individual modules when needed. It's not suitable for every approach, particularly if you are a heavy user of different parts of the library. However, for those instances where you only need a limited number of modules, then this is a perfect route to take. Let's work through a simple example to see what it looks like in practice. Before we start, we need one additional item—the code uses the Fira Sans regular custom font, which is available from Font Squirrel at http://www.fontsquirrel.com/fonts/fira-sans. 
Let's make a start using the following steps:

The Fira Sans font doesn't come with a web format by default, so we need to convert the font to use the web font format. Go ahead and upload the FiraSans-Regular.otf file to Font Squirrel's web font generator at http://www.fontsquirrel.com/tools/webfont-generator. When prompted, save the converted file to your project folder in a subfolder called fonts.

We need to install jQuery and RequireJS into our project folder, so fire up a Node.js Command Prompt and change to the project folder. Next, enter these commands one by one, pressing Enter after each:

bower install jquery
bower install requirejs

We need to extract a copy of the amd.html and amd.css files—the amd.html file contains some simple markup along with a link to require.js; the amd.css file contains some basic styling that we will use in our demo.

We now need to add in this code block, immediately below the link for require.js—this handles the calls to jQuery and RequireJS, where we're calling in both jQuery and Sizzle, the selector engine for jQuery:

<script>
require.config({
  paths: {
    "jquery": "bower_components/jquery/src",
    "sizzle": "bower_components/jquery/src/sizzle/dist/sizzle"
  }
});
require(["js/app"]);
</script>

Now that jQuery has been defined, we need to call in the relevant modules. In a new file, go ahead and add the following code, saving it as app.js in a subfolder marked js within our project folder:

define(["jquery/core/init", "jquery/attributes/classes"], function($) {
  $("div").addClass("decoration");
});

We used app.js as the filename to tie in with the require(["js/app"]); reference in the code. If all went well, the decoration class will be applied to the <div> element when previewing the results of our work in a browser.

Although we've only worked with a simple example here, it's enough to demonstrate how easy it is to only call those modules we need to use in our code rather than call the entire jQuery library. True, we still have to provide a link to the library, but this is only to tell our code where to find it; our module code weighs in at 29 KB (10 KB when gzipped), against 242 KB for the uncompressed version of the full library!

Now, there may be instances where simply referencing modules using this method isn't the right approach; this may apply if you need to reference lots of different modules regularly. A better alternative is to build a custom version of the jQuery library that only contains the modules that we need to use, with the rest removed during the build. It's a little more involved but worth the effort—let's take a look at what is involved in the process.

Customizing the downloads of jQuery from Git

If we feel so inclined, we can really push the boat out and build a custom version of jQuery using the JavaScript task runner, Grunt. The process is relatively straightforward but involves a few steps; it will certainly help if you have some prior familiarity with Git! The demo assumes that you have already installed Node.js—if you haven't, then you will need to do this first before continuing with the exercise. Okay, let's make a start by performing the following steps:

You first need to install Grunt if it isn't already present on your system—bring up the Node.js Command Prompt and enter this command:

npm install -g grunt-cli

Next, install Git—for this, browse to http://msysgit.github.io/ in order to download the package. Double-click on the setup file to launch the wizard; accepting all the defaults is sufficient for our needs.
If you want more information on how to install Git, head over and take a look at https://github.com/msysgit/msysgit/wiki/InstallMSysGit for more details.

Once Git is installed, clone the jQuery repository (for example, with git clone https://github.com/jquery/jquery.git), then change to the jquery folder from within the Command Prompt and enter this command to download and install the dependencies needed to build jQuery:

npm install

The final stage of the build process is to build the library into the file we all know and love; from the same Command Prompt, enter this command:

grunt

Browse to the jquery folder—within this will be a folder called dist, which contains our custom build of jQuery, ready for use.

If there are modules within the library that we don't need, we can run a custom build. We can set the Grunt task to remove these when building the library, leaving in those that are needed for our project. For a complete list of all the modules that we can exclude, see https://github.com/jquery/jquery#modules. For example, to remove AJAX support from our build, we can run this command in place of the plain grunt command shown previously:

grunt custom:-ajax

This results in a saving of around 30 KB on the original raw version, as shown in the following screenshot. The JavaScript and map files can now be incorporated into our projects in the usual way. For a detailed tutorial on the build process, this article by Dan Wellman is worth a read (https://www.packtpub.com/books/content/building-custom-version-jquery).

Using a GUI as an alternative

There is an online GUI available, which performs much the same tasks, without the need to install Git or Grunt. It's available at http://projects.jga.me/jquery-builder/, although it is worth noting that it hasn't been updated for a while!

Okay, so we have jQuery installed; let's take a look at one more useful function that will help in the event of debugging errors in our code. Support for source maps has been made available within jQuery since version 1.9. Let's take a look at how they work and see a simple example in action.

Adding source map support

Imagine a scenario, if you will, where you've created a killer site, which is running well, until you start getting complaints about problems with some of the jQuery-based functionality that is used on the site. Sounds familiar? Using an uncompressed version of jQuery on a production site is not an option; instead we can use source maps. Simply put, these map a compressed version of jQuery against the relevant line in the original source. Historically, source maps have given developers a lot of heartache when implementing, to the extent that the jQuery Team had to revert to disabling the automatic use of maps! For best effects, it is recommended that you use a local web server, such as WAMP (PC) or MAMP (Mac), to view this demo and that you use Chrome as your browser. Source maps are not difficult to implement; let's run through how you can implement them:

Extract a copy of the sourcemap folder and save it to your project area locally.
Press Ctrl + Shift + I to bring up the Developer Tools in Chrome.
Click on Sources, then double-click on the sourcemap.html file; in the code window, click on line 17.
Now, run the demo in Chrome—we will see it paused; revert back to the developer toolbar, where line 17 is highlighted.
The relevant calls to the jQuery library are shown on the right-hand side of the screen. If we double-click on the n.event.dispatch entry on the right, Chrome refreshes the toolbar and displays the original source line (highlighted) from the jQuery library, as shown here:

It is well worth spending the time to get to know source maps—all the latest browsers support them, including IE11. Even though we've only used a simple example here, it doesn't matter as the principle is exactly the same, no matter how much code is used in the site. For a more in-depth tutorial that covers all the browsers, it is worth heading over to http://blogs.msdn.com/b/davrous/archive/2014/08/22/enhance-your-javascript-debugging-life-thanks-to-the-source-map-support-available-in-ie11-chrome-opera-amp-firefox.aspx—it is worth a read!

Adding support for source maps

In the demo we've just previewed, source map support had already been added to the library. It is worth noting though that source maps are not included with the current versions of jQuery by default. If you need to download a more recent version or add support for the first time, then follow these steps:

Source maps can be downloaded from the main site using http://code.jquery.com/jquery-X.X.X.min.map, where X.X.X represents the version number of jQuery being used.
Open a copy of the minified version of the library and then add this line at the end of the file:

//# sourceMappingURL=jquery.min.map

Save it and then store it in the JavaScript folder of your project. Make sure you have copies of both the compressed and uncompressed versions of the library within the same folder.

Let's move on and look at one more critical part of loading jQuery: if, for some unknown reason, jQuery becomes completely unavailable, then we can add a fallback position to our site that allows graceful degradation. It's a small but crucial part of any site and presents a better user experience than your site simply falling over!

Working with Modernizr as a fallback

A best practice when working with jQuery is to ensure that a fallback is provided for the library, should the primary version not be available. (Yes, it's irritating when it happens, but it can happen!) Typically, we might use a little JavaScript, such as the example shown in the best practice suggestions later in this article. This would work perfectly well but doesn't provide a graceful fallback. Instead, we can use Modernizr to perform the check for us and provide a graceful degradation if all fails. Modernizr is a feature detection library for HTML5/CSS3, which can be used to provide a standardized fallback mechanism in the event of functionality not being available. You can learn more at http://www.modernizr.com.

As an example, the code might look like this at the end of our website page. We first try to load jQuery using the CDN link, falling back to a local copy if that hasn't worked, or an alternative if both fail:

<body>
<script src="js/modernizr.js"></script>
<script type="text/javascript">
Modernizr.load([{
  load: 'http://code.jquery.com/jquery-2.1.1.min.js',
  complete: function () {
    // Confirm if jQuery was loaded using the CDN link
    // if not, fall back to the local version
    if ( !window.jQuery ) {
      Modernizr.load('js/jquery-latest.min.js');
    }
  }
},
// This script would wait until the fallback is loaded, before loading
{ load: 'jquery-example.js' }
]);
</script>
</body>

In this way, we can ensure that jQuery either loads locally or from the CDN link—if all else fails, then we can at least make a graceful exit.
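To illustrate what that graceful exit might look like, the complete callback can be extended so that, if neither the CDN nor the local copy loads, the page flags the failure and lets CSS present a fallback. This is only a sketch to show the idea; the no-jquery class name and the nested fallback check are assumptions and not part of the original example:

<script>
Modernizr.load([{
  load: 'http://code.jquery.com/jquery-2.1.1.min.js',
  complete: function () {
    if (!window.jQuery) {
      // CDN failed - try the local copy
      Modernizr.load({
        load: 'js/jquery-latest.min.js',
        complete: function () {
          if (!window.jQuery) {
            // Both sources failed - mark the page so CSS can react,
            // for example by revealing a static "reduced functionality" notice
            document.documentElement.className += ' no-jquery';
          }
        }
      });
    }
  }
},
{ load: 'jquery-example.js' }
]);
</script>

A matching CSS rule such as .no-jquery .requires-jquery { display: none; } (or one that reveals a static alternative) then degrades the page gracefully instead of letting scripts fail silently.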
Best practices for loading jQuery

So far, we've examined several ways of loading jQuery into our pages, over and above the usual route of downloading the library locally or using a CDN link in our code. Now that we have it installed, it's a good opportunity to cover some of the best practices we should try to incorporate into our pages when loading jQuery:

Always try to use a CDN to include jQuery on your production site. We can take advantage of the high availability and low latency offered by CDN services; the library may already be precached too, avoiding the need to download it again.

Try to implement a fallback on your locally hosted library of the same version. If CDN links become unavailable (and they are not 100 percent infallible), then the local version will kick in automatically, until the CDN link becomes available again:

<script type="text/javascript" src="//code.jquery.com/jquery-1.11.1.min.js"></script>
<script>window.jQuery || document.write('<script src="js/jquery-1.11.1.min.js"><\/script>')</script>

Note that although this will work equally well as using Modernizr, it doesn't provide a graceful fallback if both versions of jQuery should become unavailable. Although one hopes to never be in this position, at least we can use CSS to provide a graceful exit!

Use protocol-relative/protocol-independent URLs; the browser will automatically determine which protocol to use. If HTTPS is not available, then it will fall back to HTTP. If you look carefully at the code in the previous point, it shows a perfect example of a protocol-independent URL, with the call to jQuery from the main jQuery Core site.

If possible, keep all your JavaScript and jQuery inclusions at the bottom of your page—scripts block the rendering of the rest of the page until they have been fully loaded and executed.

Use the jQuery 2.x branch, unless you need to support IE6-8; in this case, use jQuery 1.x instead—do not load multiple jQuery versions.

If you load jQuery using a CDN link, always specify the complete version number you want to load, such as jquery-1.11.1.min.js.

If you are using other libraries, such as Prototype, MooTools, Zepto, and so on, that use the $ sign as well, try not to use $ to call jQuery functions and simply use jQuery instead. You can return the control of $ back to the other library with a call to the $.noConflict() function.

For advanced browser feature detection, use Modernizr.

It is worth noting that there may be instances where it isn't always possible to follow best practices; circumstances may dictate that we need to make allowances for requirements, where best practices can't be used. However, this should be kept to a minimum where possible; one might argue that there are flaws in our design if most of the code doesn't follow best practices!

Summary

If you thought that the only methods to include jQuery were via a manual download or using a CDN link, then hopefully this article has opened your eyes to some alternatives—let's take a moment to recap what we have learned. We kicked off with a customary look at how most developers are likely to include jQuery before quickly moving on to look at other sources. We started with a look at how to use Node, before turning our attention to using the Bower package manager. Next, we had a look at how we can reference individual modules within jQuery using the AMD approach. We then moved on and turned our attention to creating custom builds of the library using Git.
We then covered how we can use source maps to debug our code, with a look at enabling support for them within Google's Chrome browser. To round out our journey of loading jQuery, we saw what might happen if we can't load jQuery at all and how we can get around this, by using Modernizr to allow our pages to degrade gracefully. We then finished the article with some of the best practices that we can follow when referencing jQuery. Resources for Article: Further resources on this subject: Using different jQuery event listeners for responsive interaction [Article] Building a Custom Version of jQuery [Article] Learning jQuery [Article]
Getting started with Digital forensics using Autopsy

Savia Lobo
24 May 2018
10 min read
Digital forensics involves the preservation, acquisition, documentation, analysis, and interpretation of evidence from various storage media types. It is not only limited to laptops, desktops, tablets, and mobile devices but also extends to data in transit which is transmitted across public or private networks. In this tutorial, we will cover how one can carry out digital forensics with Autopsy. Autopsy is a digital forensics platform and graphical interface to the sleuth kit and other digital forensics tools. This article is an excerpt taken from the book, 'Digital Forensics with Kali Linux', written by Shiva V.N. Parasram. Let's proceed with the analysis using the Autopsy browser by first getting acquainted with the different ways to start Autopsy. Starting Autopsy Autopsy can be started in two ways. The first uses the Applications menu by clicking on Applications | 11 - Forensics | autopsy: Alternatively, we can click on the Show applications icon (last item in the side menu) and type autopsy into the search bar at the top-middle of the screen and then click on the autopsy icon: Once the autopsy icon is clicked, a new terminal is opened showing the program information along with connection details for opening The Autopsy Forensic Browser. In the following screenshot, we can see that the version number is listed as 2.24 with the path to the Evidence Locker folder as /var/lib/autopsy: To open the Autopsy browser, position the mouse over the link in the terminal, then right-click and choose Open Link, as seen in the following screenshot: Creating a new case To create a new case, follow the given steps: When the Autopsy Forensic Browser opens, investigators are presented with three options. Click on NEW CASE: Enter details for the Case Name, Description, and Investigator Names. For the Case Name, I've entered SP-8-dftt, as it closely matches the image name (8-jpeg-search.dd), which we will be using for this investigation. Once all information is entered, click NEW CASE: Several investigator name fields are available, as there may be instances where several investigators may be working together. The locations of the Case directory and Configuration file are displayed and shown as created.  It's important to take note of the case directory location, as seen in the screenshot: Case directory (/var/lib/autopsy/SP-8-dftt/) created. Click ADD HOST to continue: Enter the details for the Host Name (name of the computer being investigated) and the Description of the host. Optional settings: Time zone: Defaults to local settings, if not specified Timeskew Adjustment: Adds a value in seconds to compensate for time differences Path of Alert Hash Database: Specifies the path of a created database of known bad hashes Path of Ignore Hash Database: Specifies the path of a created database of known good hashes similar to the NIST NSRL: Click on the ADD HOST button to continue. Once the host is added and directories are created, we add the forensic image we want to analyze by clicking the ADD IMAGE button: Click on the ADD IMAGE FILE button to add the image file: To import the image for analysis, the full path must be specified. On my machine, I've saved the image file (8-jpeg-search.dd) to the Desktop folder. As such, the location of the file would be /root/Desktop/ 8-jpeg-search.dd. For the Import Method, we choose Symlink. This way the image file can be imported from its current location (Desktop) to the Evidence Locker without the risks associated with moving or copying the image file. 
If you are presented with the following error message, ensure that the specified image location is correct and that the forward slash (/) is used: Upon clicking Next, the Image File Details are displayed. To verify the integrity of the file, select the radio button for Calculate the hash value for this image, and select the checkbox next to Verify hash after importing? The File System Details section also shows that the image is of a ntfs partition. Click on the ADD button to continue: After clicking the ADD button in the previous screenshot, Autopsy calculates the MD5 hash and links the image into the evidence locker. Press OK to continue: At this point, we're just about ready to analyze the image file. If there are multiple cases listed in the gallery area from any previous investigations you may have worked on, be sure to choose the 8-jpeg-search.dd file and case: Before proceeding, we can click on the IMAGE DETAILS option. This screen gives detail such as the image name, volume ID, file format, file system, and also allows for the extraction of ASCII, Unicode, and unallocated data to enhance and provide faster keyword searches. Click on the back button in the browser to return to the previous menu and continue with the analysis: Before clicking on the ANALYZE button to start our investigation and analysis, we can also verify the integrity of the image by creating an MD5 hash, by clicking on the IMAGE INTEGRITY button: Several other options exist such as FILE ACTIVITY TIMELINES, HASH DATABASES, and so on. We can return to these at any point in the investigation. After clicking on the IMAGE INTEGRITY button, the image name and hash are displayed. Click on the VALIDATE button to validate the MD5 hash: The validation results are displayed in the lower-left corner of the Autopsy browser window: We can see that our validation was successful, with matching MD5 hashes displayed in the results. Click on the CLOSE button to continue. To begin our analysis, we click on the ANALYZE button: Analysis using Autopsy Now that we've created our case, added host information with appropriate directories, and added our acquired image, we get to the analysis stage. After clicking on the ANALYZE button (see the previous screenshot), we're presented with several options in the form of tabs, with which to begin our investigation: Let's look at the details of the image by clicking on the IMAGE DETAILS tab. In the following snippet, we can see the Volume Serial Number and the operating system (Version) listed as Windows XP: Next, we click on the FILE ANALYSIS tab. This mode opens into File Browsing Mode, which allows the examination of directories and files within the image. Directories within the image are listed by default in the main view area: In File Browsing Mode, directories are listed with the Current Directory specified as C:/. For each directory and file, there are fields showing when the item was WRITTEN, ACCESSED, CHANGED, and CREATED, along with its size and META data: WRITTEN: The date and time the file was last written to ACCESSED: The date and time the file was last accessed (only the date is accurate) CHANGED: The date and time the descriptive data of the file was modified CREATED: The data and time the file was created META: Metadata describing the file and information about the file: For integrity purposes, MD5 hashes of all files can be made by clicking on the GENERATE MD5 LIST OF FILES button. 
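If you want to double-check the image hash outside of Autopsy, the same verification can be done from a terminal; this is simply an additional cross-check and not part of the Autopsy workflow described here. The path below assumes the image is still in the location used earlier:

md5sum /root/Desktop/8-jpeg-search.dd

The digest printed by md5sum should match the MD5 value that Autopsy calculated when the image was added and validated through the IMAGE INTEGRITY option; any difference would indicate that the evidence file has been altered.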
Investigators can also make notes about files, times, anomalies, and so on, by clicking on the ADD NOTE button: The left pane contains four main features that we will be using: Directory Seek: Allows for the searching of directories File Name Search: Allows for the searching of files by Perl expressions or filenames ALL DELETED FILES: Searches the image for deleted files EXPAND DIRECTORIES: Expands all directories for easier viewing of contents By clicking on EXPAND DIRECTORIES, all contents are easily viewable and accessible within the left pane and main window. The + next to a directory indicates that it can be further expanded to view subdirectories (++) and their contents: To view deleted files, we click on the ALL DELETED FILES button in the left pane. Deleted files are marked in red and also adhere to the same format of WRITTEN, ACCESSED, CHANGED, and CREATED times. From the following screenshot, we can see that the image contains two deleted files: We can also view more information about this file by clicking on its META entry. By viewing the metadata entries of a file (last column to the right), we can also view the hexadecimal entries for the file, which may give the true file extensions, even if the extension was changed. In the preceding screenshot, the second deleted file (file7.hmm) has a peculiar file extension of .hmm. Click on the META entry (31-128-3) to view the metadata: Under the Attributes section, click on the first cluster labelled 1066 to view header information of the file: We can see that the first entry is .JFIF, which is an abbreviation for JPEG File Interchange Format. This means that the file7.hmm file is an image file but had its extension changed to .hmm. Sorting files Inspecting the metadata of each file may not be practical with large evidence files. For such an instance, the FILE TYPE feature can be used. This feature allows for the examination of existing (allocated), deleted (unallocated), and hidden files. Click on the FILE TYPE tab to continue: Click Sort files into categories by type (leave the default-checked options as they are) and then click OK to begin the sorting process: Once sorting is complete, a results summary is displayed. In the following snippet, we can see that there are five Extension Mismatches: To view the sorted files, we must manually browse to the location of the output folder, as Autopsy 2.4 does not support viewing of sorted files. To reveal this location, click on View Sorted Files in the left pane: The output folder locations will vary depending on the information specified by the user when first creating the case, but can usually be found at /var/lib/autopsy/<case name>/<host name>/output/sorter-vol#/index.html. Once the index.html file has been opened, click on the Extension Mismatch link: The five listed files with mismatched extensions should be further examined by viewing metadata content, with notes added by the investigator. Reopening cases in Autopsy Cases are usually ongoing and can easily be restarted by starting Autopsy and clicking on OPEN CASE: In the CASE GALLERY, be sure to choose the correct case name and, from there, continue your examination: To recap, we looked at forensics using the Autopsy Forensic Browser with The Sleuth Kit. Compared to individual tools, Autopsy has case management features and supports various types of file analysis, searching, and sorting of allocated, unallocated, and hidden files. Autopsy can also perform hashing on a file and directory levels to maintain evidence integrity. 
If you enjoyed reading this article, do check out, 'Digital Forensics with Kali Linux' to take your forensic abilities and investigations to a professional level, catering to all aspects of a digital forensic investigation from hashing to reporting. What is Digital Forensics? IoT Forensics: Security in an always connected world where things talk Working with Forensic Evidence Container Recipes
Discovering network hosts with 'TCP SYN' and 'TCP ACK' ping scans in Nmap[Tutorial]

Savia Lobo
09 Nov 2018
8 min read
Ping scans are used for detecting live hosts in networks. Nmap's default ping scan (-sP) sends TCP SYN, TCP ACK, and ICMP packets to determine if a host is responding, but if a firewall is blocking these requests, it will be treated as offline. Fortunately, Nmap supports a scanning technique named the TCP SYN ping scan that is very handy to probe different ports in an attempt to determine if a host is online or at least has more permissive filtering rules. Similar to the TCP SYN ping scan, the TCP ACK ping scan is used to determine if a host is responding. It can be used to detect hosts that block SYN packets or ICMP echo requests, but it will most likely be blocked by modern firewalls that track connection states because it sends bogus TCP ACK packets associated with non-existing connections. This article is an excerpt taken from the book Nmap: Network Exploration and Security Auditing Cookbook - Second Edition written by Paulino Calderon. In this book, you will be introduced to the most powerful features of Nmap and related tools, common security auditing tasks for local and remote networks, web applications, databases, mail servers and much more. This post will talk about the TCP SYN and TCP ACK ping scans and their related options.

Discovering network hosts with TCP SYN ping scans

How to do it...

Open your terminal and enter the following command:

# nmap -sn -PS <target>

You should see the list of hosts found in the target range using TCP SYN ping scanning:

# nmap -sn -PS 192.168.0.1/24
Nmap scan report for 192.168.0.1
Host is up (0.060s latency).
Nmap scan report for 192.168.0.2
Host is up (0.0059s latency).
Nmap scan report for 192.168.0.3
Host is up (0.063s latency).
Nmap scan report for 192.168.0.5
Host is up (0.062s latency).
Nmap scan report for 192.168.0.7
Host is up (0.063s latency).
Nmap scan report for 192.168.0.22
Host is up (0.039s latency).
Nmap scan report for 192.168.0.59
Host is up (0.00056s latency).
Nmap scan report for 192.168.0.60
Host is up (0.00014s latency).
Nmap done: 256 IP addresses (8 hosts up) scanned in 8.51 seconds

How it works...

The -sn option tells Nmap to skip the port scanning phase and only perform host discovery. The -PS flag tells Nmap to use a TCP SYN ping scan. This type of ping scan works in the following way:

Nmap sends a TCP SYN packet to port 80.
If the port is closed, the host responds with an RST packet.
If the port is open, the host responds with a TCP SYN/ACK packet indicating that a connection can be established. Afterward, an RST packet is sent to reset this connection.

The CIDR /24 in 192.168.0.1/24 is used to indicate that we want to scan all of the 256 IPs in our local network.

There's more...

TCP SYN ping scans can be very effective for determining if hosts are alive on networks. Although Nmap sends several probes by default, this is configurable. Now it is time to learn more about discovering hosts with TCP SYN ping scans.

Privileged versus unprivileged TCP SYN ping scan

Running a TCP SYN ping scan as an unprivileged user who can't send raw packets makes Nmap use the connect() system call to send the TCP SYN packet. In this case, Nmap distinguishes a SYN/ACK packet when the function returns successfully, and an RST packet when it receives an ECONNREFUSED error message.

Firewalls and traffic filtering

A lot of systems are protected by some kind of traffic filtering, so it is important to always try different ping scanning techniques.
In the following example, we will scan a host online that gets marked as offline, but in fact, was just behind some traffic filtering system that did not allow TCP ACK or ICMP requests: # nmap -sn 0xdeadbeefcafe.com Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn Nmap done: 1 IP address (0 hosts up) scanned in 4.68 seconds # nmap -sn -PS 0xdeadbeefcafe.com Nmap scan report for 0xdeadbeefcafe.com (52.20.139.72) Host is up (0.062s latency). rDNS record for 52.20.139.72: ec2-52-20-139-72.compute- 1.amazonaws.com Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds During a TCP SYN ping scan, Nmap uses the SYN/ACK and RST responses to determine if the host is responding. It is important to note that there are firewalls configured to drop RST packets. In this case, the TCP SYN ping scan will fail unless we send the probes to an open port: # nmap -sn -PS80 <target> You can set the port list to be used with -PS (port list or range) as follows: # nmap -sn -PS80,21,53 <target> # nmap -sn -PS1-1000 <target> # nmap -sn -PS80,100-1000 <target> Discovering hosts with TCP ACK ping scans How to do it... Open your terminal and enter the following command: # nmap -sn -PA <target> The result is a list of hosts that responded to the TCP ACK packets sent, therefore, online: # nmap -sn -PA 192.168.0.1/24 Nmap scan report for 192.168.0.1 Host is up (0.060s latency). Nmap scan report for 192.168.0.60 Host is up (0.00014s latency). Nmap done: 256 IP addresses (2 hosts up) scanned in 6.11 seconds How it works... The -sn option tells Nmap to skip the port scan phase and only perform host discovery. And the -PA flag tells Nmap to use a TCP ACK ping scan. A TCP ACK ping scan works in the following way: Nmap sends an empty TCP packet with the ACK flag set to port 80 (the default port, but an alternate port list can be assigned). If the host is offline, it should not respond to this request. Otherwise, it will return an RST packet and will be treated as online. RST packets are sent because the TCP ACK packet sent is not associated with an existing valid connection. There's more... TCP ACK ping scans use port 80 by default, but this behavior can be configured. This scanning technique also requires privileges to create raw packets. Now we will learn more about the scan limitations and configuration options. Privileged versus unprivileged TCP ACK ping scans TCP ACK ping scans need to run as a privileged user. Otherwise a connect() system call is used to send an empty TCP SYN packet. Hence, TCP ACK ping scans will not use the TCP ACK technique, previously discussed, as an unprivileged user, and it will perform a TCP SYN ping scan instead. Selecting ports in TCP ACK ping scans In addition, you can select the ports to be probed using this technique, by listing them after the -PA flag: # nmap -sn -PA21,22,80 <target> # nmap -sn -PA80-150 <target> # nmap -sn -PA22,1000-65535 <target> Discovering hosts with UDP ping scans Ping scans are used to determine if a host is responding and can be considered online. UDP ping scans have the advantage of being capable of detecting systems behind firewalls with strict TCP filtering but that left UDP exposed. This next recipe describes how to perform a UDP ping scan with Nmap and its related options. How to do it... 
Open your terminal and enter the following command:

# nmap -sn -PU <target>

Nmap will determine if the target is reachable using a UDP ping scan:

# nmap -sn -PU scanme.nmap.org
Nmap scan report for scanme.nmap.org (45.33.32.156)
Host is up (0.13s latency).
Other addresses for scanme.nmap.org (not scanned): 2600:3c01::f03c:91ff:fe18:bb2f
Nmap done: 1 IP address (1 host up) scanned in 7.92 seconds

How it works...

The -sn option tells Nmap to skip the port scan phase but perform host discovery. In combination with the -PU flag, Nmap uses UDP ping scanning. The technique used by a UDP ping scan works as follows:

Nmap sends an empty UDP packet to port 40125.
If the host is online, it should return an ICMP port unreachable error.
If the host is offline, various ICMP error messages could be returned.

There's more...

Services that do not respond to empty UDP packets will generate false positives when probed. These services will simply ignore the UDP packets, and the host will be incorrectly marked as offline. Therefore, it is important that we select ports that are closed for better results.

Selecting ports in UDP ping scans

To specify the ports to be probed, add them after the -PU flag, as follows:

# nmap -sn -PU1337,11111 scanme.nmap.org
# nmap -sn -PU1337 scanme.nmap.org
# nmap -sn -PU1337-1339 scanme.nmap.org

Thus, in this post, we saw how network hosts can be discovered using TCP SYN and TCP ACK ping scans. If you've enjoyed reading this post and want to learn how to discover hosts using other ping scans such as ICMP, SCTP INIT, IP protocol, and others, head over to our book, Nmap: Network Exploration and Security Auditing Cookbook - Second Edition.

Docker Multi-Host Networking Experiments on Amazon AWS
Hosting the service in IIS using the TCP protocol
FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack
The Design Documentation

Packt
23 Jan 2014
19 min read
(For more resources related to this topic, see here.) The design documentation provides written documentation of the design factors and the choices the architect has made in the design to satisfy the business and technical requirements. The design documentation also aids in the implementation of the design. In many cases where the design architect is not responsible for the implementation, the design documents ensure the successful implementation of the design by the implementation engineer. Once you have created the documentation for a few designs, you will be able to develop standard processes and templates to aid in the creation of design documentation. Documentation can vary from project to project. Many consulting companies and resellers have standard documentation templates that they use when designing solutions. A properly documented design should include the following information: Architecture design Implementation plan Installation guide Validation test plan Operational procedures This information can be included in a single document or separated into different documents. VMware provides Service Delivery Kits to VMware partners. These kits can be found on the VMware Partner University portal at http://www.vmware.com/go/partneruniversity, which provides documentation templates that can be used as a foundation for creating design documents. If you do not have access to these templates, example outlines are provided in this article to assist you in developing your own design documentation templates. The final steps of the design process include gaining customer approval to begin implementation of the design and the implementation of the design. Creating the architecture design document The architecture design document is a technical document describing the components and specifications required to support the solution and ensure that the specific business and technical requirements of the design are satisfied. An excellent example of an architecture design document is the Cloud Infrastructure Architecture Case Study White Paper article that can be found at http://www.vmware.com/files/pdf/techpaper/cloud-infrastructure-achitecture-case-study.pdf. The architect creates the architecture design document to document the design factors and the specific choices that have been made to satisfy those factors. The document serves as a way for the architect to show his work when making design decisions. The architecture design document includes the conceptual, logical, and physical designs. How to do it... The architecture design document should include the following information: Purpose and overview Executive summary Design methodology Conceptual design Logical management, storage, compute, and network design Physical management, storage, compute, and network design How it works... The Purpose and Overview section of the architecture design includes the Executive Summary section. The Executive Summary section provides a high-level overview of the design and the goals the design will accomplish, and defines the purpose and scope of the architecture design document. The following is an example executive summary in the Cloud Infrastructure Architecture Case Study White Paper : Executive Summary: This architecture design was developed to support a virtualization project to consolidate 100 existing physical servers on to a VMware vSphere 5.x virtual infrastructure. 
The primary goals this design will accomplish are to increase operational efficiency and to provide high availability of customer-facing applications. This document details the recommended implementation of a VMware virtualization architecture based on specific business requirements and VMware recommended practices. The document provides both logical and physical design considerations for all related infrastructure components including servers, storage, networking, management, and virtual machines. The scope of this document is specific to the design of the virtual infrastructure and the supporting components.
The purpose and overview section should also include details of the design methodology the architect has used in creating the architecture design. This should include the processes followed to determine the business and technical requirements along with definitions of the infrastructure qualities that influenced the design decisions.
Design factors, requirements, constraints, and assumptions are documented as part of the conceptual design. To document the design factors, use a table to organize them and associate them with an ID that can be easily referenced. The following table illustrates an example of how to document the design requirements:
ID: Requirement
R001: Consolidate the existing 100 physical application servers down to five servers
R002: Provide capacity to support growth for 25 additional application servers over the next five years
R003: Server hardware maintenance should not affect application uptime
R004: Provide N+2 redundancy to support a hardware failure during normal and maintenance operations
The conceptual design should also include tables documenting any constraints and assumptions. A high-level diagram of the conceptual design can also be included.
Details of the logical design are documented in the architecture design document. The logical design of management, storage, network, and compute resources should be included. When documenting the logical design, any recommended practices that were followed should be included. Also include references to the requirements, constraints, and assumptions that influenced the design decisions. When documenting the logical design, show your work to support your design decisions. Include any formulas used for resource calculations and provide detailed explanations of why design decisions were made. An example table outlining the logical design of compute resource requirements is as follows:
Parameter: Specification
Current CPU resources required: 100 GHz
*CPU growth: 25 GHz
CPU required (75 percent utilization): 157 GHz
Current memory resources required: 525 GB
*Memory growth: 131 GB
Memory required (75 percent utilization): 821 GB
Memory required (25 percent TPS savings): 616 GB
*CPU and memory growth of 25 additional application servers (R002)
Similar tables will be created to document the logical design for storage, network, and management resources.
The physical design documents the details of the physical hardware chosen along with the configurations of both the physical and virtual hardware. Details of vendors and hardware models chosen and the reasons for decisions made should be included as part of the physical design. The configuration of the physical hardware is documented along with the details of why specific configuration options were chosen. The physical design should also include diagrams that document the configuration of physical resources, such as physical network connectivity and storage layout.
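The compute sizing example shown in the logical design table above can be reproduced with a short calculation. The following Python sketch is an illustration only: the 25 percent growth factor comes from R002, but the way the 75 percent utilization line is interpreted here (as 25 percent headroom on top of projected demand) is an assumption chosen so that the output matches the example figures of 157 GHz, 821 GB, and 616 GB. Substitute whatever formulas your own design actually uses.
import math

def required_capacity(current, growth_pct=0.25, headroom_pct=0.25):
    # Projected demand after adding the growth called for by R002,
    # plus headroom so the hosts are not driven to full utilization.
    projected = current * (1 + growth_pct)
    return math.ceil(projected * (1 + headroom_pct))

cpu_ghz = required_capacity(100)              # 157 GHz
memory_gb = required_capacity(525)            # 821 GB
memory_tps_gb = math.ceil(memory_gb * 0.75)   # 616 GB after 25 percent TPS savings
print(cpu_ghz, memory_gb, memory_tps_gb)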
A sample outline of the architecture design document is as follows:
Cover page: It includes the customer and project names
Document version log: It contains the log of authors and changes made to the document
Document contacts: It includes the subject matter experts involved in the creation of the design
Table of contents: It is the index of the document sections for quick reference
List of tables: It is the index of tables included in the document for quick reference
List of figures: It is the index of figures included in the document for quick reference
Purpose and overview: This section consists of an executive summary to provide an overview of the design and the design methodology followed in creating the design
Conceptual design: It is the documentation of the design factors: requirements, constraints, and assumptions
Logical design: It has the details of the logical management, storage, network, and compute design
Physical design: It contains the details of the selected hardware and the configuration of the physical and virtual hardware
Writing an implementation plan
The implementation plan documents the requirements necessary to complete the implementation of the design. The implementation plan defines the project roles and defines what is expected of the customer and what they can expect during the implementation of the design. This document is sometimes referred to as the statement of work. It defines the key points of contact, the requirements that must be satisfied to start the implementation, any project documentation deliverables, and how changes to the design and implementation will be handled.
How to do it...
The implementation plan should include the following information:
Purpose statement
Project contacts
Implementation requirements
Overview of implementation steps
Definition of project documentation deliverables
Implementation of change management
How it works...
The purpose statement defines the purpose and scope of the document. The purpose statement of the implementation plan should define what is included in the document and provide a brief overview of the goals of the project. The purpose statement is simply an introduction so that someone reading the document can gain a quick understanding of what the document contains. The following is an example purpose statement:
This document serves as the implementation plan and defines the scope of the virtualization project. This document identifies points of contact for the project, lists implementation requirements, provides a brief description of each of the document deliverables, and provides an overview of the implementation process for the data-center virtualization project. The scope of this document is specific to the implementation of the virtual data center and the supporting components as defined in the Architecture Design.
Key project contacts, their roles, and their contact information should be included as part of the implementation plan document. These contacts include customer stakeholders, project managers, project architects, and implementation engineers.
The following is a sample table that can be used to document project contacts for the implementation plan: Role Name Contact information Customer project sponsor     Customer technical resource     Project manager     Design architect     Implementation engineer     QA engineer     Support contacts for hardware and software used in the implementation plan may also be included in the table, for example, contact numbers for VMware support or other vendor support. Implementation requirements contain the implementation dependencies to include the access and facility requirements. Any hardware, software, and licensing that must be available to implement the design is also documented here. Access requirements include the following: Physical access to the site. Credentials necessary for access to resources. These include active directory credentials and VPN credentials (if remote access is required). Facility requirements include the following: Power and cooling to support the equipment that will be deployed as part of the design Rack space requirements Hardware, software, and licensing requirements include the following: vSphere licensing Windows or other operating system licensing Other third-party application licensing Software (ISO, physical media, and so on) Physical hardware (hosts, array, network switches, cables, and so on) A high-level overview of the steps required to complete the implementation is also documented. The details of each step are not a part of this document; only the steps that need to be performed will be included. For example: Procurement of hardware, software, and licensing. Scheduling of engineering resources. Verification of access and facility requirements. Performance of an inventory check for the required hardware, software, and licensing. Installation and configuration of storage array. Rack, cable, and burn-in of physical server hardware. Installation of ESXi on physical servers. Installation of vCenter Server. Configuration of ESXi and vCenter. Testing and verification of implementation plan. Migration of physical workloads to virtual machines. Operational verification of the implementation plan. The implementation overview may also include an implementation timeline documenting the time required to complete each of the steps. Project documentation deliverables are defined as part of the implementation plan. Any documentation that will be delivered to the customer once the implementation has been completed should be detailed here. Details include the name of the document and a brief description of the purpose of the document. The following table provides example descriptions of the project documentation deliverables: Document Description Architecture design This is a technical document describing the vSphere components and specifications required to achieve a solution that addresses the specific business and technical requirements of the design. Implementation plan This identifies implementation roles and requirements. It provides a high-level map of the implementation and deliverables detailed in the design. It documents change management procedures. Installation guide This document provides detailed, step-by-step instructions on how to install and configure the products specified in the architecture design document. Validation test plan This document provides an overview of the procedures to be executed post installation to verify whether or not the infrastructure is installed correctly. 
It can also be used at any point subsequent to the installation to verify whether or not the infrastructure continues to function correctly. Operational procedures This document provides detailed, step-by-step instructions on how to perform common operational tasks after the design is implemented. How changes are made to the design, specifically changes made to the design factors, must be well documented. Even a simple change to a requirement or an assumption that cannot be verified can have a tremendous effect on the design and implementation. The process for submitting a change, researching the impact of the change, and approving the change should be documented in detail. The following is an example outline for an implementation plan: Cover page: It includes the customer and project names Document version log: It contains the log of authors and changes made to the document Document contacts: It includes the subject matter experts involved in the creation of the design Table of contents: It is the index of document sections for quick reference List of tables: It is the index of tables included in the document for quick reference List of figures: It is the index of figures included in the document for quick reference Purpose statement: It defines the purpose of the document Project contacts: It is the documentation of key project points of contact Implementation requirements: It provides the access, facilities, hardware, software, and licensing required to complete the implementation Implementation overview: It is the overview of the steps required to complete the implementation Project deliverables: It consists of the documents that will be provided as deliverables once implementation has been completed Developing an installation guide The installation guide provides step-by-step instructions for the implementation of the architecture design. This guide should include detailed information about how to implement and configure all the resources associated with the virtual datacenter project. In many projects, the person creating the design is not the person responsible for implementing the design. The installation guide outlines the steps necessary to implement the physical design outlined in the architecture design document. The installation guide should provide details about the installation of all components, including the storage and network configurations required to support the design. In a complex design, multiple installation guides may be created to document the installation of the various components required to support the design. For example, separate installation guides may be created for the storage, network, and vSphere installation and configuration. How to do it... The installation guide should include the following information: Purpose statement Assumption statement Step-by-step instructions to implement the design How it works... The purpose statement simply states the purpose of the document. The assumption statement describes any assumptions the document's author has made. Commonly, an assumption statement simply states that the document has been written, assuming that the reader is familiar with virtualization concepts and the architecture design. The following is an example of a basic purpose and assumption statement that can be used for an installation guide: Purpose: This document provides a guide for installing and configuring the virtual infrastructure design defined in the Architecture Design. 
Assumptions: This guide is written for an implementation engineer or administrator who is familiar with vSphere concepts and terminologies. The guide is not intended for administrators who have no prior knowledge of vSphere concepts and terminology. The installation guide should include details on implementing all areas of the design. It should include configuration of the storage array, physical servers, physical network components, and vSphere components. The following are just a few examples of installation tasks to include instructions for: Storage array configurations Physical network configurations Physical host configurations ESXi installation vCenter Server installation and configuration Virtual network configuration Datastore configuration High availability, distributed resource scheduler, storage DRS, and other vSphere components installation and configuration The installation guide should provide as much detail as possible. Along with the step-by-step procedures, screenshots can be used to provide installation guidance. The following screenshot is an example taken from an installation guide that details enabling and configuring the Software iSCSI adapter: The following is an example outline for an installation guide: Cover page: It includes the customer and project names Document version log: It contains the log of authors and changes made to the document Document contacts: It includes the subject matter experts involved in the creation of the design Table of contents: It is the index of document sections for quick reference List of tables: It is the index of tables included in the document for quick reference List of figures: It is the index of figures included in the document for quick reference Purpose statement: It defines the purpose of the document Assumption statement: It defines any assumptions made in creating the document Installation guide: It provides the step-by-step installation instructions to be followed when implementing the design Creating a validation test plan The validation test plan documents how the implementation will be verified. It documents the criteria that must be met to determine the success of the implementation and the test procedures that should be followed when validating the environment. The criteria and procedures defined in the validation test plan determine whether or not the design requirements have been successfully met. How to do it... The validation test plan should include the following information: Purpose statement Assumption statement Success criteria Test procedures How it works... The purpose statement defines the purpose of the validation test plan and the assumption statement documents any assumptions the author of the plan has made in developing the test plan. Typically, the assumptions are that the testing and validation will be performed by someone who is familiar with the concepts and the design. The following is an example of a purpose and assumption statement for a validation test plan: Purpose: This document contains testing procedures to verify that the implemented configurations specified in the Architecture Design document successfully addresses the customer requirements. Assumptions: This document assumes that the person performing these tests has a basic understanding of VMware vSphere and is familiar with the accompanying design documentation. This document is not intended for administrators or testers who have no prior knowledge of vSphere concepts and terminology. 
The success criteria determines whether or not the implemented design is operating as expected. More importantly, these criteria determine whether or not the design requirements have been met. Success is measured based on whether or not the criteria satisfies the design requirements. The following table shows some examples of success criteria defined in the validation test plan: Description Measurement Members of the active directory group vSphere administrators are able to access vCenter as administrators Yes/No Access is denied to users outside the vSphere administrators active directory group Yes/No Access to a host using the vSphere Client is permitted when lockdown mode is disabled Yes/No Access to a host using the vSphere Client is denied when lockdown mode is enabled Yes/No Cluster resource utilization is less than 75 percent. Yes/No If the success criteria are not met, the design does not satisfy the design factors. This can be due to a misconfiguration or error in the design. Troubleshooting will need to be done to identify the issue or modifications to the design may need to be made. Test procedures are performed to determine whether or not the success criteria have been met. Test procedures should include testing of usability, performance, and recoverability. Test procedures should include the test description, the tasks to perform the test, and the expected results of the test. The following table provides some examples of usability testing procedures: Test description Tasks to perform test Expected result vCenter administrator access Use the vSphere Web Client to access the vCenter Server. Log in as a user who is a member of the vSphere administrators AD group. Administrator access to the inventory of the vCenter Server vCenter access: No permissions Use the vSphere Web Client to access the vCenter Server. Log in as a user who is not a member of the vSphere administrators AD group. Access is denied Host access: lockdown mode disabled Disable lockdown mode through the DCUI. Use the vSphere Client to access the host and log in as root. Direct access to the host using the vSphere Client is successful Host access: lockdown mode enabled Re-enable lockdown mode through the DCUI. Use the vSphere Client to access the host and log in as root. Direct access to the host using the vSphere Client is denied The following table provides some examples of reliability testing procedures: Test description Tasks to perform test Expected result Host storage path failure Disconnect a vmnic providing IP storage connectivity from the host The disconnected path fails, but I/O continues to be processed on the surviving paths. A network connectivity alarm should be triggered and an e-mail should be sent to the configured e-mail address. Host storage path restore Reconnect the vmnic providing IP storage connectivity The failed path should become active and begin processing the I/O. Network connectivity alarms should clear. Array storage path failure Disconnect one network connection from the active SP The disconnected paths fail on all hosts, but I/O continues to be processed on the surviving paths. Management network redundancy Disconnect the active management network vmnic The stand-by adapter becomes active. Management access to the host is not interrupted. A loss-of-network redundancy alarm should be triggered and an e-mail should be sent to the configured e-mail address. These are just a few examples of test procedures. 
The actual test procedures will depend on the requirements defined in the conceptual design. The following is an example outline of a validation test plan: Cover page: It includes the customer and project names Document version log: It contains the log of authors and changes made to the document Document contacts: It includes the subject matter experts involved in the creation of the design Table of contents: It is the index of document sections for quick reference List of tables: It is the index of tables included in the document for quick reference List of figures: It is the index of figures included in the document for quick reference Purpose statement: It defines the purpose of the document Assumption statement: It defines any assumptions made in creating the document Success criteria: It is a list of criteria that must be met to validate the successful implementation of the design Test Procedures: It is a list of test procedures to follow, including the steps to follow and the expected results
article-image-is-web-development-dying
Richard Gall
23 May 2018
7 min read
Save for later

Is web development dying?

It's not hard to find people asking whether web development is dying. A quick search throws up questions on Quora, Reddit, and other forums. "Is web development a dying profession or does it just smell funny?" asks one Reddit user. The usual suspects in the world of content (Forbes et al) have responded with their own takes and think pieces on whether web development is dead. And why wouldn't they? I, for one, would never miss out on an opportunity to write something with an outlandish and provocative headline for clicks. So, is web development dying or simply very unwell? Why do people think web development is dying? The question might seem a bit overwrought, but there are good reasons for people to ask the question. One reason is that getting a website has never been easier or cheaper. Think about it: if you want to create a content site, it doesn't take much to set one up with WordPress. You barely need to be technically literate, let alone a developer. Similarly, if you want an eCommerce store there are plenty of off-the-shelf solutions that allow people to start running an online business with very little work at all. Even if you do want a custom solution, you can now do that pretty cheaply. On the Treehouse forums, one user comments that thanks to sites like SquareSpace, businesses can now purchase a website for less than £100 (about $135). The commenter remarks that whereas he'd typically charge around £3000 for a complete website build, potential clients are coming back puzzled as to why he would think they'd spend so much when they could get the same result for a fraction of the price. From a professional perspective, this sort of anecdotal evidence indicates that it's becoming more and more difficult to be successful in web development. For all the talk around 'learning to code' and the digital economy, maybe building websites isn't the best area to get into. Web development is getting easier When people say web development is dying, they might actually be saying that there isn't as much money in it any more. If freelancers are struggling to charge the rates that they used to, that's because there is someone out there who is going to do it for a lot less money. The reason for this isn't that there's a new generation of web developers able to subsist on a paltry sum of money. It's actually getting a lot easier. Aside from solutions like WordPress and Shopify, the task of building websites from scratch (sort of scratch) is now easier than it has ever been. Are templates killing web development? Templates make everything easy for web developers and designers. Why would you want to do much more than drag and drop templates if you could? If the result looks good and does the job, then why spend time doing more? The more you do yourself, the more you're likely to break things. And the more you break things the more you've got to fix. Of course, templates are lowering the barrier to entry into web development and design. And while we shouldn't be precious about new web developers entering the industry, it is understandable that many experienced web developers are anxious about what the future might hold. From this perspective, templates aren't killing web development, but they are changing what the profession looks like. And without wishing to sound euphemistic, this is both a challenge and an opportunity for everyone in web development. Whether you're experienced or new to the industry, these changes mean people are going to have to adapt. 
Web development isn't dying, it's fragmenting The way web developers are going to have to adapt is by choosing what path they want to take in their career. Web development as we've always known it is, perhaps well and truly dead. Instead, it's fragmenting into specialized areas; design on the one hand, and full-stack on the other. This means your skill set needs to be unique. In a world where building websites takes very little skill or technical knowledge, specific expertise is vital. This is something journalist Andrew Pierno noted in a blog post on Medium. Pierno writes:  ...we are in a scenario where the web developer no longer has the skill set to build that interesting differentiator anymore, particularly if the main value prop is around A.I, computer vision, machine learning, AR, VR, blockchain, etc. Building websites is no longer remarkable - as we've seen, people that can do it are ubiquitous. But building a native application; that's not quite so easy. Building a mobile app that uses computer vision to compare you to Renaissance paintings - that's even harder to do. These are the sorts of things that are going to be valuable - and these are the sorts of things that web developers are going to need to learn how to do. Full-stack development and the expansion of the developer skill set In his piece, Pierno argues that the scope of the web developers role is shrinking. However, I don't think that's quite right. Yes, it might be fragmenting, but the scope of, say, full-stack development, is huge. In fact, full-stack developers need to know a huge range of technologies and tools. If they're to differentiate themselves in the job market, as Pierno suggests they should, they need to know machine learning, they need to know mobile, databases, and maybe even Blockchain. From this perspective, it's not hard to see how the 'web' part of web development might be dying. To some extent, as the web becomes more ubiquitous and less of a rarefied 'space' in people's lives, the more we have to get into the detail of how we utilize the technologies around it. Web development's decline is design's gain If web development as a discipline is dying, that's only going to make design more important. If, as we saw earlier, building websites is going to become a free for all for just about anyone with an internet connection and enough confidence, standards and quality might start to slip. That means the value of someone who understands good design will be higher than ever. As a web developer you might disappear into the ether of everyone else out there. But if you market yourself as a designer, someone who understands the intricacies of UI and UX implicitly, you immediately start to look a little different. Think of it like a sandwich shop - anyone can start making sandwiches. But to make a great sandwich shop, the type that wins awards and the type that people want to Instagram, requires extra attention to detail. It demands more skill and more culinary awareness. Maybe web development is dying, but maybe it just needs to change Clearly, what we call web development is very different in 2018 than what it was 5 years ago. There are a huge number of reasons for this, but perhaps the most important is that it doesn't really make sense to talk about 'the web' any more. Because 'the web' is now an outdated concept, perhaps web development needs to die. Maybe we're holding on to something which is only going to play into the hands of poor design and poor quality software. 
It might even damage the careers of talented engineers and designers. You could make a pretty good comparison between 'the web' and 'big data'. Even reading those words feels oddly outdated today, but they're still at the center of the tech landscape. Big data, for example, is everywhere - it's hard to imagine our lives outside of it, but it doesn't make sense to talk about it in the abstract. Instead, what's interesting is how it's applied, how engineers make data accessible, usable and secure. The same is true of the web. It's not dead, but it has certainly assumed a slightly different form. And web development might well be dying, but the world will always need developers and designers. It's simply time to adapt.
Read next
Why is everyone talking about JavaScript fatigue?
Is novelty ruining web development?

article-image-drones-everything-you-wanted-know
Aarthi Kumaraswamy
11 Apr 2018
10 min read
Save for later

Drones: Everything you ever wanted to know!

When you were a kid, did you have fun with paper planes? They were so much fun. So, what is a gliding drone? Well, before answering this, let me be clear that there are other types of drones, too. We will get to know all the common types of drones soon, but before doing that, let's find out what a drone is in the first place.
Drones are commonly known as Unmanned Aerial Vehicles (UAV). A UAV is a flying thing without a human pilot on it. Here, by thing we mean the aircraft. For drones, there is the Unmanned Aircraft System (UAS), which allows you to communicate with the physical drone and the controller on the ground. Drones are usually controlled by a human pilot, but they can also be autonomously controlled by the system integrated on the drone itself. Put simply, the UAS is the system that handles the communication between the drone and the controller, driven by the commands a person issues from the ground control station.
Drones are basically used for doing things where humans cannot go or for carrying out missions that are impossible for humans. Drones have applications across a wide spectrum of industries, from military, scientific research, agriculture, surveillance, product delivery, aerial photography, and recreation, to traffic control. And of course, like any technology or tool, they can do great harm when used for malicious purposes, such as terrorist attacks and smuggling drugs.
Types of drones
Classifying drones based on their application
Drones can be categorized into the following six types based on their mission:
Combat: Combat drones are used for attacking in high-risk missions. They are also known as Unmanned Combat Aerial Vehicles (UCAV). They carry missiles for the missions. Combat drones are much like planes.
Logistics: Logistics drones are used for delivering goods or cargo. There are a number of famous companies, such as Amazon and Domino's, which deliver goods and pizzas via drones. It is easier to ship cargo with drones when there is a lot of traffic on the streets, or the route is not easy to drive.
Civil: Civil drones are for general usage, such as monitoring agricultural fields, data collection, and aerial photography.
Reconnaissance: These kinds of drones are also known as mission-control drones. A drone is assigned a task and carries it out automatically, usually returning to the base by itself, so they are used to get information from the enemy on the battlefield. These kinds of drones are supposed to be small and easy to hide.
Target and decoy: These kinds of drones are like combat drones, but the difference is that the combat drone provides attack capabilities for the high-risk mission, while the target and decoy drones provide the ground and aerial gunnery with a target that simulates a missile or enemy aircraft.
Research and development: These types of drones are used for collecting data from the air. For example, some drones are used for collecting weather data or for providing internet access.
Classifying drones based on wing types
We can also classify drones by their wing types. There are three types of drones depending on their wings or flying mechanism:
Fixed wing: A fixed wing drone has a rigid wing. They look like airplanes. These types of drones have a very good battery life, as they use only one motor (or fewer than a multirotor). They can fly at a high altitude. They can carry more weight because their wings provide lift. There are also some disadvantages of fixed wing drones. They are expensive and require a good knowledge of aerodynamics. They break a lot and training is required to fly them. The launching of the drone is hard and the landing of these types of drones is difficult. The most important thing you should know about fixed wing drones is that they can only move forward. To change direction to the left or right, we need to create air pressure from the wing. We will build one fixed wing drone in this book. I hope you would like to fly one.
Single rotor: Single rotor drones are simply like helicopters. They are strong and the propeller is designed in a way that it helps to both hover and change directions. Remember, the single rotor drones can only hover vertically in the air. They are good with battery power as they consume less power than a multirotor. The payload capacity of a single rotor is good. However, they are difficult to fly. Their wing or the propeller can be dangerous if it loosens.
Multirotor: Multirotor drones are the most common among the drones. They are classified depending on the number of wings they have, such as tricopter (three propellers or rotors), quadcopter (four rotors), hexacopter (six rotors), and octocopter (eight rotors). The most common multirotor is the quadcopter. The multirotors are easy to control. They are good with payload delivery. They can take off and land vertically, almost anywhere. The flight is more stable than the single rotor and the fixed wing. One of the disadvantages of the multirotor is power consumption. As they have a number of motors, they consume a lot of power.
Classifying drones based on body structure
We can also classify multirotor drones by their body structure. They can be known by the number of propellers used on them. Some drones have three propellers. They are called tricopters. If there are four propellers or rotors, they are called quadcopters. There are hexacopters and octocopters with six and eight propellers, respectively. The gliding drones or fixed wings do not have a structure like copters. They look like an airplane. The shapes and sizes of the drones vary from purpose to purpose. If you need a spy drone, you will not make a big octocopter, right? If you need to deliver cargo to your friend's house, you can use a multirotor or a single rotor.
The Ready to Fly (RTF) drones do not require any assembly of the parts after buying. You can fly them just after buying them. RTF drones are great for beginners. They require no complex setup or programming knowledge.
The Bind N Fly (BNF) drones do not come with a transmitter. This means, if you have bought a transmitter for your other drone, you can bind it with this type of drone and fly. The problem is that an old model of transmitter might not work with them, and the BNF drones are for experienced flyers who have already flown drones safely and have a transmitter to test with other drones.
The Almost Ready to Fly (ARF) drones come with everything needed to fly, but a few parts might be missing that might keep it from flying properly. Just kidding! They come with all the parts, but you have to assemble them together before flying. You might lose one or two things while assembling. So be careful if you buy ARF drones. I always lose screws or spare small parts of the drones while I assemble. From the name of these types of drones, you can imagine why they are called by this name. The ARF drones require a lot of patience to assemble and bind to fly. Just be calm while assembling. Don't throw away the user manuals like me. You might end up with either spare screws or a lack of screws or parts.
Key components for building a drone
To build a drone, you will need a drone frame, motors, a radio transmitter and receiver, a battery, battery adapters/chargers, connectors, and modules to make the drone smarter.
Drone frames
Basically, the drone frame is the most important component to build a drone. It helps to mount the motors, battery, and other parts on it. If you want to build a copter or a glider, you first need to decide what frame you will buy or build. For example, if you choose a tricopter, your drone will be smaller, the number of motors will be three, the number of propellers will be three, the number of ESCs will be three, and so on. If you choose a quadcopter it will require four of each of the earlier specifications. For the gliding drone, the number of parts will vary. So, choosing a frame is important as the target of making the drone depends on the body of the drone. And a drone's body skeleton is the frame. In this book, we will build a quadcopter, as it is a medium size drone and we can implement all the things we want on it.
If you want to buy the drone frame, there are lots of online shops that sell ready-made drone frames. Make sure you read the specification before buying the frames. While buying frames, always double check the motor mounts and the other screw mountings. If you cannot mount your motors firmly, you will lose the stability of the drone in the air. We will discuss the aerodynamics of drone flight soon. Pre-made frames do not need any calculation to assemble, and you will be given a manual which is really easy to follow.
You should also choose a material which is light but strong. My personal choice is carbon fiber. But if you want to save some money, you can buy strong plastic frames. You can also buy acrylic frames. When you buy the frame, you will get all the parts of the frame unassembled, as mentioned earlier, and you will have to put them together yourself.
If you want to build your own frame, you will require a lot of calculations and knowledge about the materials. You need to focus on how the assembling will be done, if you build a frame by yourself. The thrust of the motors after mounting on the frame is really important. It will tell you whether your drone will float in the air, fall down, or become imbalanced. To calculate the thrust of the motors, you can follow the relationship described next. If P is the payload capacity of your drone (how much your drone can lift; I'll explain how you can find it), M is the number of motors, W is the weight of the drone itself, and H is the hover throttle % (will be explained later).
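As a rough illustration of how these four quantities are commonly combined, here is a small Python sketch. It uses a standard hobbyist sizing rule (total thrust equal to the all-up weight scaled by 100/H, shared across the M motors); this is an assumption for illustration and not necessarily the exact equation used in the book, and the example numbers are made up.
def thrust_per_motor(W, P, M, H):
    # Total thrust needed so that the drone hovers at H percent throttle,
    # then divided across the M motors. W and P are in grams.
    total_thrust = (W + P) * 100.0 / H
    return total_thrust / M

def payload_capacity(T, M, H, W):
    # The same relationship rearranged: how much the drone can lift
    # if each motor can produce T grams of thrust.
    return (T * M * H / 100.0) - W

# Example: a 1,200 g quadcopter carrying a 400 g payload, hovering at 50% throttle
print(thrust_per_motor(1200, 400, 4, 50))   # 800.0 g of thrust per motor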
Then, the thrust T that the motors must produce can be worked out from these four quantities, and the same relationship can be rearranged to find the drone's payload capacity, as in the sketch above.
Note: Remember to keep the frame balanced so that the center of gravity stays close to the center of the drone.
Check out the book, Building Smart Drones with ESP8266 and Arduino by Syed Omar Faruk Towaha, to read about the other components that go into making a drone and then build some fun drone projects, from follow-me drones, to drones that take selfies, to those that race and glide.
Check out other posts on IoT:
How IoT is going to change tech teams
AWS Sydney Summit 2018 is all about IoT
25 Datasets for Deep Learning in IoT

article-image-squid-proxy-server-fine-tuning-achieve-better-performance
Packt
25 Apr 2011
12 min read
Save for later

Squid Proxy Server: Fine Tuning to Achieve Better Performance

Whether you only run one site, or are in charge of a whole network, Squid is an invaluable tool which improves performance immeasurably. Caching and performance optimization usually requires a lot of work on the developer's part, but Squid does all that for you. In this article we will learn to fine-tune our cache to achieve a better HIT ratio to save bandwidth and reduce the average page load time. In this article by Kulbir Saini, author of Squid Proxy Server 3 Beginners Guide, we will take a look at the following:
Cache peers or neighbors
Caching the web documents in the main memory and hard disk
Tuning Squid to enhance bandwidth savings and reduce latency
Cache peers or neighbors
Cache peers or neighbors are the other proxy servers with which our Squid proxy server can:
Share its cache, to reduce bandwidth usage and access time
Use it as a parent or sibling proxy server to satisfy its clients' requests
Use it as a parent or sibling proxy server
We normally deploy more than one proxy server in the same network to share the load of a single server for better performance. The proxy servers can use each other's cache to retrieve the cached web documents locally to improve performance. Let's have a brief look at the directives provided by Squid for communication among different cache peers.
Declaring cache peers
The directive cache_peer is used to tell Squid about proxy servers in our neighborhood. Let's have a quick look at the syntax for this directive:
cache_peer HOSTNAME_OR_IP_ADDRESS TYPE PROXY_PORT ICP_PORT [OPTIONS]
In this code, HOSTNAME_OR_IP_ADDRESS is the hostname or IP address of the target proxy server or cache peer. TYPE specifies the type of the proxy server, which in turn, determines how that proxy server will be used by our proxy server. The other proxy servers can be used as a parent, sibling, or a member of a multicast group.
Time for action – adding a cache peer
Let's add a proxy server (parent.example.com) that will act as a parent proxy to our proxy server:
cache_peer parent.example.com parent 3128 3130 default proxy-only
3130 is the standard ICP port. If the other proxy server is not using the standard ICP port, we should change the code accordingly. This code will direct Squid to use parent.example.com as a proxy server to satisfy client requests in case it's not able to do so itself. The option default specifies that this cache peer should be used as a last resort in the scenario where other peers can't be contacted. The option proxy-only specifies that the content fetched using this peer should not be cached locally. This is helpful when we don't want to replicate cached web documents, especially when the two peers are connected with a high bandwidth backbone.
What just happened?
We added parent.example.com as a cache peer or parent proxy to our Squid proxy server. We also used the option proxy-only, which means the requests fetched using this cache peer will not be cached on our proxy server. There are several other options with which you can add cache peers, for various purposes, such as a hierarchy.
Quickly restricting access to domains using peers
If we have added a few proxy servers as cache peers to our Squid server, we may have the desire to have a little bit of control over the requests being forwarded to the peers.
The directive cache_peer_domain is a quick way to achieve the desired control. The syntax of this directive is quite simple:
cache_peer_domain CACHE_PEER_HOSTNAME [!]DOMAIN1 [[!]DOMAIN2 ...]
In the code, CACHE_PEER_HOSTNAME is the hostname or IP address of the cache peer being used when declaring it as a cache peer, using the cache_peer directive. We can specify any number of domains which may be fetched through this cache peer. Adding a bang (!) as a prefix to the domain name will prevent the use of this cache peer for that particular domain. Let's say we want to use the videoproxy.example.com cache peer for browsing video portals like Youtube, Netflix, Metacafe, and so on.
cache_peer_domain videoproxy.example.com .youtube.com .netflix.com
cache_peer_domain videoproxy.example.com .metacafe.com
These two lines will configure Squid to use the videoproxy.example.com cache peer for requests to the domains youtube.com, netflix.com, and metacafe.com only. Requests to other domains will not be forwarded using this peer.
Advanced control on access using peers
We just learned about cache_peer_domain, which provides a way to control access using cache peers. However, it's not really flexible in granting or revoking access. That's when cache_peer_access comes into the picture, which provides a very flexible way to control access using cache peers using ACLs. The syntax and implications are similar to other access directives such as http_access.
cache_peer_access CACHE_PEER_HOSTNAME allow|deny [!]ACL_NAME
Let's write the following configuration lines, which will allow only the clients on the network 192.0.2.0/24 to use the cache peer acadproxy.example.com for accessing Youtube, Netflix, and Metacafe.
acl my_network src 192.0.2.0/24
acl video_sites dstdomain .youtube.com .netflix.com .metacafe.com
cache_peer_access acadproxy.example.com allow my_network video_sites
cache_peer_access acadproxy.example.com deny all
In the same way, we can use other ACL types to achieve better control over access to various websites using cache peers.
Caching web documents
All this time, we have been talking about the caching of web documents and how it helps in saving bandwidth and improving the end user experience, now it's time to learn how and where Squid actually keeps these cached documents so that they can be served on demand. Squid uses main memory (RAM) and hard disks for storing or caching the web documents. Caching is a complex process but Squid handles it beautifully and exposes the directives using squid.conf, so that we can control how much should be cached and what should be given the highest priority while caching. Let's have a brief look at the caching-related directives provided by Squid.
Using main memory (RAM) for caching
The web documents cached in the main memory or RAM can be served very quickly as data read/write speeds of RAM are very high compared to hard disks with mechanical parts. However, as the amount of space available in RAM for caching is very low compared to the cache space available on hard disks, only very popular objects or the documents with a very high probability of being requested again, are stored in cache space available in RAM. As the cache space in memory is precious, the documents are stored on a priority basis. Let's have a look at the different types of objects which can be cached.
In-transit objects or current requests
These are the objects related to the current requests and they have the highest priority to be kept in the cache space in RAM.
These objects must be kept in RAM and if there is a situation where the incoming request rate is quite high and we are about to overflow the cache space in RAM, Squid will try to keep the served part (the part which has already been sent to the client) on the disk to create free space in RAM. Hot or popular objects These objects or web documents are popular and are requested quite frequently compared to others. These are stored in the cache space left after storing the in-transit objects as these have a lower priority than in-transit objects. These objects are generally pushed to disk when there is a need to generate more in RAM cache space for storing the in-transit objects. Negatively cached objects Negatively cached objects are error messages which Squid has encountered while fetching a page or web document on behalf of a client. For example, if a request to a web page has resulted in a HTTP error 404 (page not found), and Squid receives a subsequent request for the same web page, then Squid will check if the response is still fresh and will return a reply from the cache itself. If there is a request for the same page after the negatively cached object corresponding to that page has expired, Squid will check again if the page is available. Negatively cached objects have the same priority as hot or popular objects and they can be pushed to disk at any time in favor of in-transit objects. Specifying cache space in RAM So far we have learned about how the available cache space is utilized for storing or caching different types of objects with different priorities. Now, it's time to learn about specifying the amount of RAM space we want to dedicate for caching. While deciding the RAM space for caching, we should be neither greedy nor paranoid. If we specify a large percentage of RAM for caching, the overall system performance will suffer as the system will start swapping processes in case there is no free RAM left for other processes. If we use a very low percentage of RAM for caching, then we'll not be able to take full advantage of Squid's caching mechanism. The default size of the memory cache is 256 MB. Time for action – specifying space for memory caching We can use extra RAM space available on a running system after sparing a chunk of memory that can be utilized by the running process under heavy load. To find out the amount of free RAM available on our system, we can use either the top or free command. To find out the free RAM in Megabytes, we can use the free command as follows: $ free -m For more details, please check the top(1) and free(1) man pages. Now, let's say we have 4 GB of total RAM on the server and all the processes are running comfortably in 1 GB of RAM space. After securing another 512 MB for emergency situations where running processes may take extra memory, we can safely allocate 2.5 GB of RAM for caching. To specify the cache size in the main memory, we use the directive cache_mem. It has a very simple format. As we have learned before, we can specify the memory size in bytes, KB, MB, or GB. Let's specify the cache memory size for the previous example: cache_mem 2500 MB The previous value specified with cache_mem is in Megabytes. What just happened? We learned about calculating the approximate space in the main memory, which can be used to cache web documents and therefore enhance the performance of the Squid server by a significant margin. 
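As a quick sanity check of the arithmetic in the example above, the following Python lines reproduce it; the figures are the example's, not recommendations, so substitute the numbers for your own server.
# All values in MB, following the 4 GB server example above
total_ram = 4096
used_by_processes = 1024    # what the running processes need under load
emergency_reserve = 512     # kept free for unexpected spikes

cache_mem = total_ram - used_by_processes - emergency_reserve
print(cache_mem)            # 2560, rounded down to the 2500 MB used in the example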
Have a go hero – calculating cache_mem for your machine Note down the total RAM on your machine and calculate the approximate space in megabytes that you can allocate for memory caching. Maximum object size in memory As we have limited space in memory available for caching objects, we need to use the space in an optimized way. We should plan to set this a bit low, as setting it to a too larger size will mean that there will be a lesser number of cached objects in the memory and the HIT (being found in cache) rate will suffer significantly. The default maximum size used by Squid is 512 KB, but we can change it depending on our value for cache_mem. So, if we want to set it to 1 MB, as we have a lot of RAM available for caching (as in the previous example), we can use the maximum_object_size_in_memory directive as follows: maximum_object_size_in_memory 1 MB This command will set the allowed maximum object size in memory cache to 1 MB. Memory cache mode With the newer versions of Squid, we can control which objects we want to keep in the memory cache for optimizing the performance. Squid offers the directive memory_cache_mode to set the mode that Squid should use to utilize the space available in memory cache. There are three different modes available: Mode Description always The mode always is used to keep all the most recently fetched objects that can fit in the available space. This is the default mode used by Squid. disk When the disk mode is set, only the objects which are already cached on a hard disk and have received a HIT (meaning they were requested subsequently after being cached), will be stored in the memory cache. network Only the objects which have been fetched from the network (including neighbors) are kept in the memory cache, if the network mode is set. Setting the mode is easy and can be set using the memory_cache_mode directive as shown: memory_cache_mode always This configuration line will set memory cache mode to always; this means that most recently fetched objects will be kept in the memory.  
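Putting the memory-related directives from this section together, the corresponding fragment of squid.conf for the example server discussed above would look like the following; the values are the ones worked out in the examples and should be tuned for your own hardware:
cache_mem 2500 MB
maximum_object_size_in_memory 1 MB
memory_cache_mode always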
article-image-implementing-autocompletion-in-a-react-material-ui-application-tutorial
Bhagyashree R
16 May 2019
14 min read
Save for later

Implementing autocompletion in a React Material UI application [Tutorial]

Web applications typically provide autocomplete input fields when there are too many choices to select from. Autocomplete fields are like text input fields—as users start typing, they are given a smaller list of choices based on what they've typed. Once the user is ready to make a selection, the actual input is filled with components called Chips—especially relevant when the user needs to be able to make multiple selections. In this article, we will start by building an Autocomplete component. Then we will move on to implementing multi-value selection and see how to better serve the autocomplete data through an API. To help our users better understand the results we will also implement a feature that highlights the matched portion of the string value. This article is taken from the book React Material-UI Cookbook by Adam Boduch. This book will serve as your ultimate guide to building compelling user interfaces with React and Material Design. To follow along with the examples implemented in this article, you can download the code from the book’s GitHub repository. Building an Autocomplete component Material-UI doesn't actually come with an Autocomplete component. The reason is that, since there are so many different implementations of autocomplete selection components in the React ecosystem already, it doesn't make sense to provide another one. Instead, you can pick an existing implementation and augment it with Material-UI components so that it can integrate nicely with your Material-UI application. How to do it? You can use the Select component from the react-select package to provide the autocomplete functionality that you need. You can use Select properties to replace key autocomplete components with Material-UI components so that the autocomplete matches the look and feel of the rest of your app. Let's make a reusable Autocomplete component. The Select component allows you to replace certain aspects of the autocomplete experience. In particular, the following are the components that you'll be replacing: Control: The text input component to use Menu: A menu with suggestions, displayed when the user starts typing NoOptionsMessage: The message that's displayed when there aren't any suggestions to display Option: The component used for each suggestion in Menu Placeholder: The placeholder text component for the text input SingleValue: The component for showing a value once it's selected ValueContainer: The component that wraps SingleValue IndicatorSeparator: Separates buttons on the right side of the autocomplete ClearIndicator: The component used for the button that clears the current value DropdownIndicator: The component used for the button that shows Menu Each of these components is replaced with Material-UI components that change the look and feel of the autocomplete. Moreover, you'll have all of this as new Autocomplete components that you can reuse throughout your app. Let's look at the result before diving into the implementation of each replacement component. Following is what you'll see when the screen first loads: If you click on the down arrow, you'll see a menu with all the values, as follows: Try typing tor into the autocomplete text field, as follows: If you make a selection, the menu is closed and the text field is populated with the selected value, as follows: You can change your selection by opening the menu and selecting another value, or you can clear the selection by clicking on the clear button to the right of the text. How does it work? 
How does it work?

Let's break down the source by looking at the individual components that make up the Autocomplete component and that replace pieces of the Select component. Then, we'll look at the final Autocomplete component.

Text input control

Here's the source for the Control component:

const inputComponent = ({ inputRef, ...props }) => (
  <div ref={inputRef} {...props} />
);

const Control = props => (
  <TextField
    fullWidth
    InputProps={{
      inputComponent,
      inputProps: {
        className: props.selectProps.classes.input,
        inputRef: props.innerRef,
        children: props.children,
        ...props.innerProps
      }
    }}
    {...props.selectProps.textFieldProps}
  />
);

The inputComponent() function is a component that passes the inputRef value (a reference to the underlying input element) to the ref prop. Then, inputComponent is passed to InputProps to set the input component used by TextField. This component is a little bit confusing because it passes references around and uses a helper component for that purpose. The important thing to remember is that the job of Control is to set up the Select component to use a Material-UI TextField component.

Options menu

Here's the component that displays the autocomplete options when the user starts typing or clicks on the down arrow:

const Menu = props => (
  <Paper
    square
    className={props.selectProps.classes.paper}
    {...props.innerProps}
  >
    {props.children}
  </Paper>
);

The Menu component renders a Material-UI Paper component so that the element surrounding the options is themed accordingly.

No options available

Here's the NoOptionsMessage component. It is rendered when there aren't any autocomplete options to display, as follows:

const NoOptionsMessage = props => (
  <Typography
    color="textSecondary"
    className={props.selectProps.classes.noOptionsMessage}
    {...props.innerProps}
  >
    {props.children}
  </Typography>
);

This renders a Typography component with textSecondary as the color property value.

Individual option

Individual options that are displayed in the autocomplete menu are rendered using the MenuItem component, as follows:

const Option = props => (
  <MenuItem
    buttonRef={props.innerRef}
    selected={props.isFocused}
    component="div"
    style={{
      fontWeight: props.isSelected ? 500 : 400
    }}
    {...props.innerProps}
  >
    {props.children}
  </MenuItem>
);

The selected and style properties alter the way that the item is displayed, based on the isFocused and isSelected properties. The children property sets the value of the item.

Placeholder text

The Placeholder text of the Autocomplete component is shown before the user types anything or makes a selection, as follows:

const Placeholder = props => (
  <Typography
    color="textSecondary"
    className={props.selectProps.classes.placeholder}
    {...props.innerProps}
  >
    {props.children}
  </Typography>
);

The Material-UI Typography component is used to theme the Placeholder text.

SingleValue

Once again, the Material-UI Typography component is used to render the selected value from the menu within the autocomplete input, as follows:

const SingleValue = props => (
  <Typography
    className={props.selectProps.classes.singleValue}
    {...props.innerProps}
  >
    {props.children}
  </Typography>
);

ValueContainer

The ValueContainer component is used to wrap the SingleValue component with a div and the valueContainer CSS class, as follows:

const ValueContainer = props => (
  <div className={props.selectProps.classes.valueContainer}>
    {props.children}
  </div>
);

IndicatorSeparator

By default, the Select component uses a pipe character as a separator between the buttons on the right side of the autocomplete menu.
Since they're going to be replaced by Material-UI button components, this separator is no longer necessary, as follows:

const IndicatorSeparator = () => null;

By having the component return null, nothing is rendered.

Clear option indicator

This button is used to clear any selection made previously by the user, as follows:

const ClearIndicator = props => (
  <IconButton {...props.innerProps}>
    <CancelIcon />
  </IconButton>
);

The purpose of this component is to use the Material-UI IconButton component and to render a Material-UI icon. The click handler is passed in through innerProps.

Show menu indicator

Just like the ClearIndicator component, the DropdownIndicator component replaces the button used to show the autocomplete menu with an icon from Material-UI, as follows:

const DropdownIndicator = props => (
  <IconButton {...props.innerProps}>
    <ArrowDropDownIcon />
  </IconButton>
);

Styles

Here are the styles used by the various sub-components of the autocomplete:

const useStyles = makeStyles(theme => ({
  root: {
    flexGrow: 1,
    height: 250
  },
  input: {
    display: 'flex',
    padding: 0
  },
  valueContainer: {
    display: 'flex',
    flexWrap: 'wrap',
    flex: 1,
    alignItems: 'center',
    overflow: 'hidden'
  },
  noOptionsMessage: {
    padding: `${theme.spacing(1)}px ${theme.spacing(2)}px`
  },
  singleValue: {
    fontSize: 16
  },
  placeholder: {
    position: 'absolute',
    left: 2,
    fontSize: 16
  },
  paper: {
    position: 'absolute',
    zIndex: 1,
    marginTop: theme.spacing(1),
    left: 0,
    right: 0
  }
}));

The Autocomplete

Finally, here are the default properties of the Autocomplete component that you can reuse throughout your application:

Autocomplete.defaultProps = {
  isClearable: true,
  components: {
    Control,
    Menu,
    NoOptionsMessage,
    Option,
    Placeholder,
    SingleValue,
    ValueContainer,
    IndicatorSeparator,
    ClearIndicator,
    DropdownIndicator
  },
  options: [
    { label: 'Boston Bruins', value: 'BOS' },
    { label: 'Buffalo Sabres', value: 'BUF' },
    { label: 'Detroit Red Wings', value: 'DET' },
    { label: 'Florida Panthers', value: 'FLA' },
    { label: 'Montreal Canadiens', value: 'MTL' },
    { label: 'Ottawa Senators', value: 'OTT' },
    { label: 'Tampa Bay Lightning', value: 'TBL' },
    { label: 'Toronto Maple Leafs', value: 'TOR' },
    { label: 'Carolina Hurricanes', value: 'CAR' },
    { label: 'Columbus Blue Jackets', value: 'CBJ' },
    { label: 'New Jersey Devils', value: 'NJD' },
    { label: 'New York Islanders', value: 'NYI' },
    { label: 'New York Rangers', value: 'NYR' },
    { label: 'Philadelphia Flyers', value: 'PHI' },
    { label: 'Pittsburgh Penguins', value: 'PIT' },
    { label: 'Washington Capitals', value: 'WSH' },
    { label: 'Chicago Blackhawks', value: 'CHI' },
    { label: 'Colorado Avalanche', value: 'COL' },
    { label: 'Dallas Stars', value: 'DAL' },
    { label: 'Minnesota Wild', value: 'MIN' },
    { label: 'Nashville Predators', value: 'NSH' },
    { label: 'St. Louis Blues', value: 'STL' },
    { label: 'Winnipeg Jets', value: 'WPG' },
    { label: 'Anaheim Ducks', value: 'ANA' },
    { label: 'Arizona Coyotes', value: 'ARI' },
    { label: 'Calgary Flames', value: 'CGY' },
    { label: 'Edmonton Oilers', value: 'EDM' },
    { label: 'Los Angeles Kings', value: 'LAK' },
    { label: 'San Jose Sharks', value: 'SJS' },
    { label: 'Vancouver Canucks', value: 'VAN' },
    { label: 'Vegas Golden Knights', value: 'VGK' }
  ]
};

The piece that ties all of the previous components together is the components property that's passed to Select. This is actually set as a default property in Autocomplete, so it can be further overridden. The value passed to components is a simple object that maps the component name to its implementation.
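The body of the Autocomplete function itself is not reproduced in this excerpt. As a rough sketch (not the book's exact code), it typically applies the useStyles hook, keeps the selected value in local state, and forwards everything else, including the components map and options from defaultProps, straight to Select. The label text and state handling below are assumptions that mirror the AsyncSelect usage shown later in this article:

import React, { useState } from 'react';
import Select from 'react-select';

// Assumes the replacement components, useStyles, and the
// Autocomplete.defaultProps shown above live in the same module.
const Autocomplete = props => {
  const classes = useStyles();
  const [value, setValue] = useState(null);

  return (
    <div className={classes.root}>
      <Select
        value={value}
        onChange={value => setValue(value)}
        textFieldProps={{ label: 'Team', InputLabelProps: { shrink: true } }}
        {...{ ...props, classes }}
      />
    </div>
  );
};

Because classes is spread into the Select props, react-select hands it back to every replacement component through props.selectProps.classes, which is how the styles defined above reach Control, Menu, and the rest.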
Selecting autocomplete suggestions

In the previous section, you built an Autocomplete component capable of selecting a single value. Sometimes, you need the ability to select multiple values from an Autocomplete component. The good news is that, with a few small additions, the component that you created in the previous section already does most of the work.

How to do it?

Let's walk through the additions that need to be made in order to support multi-value selection in the Autocomplete component, starting with the new MultiValue component, as follows:

const MultiValue = props => (
  <Chip
    tabIndex={-1}
    label={props.children}
    className={clsx(props.selectProps.classes.chip, {
      [props.selectProps.classes.chipFocused]: props.isFocused
    })}
    onDelete={props.removeProps.onClick}
    deleteIcon={<CancelIcon {...props.removeProps} />}
  />
);

The MultiValue component uses the Material-UI Chip component to render a selected value. In order to pass MultiValue to Select, add it to the components object that's passed to Select:

components: {
  Control,
  Menu,
  NoOptionsMessage,
  Option,
  Placeholder,
  SingleValue,
  MultiValue,
  ValueContainer,
  IndicatorSeparator,
  ClearIndicator,
  DropdownIndicator
},

Now you can use your Autocomplete component for single-value selection or for multi-value selection. You can add the isMulti property with a default value of true to defaultProps, as follows:

isMulti: true,

Now, you should be able to select multiple values from the autocomplete.

How does it work?

Nothing looks different about the autocomplete when it's first rendered, or when you show the menu. When you make a selection, the Chip component is used to display the value. Chips are ideal for displaying small pieces of information like this. Furthermore, the close button integrates nicely with it, making it easy for the user to remove individual selections after they've been made. Once multiple selections have been made, each selected value appears as its own Chip in the autocomplete input.

API-driven Autocomplete

You can't always have your autocomplete data ready to render on the initial page load. Imagine trying to load hundreds or thousands of items before the user can interact with anything. The better approach is to keep the data on the server and supply it through an API endpoint that receives the autocomplete text as the user types. Then you only need to load the smaller set of data returned by the API.

How to do it?

Let's rework the example from the previous section. We'll keep all of the same autocomplete functionality, except that, instead of passing an array to the options property, we'll pass in an API function that returns a Promise. Here's the function that mocks an API call by resolving a Promise:

const someAPI = searchText =>
  new Promise(resolve => {
    setTimeout(() => {
      const teams = [
        { label: 'Boston Bruins', value: 'BOS' },
        { label: 'Buffalo Sabres', value: 'BUF' },
        { label: 'Detroit Red Wings', value: 'DET' },
        ...
      ];

      resolve(
        teams.filter(
          team =>
            searchText &&
            team.label
              .toLowerCase()
              .includes(searchText.toLowerCase())
        )
      );
    }, 1000);
  });

This function takes a search string argument and returns a Promise. The same data that would otherwise be passed to the Select component in the options property is filtered here instead. Think of anything that happens in this function as happening behind an API in a real app. The returned Promise is then resolved with an array of matching items after a simulated latency of one second.
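To make the mock's behavior concrete, here is a small usage sketch. The search strings and logging are illustrative only, and the expected results assume that the elided teams array matches the options list shown earlier:

// Resolves after roughly one second with the teams whose label contains the text.
someAPI('maple').then(matches => {
  console.log(matches); // [{ label: 'Toronto Maple Leafs', value: 'TOR' }]
});

// An empty search string resolves with an empty array because of the
// searchText && ... guard inside the filter callback.
someAPI('').then(matches => {
  console.log(matches); // []
});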
You also need to add a couple of components to the composition of the Select component (we're up to 13 now!), as follows:

const LoadingIndicator = () => <CircularProgress size={20} />;

const LoadingMessage = props => (
  <Typography
    color="textSecondary"
    className={props.selectProps.classes.noOptionsMessage}
    {...props.innerProps}
  >
    {props.children}
  </Typography>
);

The LoadingIndicator component is shown on the right of the autocomplete text input. It uses the CircularProgress component from Material-UI to indicate that the autocomplete is doing something. The LoadingMessage component follows the same pattern as the other text replacement components used with Select in this example. The loading text is displayed when the menu is shown while the Promise that resolves the options is still pending.

Lastly, there's the Select component. Instead of using Select, you need to use the AsyncSelect version, as follows:

import AsyncSelect from 'react-select/lib/Async';

Otherwise, AsyncSelect works the same as Select, as follows:

<AsyncSelect
  value={value}
  onChange={value => setValue(value)}
  textFieldProps={{
    label: 'Team',
    InputLabelProps: {
      shrink: true
    }
  }}
  {...{ ...props, classes }}
/>

How does it work?

The only difference between a Select autocomplete and an AsyncSelect autocomplete is what happens while the request to the API is pending. As the user types, the CircularProgress component is rendered to the right of the input, while the loading message is rendered in the menu using a Typography component.

Highlighting search results

When the user starts typing in an autocomplete and the results are displayed in the dropdown, it isn't always obvious how a given item matches the search criteria. You can help your users better understand the results by highlighting the matched portion of the string value.

How to do it?

You'll want to use two functions from the autosuggest-highlight package to help highlight the text presented in the autocomplete dropdown, as follows:

import match from 'autosuggest-highlight/match';
import parse from 'autosuggest-highlight/parse';

Now, you can build a new component that will render the item text, highlighting as and when necessary, as follows:

const ValueLabel = ({ label, search }) => {
  const matches = match(label, search);
  const parts = parse(label, matches);

  return parts.map((part, index) =>
    part.highlight ? (
      <span key={index} style={{ fontWeight: 500 }}>
        {part.text}
      </span>
    ) : (
      <span key={index}>{part.text}</span>
    )
  );
};

The end result is that ValueLabel renders an array of span elements, determined by the parse() and match() functions. A span is bolded whenever part.highlight is true. Now, you can use ValueLabel in the Option component, as follows:

const Option = props => (
  <MenuItem
    buttonRef={props.innerRef}
    selected={props.isFocused}
    component="div"
    style={{
      fontWeight: props.isSelected ? 500 : 400
    }}
    {...props.innerProps}
  >
    <ValueLabel
      label={props.children}
      search={props.selectProps.inputValue}
    />
  </MenuItem>
);

How does it work?

Now, when you search for values in the autocomplete text input, the results highlight the search criteria in each item.
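To make the data flow through those two helpers concrete, here is roughly what they return for a single suggestion. The exact shapes are an assumption based on autosuggest-highlight's documented behavior rather than output taken from the book:

import match from 'autosuggest-highlight/match';
import parse from 'autosuggest-highlight/parse';

// match() returns the [start, end] index pairs of the matched substrings.
const matches = match('Toronto Maple Leafs', 'tor'); // roughly [[0, 3]]

// parse() splits the label into highlighted and non-highlighted parts, roughly:
// [
//   { text: 'Tor', highlight: true },
//   { text: 'onto Maple Leafs', highlight: false }
// ]
const parts = parse('Toronto Maple Leafs', matches);

// ValueLabel then renders the parts whose highlight flag is true in a heavier font.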
This article helped you implement autocompletion in your Material-UI React application. We then implemented multi-value selection and saw how to better serve the autocomplete data through an API endpoint. If you found this post useful, do check out the book, React Material-UI Cookbook by Adam Boduch. This book will help you build modern-day applications by implementing Material Design principles in React applications using Material-UI.

How to create a native mobile app with React Native [Tutorial]
Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]
How to build a Relay React App [Tutorial]

How to attack an infrastructure using VoIP exploitation [Tutorial]

Savia Lobo
03 Nov 2018
9 min read
Voice over IP (VoIP) is pushing business communications to a new level of efficiency and productivity. At the same time, VoIP-based systems face security risks on a daily basis. Although a lot of companies focus on VoIP quality of service, they often ignore the security aspects of the VoIP infrastructure, which makes them vulnerable to dangerous attacks.

This tutorial is an extract taken from the book Advanced Infrastructure Penetration Testing written by Chiheb Chebbi. In this book, you will explore exploitation abilities such as offensive PowerShell tools and techniques, CI servers, database exploitation, Active Directory delegation, and much more. In today's post, you will learn how to penetrate the VoIP infrastructure.

Like any other penetration test, exploiting the VoIP infrastructure requires a strategic operation based on a number of steps. Before attacking any infrastructure, we've learned that we need to perform footprinting, scanning, and enumeration before exploiting it, and that is exactly what we are going to do with VoIP.

To perform VoIP information gathering, we need to collect as much useful information as possible about the target. As a start, you can do a simple search online. Job announcements, for example, can be a valuable source of information: a job description that lists the VoIP products in use gives the attacker an idea of the target's VoIP environment, and the attacker can then search for known vulnerabilities in that particular system. Searching for phone numbers could also be a smart move, because each vendor ships a default voicemail greeting; if the administrator has not changed it, listening to the voicemail can tell you which system your target is running. If you want to have a look at some of the default voicemails, check http://www.hackingvoip.com/voicemail.html. It is a great resource for learning a great deal about hacking VoIP.

Google hacking is an amazing technique for finding information and exposed online portals. We discussed Google hacking using Dorks; for example, a query such as inurl:"Network Configuration" Cisco can return exposed Cisco phone network configuration pages. You can also find connected VoIP devices using the Shodan.io search engine.

VoIP devices are generally connected to the internet, so they can be reached by an outsider through their web interfaces. This is why leaving installation files exposed can be dangerous: search engines may index the portal. Typical findings include online Asterisk management portals and configuration pages of exposed devices, discovered with nothing more than a simple search engine query.

After collecting juicy information about the target, from an attacker's perspective, we usually move on to scanning. Carrying out host discovery and Nmap scanning is a good way of searching the infrastructure for VoIP devices and discovering VoIP services. For example, we saw the -sV option in Nmap to check services. In VoIP, if port 2000 is open, the device is likely a Cisco CallManager, because the SCCP protocol uses that port by default; if UDP port 5060 is open, the device is speaking SIP. The -O Nmap option can be useful for identifying the running operating system, as a lot of VoIP devices run a specific operating system, such as Cisco's embedded systems.
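Putting those options together, a first Nmap pass against a suspected VoIP host might look like the following sketch. The target placeholder and the choice to combine the SCCP and SIP ports are assumptions; adjust them to your scope:

# TCP service/version detection plus OS fingerprinting, including the default SCCP port
$ nmap -sV -O -p 2000,5060 <target>

# SIP normally listens on UDP 5060, so follow up with a UDP scan (requires root)
$ sudo nmap -sU -p 5060 <target>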
You know what to do now: after footprinting and scanning, we need to enumerate the target. As you can see, when exploiting an infrastructure, we generally follow the same methodological steps.

Banner grabbing is a well-known enumeration technique, and the first step in enumerating a VoIP infrastructure is a banner grabbing move. The Netcat utility will help you grab the banner easily, or you can simply use the Nmap script named banner:

nmap -sV --script=banner <target>

For a specific vendor, there are a lot of enumeration tools you can use. EnumIAX is one of them: a built-in enumeration tool in Kali Linux for brute forcing Inter-Asterisk Exchange protocol usernames. Automated Corporate Enumerator (ACE) is another built-in enumeration tool in Kali Linux. svmap is an open source tool, also built into Kali Linux, for identifying SIP devices; type svmap -h and you will get all the available options for this amazing tool.

VoIP attacks

By now, you have learned the required skills to perform VoIP footprinting, scanning, and enumeration. Let's discover the major VoIP attacks. VoIP faces multiple threats from different attack vectors.

Denial-of-Service

Denial-of-Service (DoS) is a threat to the availability of a network. DoS can be dangerous for VoIP too, as ensuring the availability of calls is vital in modern organizations; not only the availability but also the clarity of calls is a necessity nowadays. To monitor VoIP quality of service, you can use one of the many tools out there, such as CiscoWorks QoS Policy Manager 4.1.

To measure the quality of VoIP, there are scoring systems such as the Mean Opinion Score (MOS) and the R-value, based on several parameters (jitter, latency, and packet loss). MOS scores range from 1 to 5 (bad to very clear) and R-value scores range from 1 to 100 (bad to very clear). As an exercise, you can analyze an RTP sample capture downloaded from the Wireshark website, including its RTP jitter graph.

The VoIP infrastructure can also be hit by the classic DoS attacks. We saw some of them previously:

Smurf flooding attack
TCP SYN flood attack
UDP flooding attack

One of the DoS attack tools is iaxflood, which is available in Kali Linux. IAX stands for Inter-Asterisk Exchange. Open a Kali terminal and type:

iaxflood <Source IP> <Destination IP> <Number of packets>

Classic flooding is not the only option: attackers can also use packet fragmentation and malformed packets, generated with fuzzing tools, to attack the infrastructure.

Eavesdropping

Eavesdropping is one of the most serious VoIP attacks. It lets attackers violate your privacy, including your calls. There are many eavesdropping techniques; for example, an attacker can sniff the network for TFTP configuration files, which may contain passwords. An attacker can also harvest phone numbers and build a database of valid numbers by recording all outgoing and incoming calls. Eavesdropping does not stop there: attackers can record your calls and even recover what you are typing by decoding the Dual-Tone Multi-Frequency (DTMF) tones. You can use the DTMF decoder/encoder from http://www.polar-electric.com/DTMF/. Voice Over Misconfigured Internet Telephones (VOMIT) is a great utility for converting Cisco IP Phone conversations into WAV files. You can download it from its official website, http://vomit.xtdnet.nl/.
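If you want to experiment with this kind of traffic analysis yourself, a simple capture of the signalling, TFTP, and media traffic can be taken with tcpdump and then opened in Wireshark. The interface name and the RTP port range below are assumptions; adjust them to the environment you are testing:

# Capture SIP signalling (5060), TFTP (69), and a typical RTP media range for offline analysis
$ sudo tcpdump -i eth0 -w voip-capture.pcap 'port 5060 or port 69 or portrange 10000-20000'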
SIP attacks

Another attacking technique is SIP rogues. From an attacker's perspective, we can implement two types of SIP rogue:

Rogue SIP B2BUA: In this attacking technique, the attacker mimics a SIP B2BUA
SIP rogue as a proxy: Here, the attacker mimics a SIP proxy

SIP registration hijacking

SIP registration hijacking is a serious VoIP security problem. Previously, we saw that before establishing a SIP session, there is a registration step, and that registration can be hijacked by attackers. During a SIP registration hijacking attack, the attacker disables a normal user, with a Denial-of-Service attack for example, and then sends a registration request containing the attacker's own IP address instead of the user's. This works because SIP messages are transferred in cleartext, and SIP does not ensure the integrity of signalling messages.

If you are a Metasploit enthusiast, you can try many other SIP modules. Open a Metasploit console by typing msfconsole and search for SIP modules using search sip. To use a specific SIP module, simply type use <module>.

Spam over Internet Telephony

Spam over Internet Telephony (SPIT), sometimes called voice spam, is like email spam, but it affects VoIP. To perform a SPIT attack, you can use a generation tool called spitter.

Embedding malware

Malware is a major threat to the VoIP infrastructure. Insecure VoIP endpoints can be exploited by different types of malware, such as worms and VoIP botnets. Softphones are also a highly probable target for attackers: compromising a softphone is very dangerous because, once an attacker exploits it, they can compromise your VoIP network. Malware is not the only threat against VoIP endpoints; VoIP firmware is another potential attack vector, and firmware hacking can lead to phones being compromised.

Viproy – VoIP penetration testing kit

Viproy VoIP penetration testing kit (v4) is a VoIP and unified communications services pentesting tool presented at Black Hat Arsenal USA 2014 by Fatih Ozavci. To download the project, clone it from its official repository, https://github.com/fozavci/viproy-voipkit:

# git clone https://github.com/fozavci/viproy-voipkit

The project contains many modules for testing the SIP and Skinny protocols. To use them, copy the lib, modules, and data folders to the Metasploit folder on your system.

Thus, in this article, we demonstrated how to exploit the VoIP infrastructure. We explored the major VoIP attacks and how to defend against them, in addition to the tools and utilities most commonly used by penetration testers. If you've enjoyed reading this, do check out Advanced Infrastructure Penetration Testing to discover post-exploitation tips, tools, and methodologies to help your organization build an intelligent security system.

Managing a VoIP Solution with Active Directory Depends On Your Needs
Opus 1.3, a popular FOSS audio codec with machine learning and VR support, is now generally available
Approaching a Penetration Test Using Metasploit