
How-To Tutorials


How to use arrays, lists, and dictionaries in Unity for 3D game development

Amarabha Banerjee
16 May 2018
14 min read
A key ingredient in scripting 3D games with Unity is the ability to work with C# to create arrays, lists, objects, and dictionaries within the Unity platform. In this tutorial, we help you get started with creating arrays, lists, and dictionaries effectively. This article is an excerpt from Learning C# by Developing Games with Unity 2017. Read more here. You can also read the latest edition of the book here.

In the simplest terms, an array stores a sequential collection of values of the same type. We can use arrays to store lists of values in a single variable. Imagine we want to store a number of student names. Simple! Just create a few variables and name them student1, student2, and so on:

    public string student1 = "Greg";
    public string student2 = "Kate";
    public string student3 = "Adam";
    public string student4 = "Mia";

There's nothing wrong with this. We can print them and assign new values to them. The problem starts when you don't know how many names you will be storing. There is a much cleaner way of storing lists of data. Let's store the same names using a C# array variable type:

    public string[] familyMembers = new string[]{"Greg", "Kate", "Adam", "Mia"};

As you can see, all the preceding values are stored in a single variable called familyMembers.

Declaring an array

To declare a C# array, you must first say what type of data will be stored in the array. As you can see in the preceding example, we are storing strings of characters. After the type, we have an open square bracket and then immediately a closed square bracket, []. This makes the variable an actual array. We also need to declare the size of the array, which simply means how many places there are in our variable to be accessed. The minimum code required to declare a variable looks similar to this:

    public string[] myArrayName = new string[4];

The array size is set during assignment. As you have learned before, all code after the variable declaration and the equals sign is an assignment. To assign empty values to all places in the array, simply write the new keyword followed by the type, an open square bracket, a number describing the size of the array, and then a closed square bracket. If you feel confused, give yourself a bit more time; you will come to understand why arrays are helpful. Take a look at the following examples of arrays; don't worry about testing how they work yet:

    string[] familyMembers = new string[]{"John", "Amanda", "Chris", "Amber"};
    string[] carsInTheGarage = new string[] {"VWPassat", "BMW"};
    int[] doorNumbersOnMyStreet = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
    GameObject[] carsInTheScene = GameObject.FindGameObjectsWithTag("car");

As you can see, we can store different types of data as long as the elements in the array are of the same type. You are probably wondering why the last example, shown here, looks different:

    GameObject[] carsInTheScene = GameObject.FindGameObjectsWithTag("car");

In fact, we are just declaring a new array variable to store the collection of GameObjects in the scene that use the "car" tag. Jump into the Unity scripting documentation and search for GameObject.FindGameObjectsWithTag: it is a built-in Unity function that takes a string parameter (the tag) and returns an array of GameObjects using this tag.

Storing items in the List

Using a List instead of an array can be much easier to work with in a script. Look at some forum sites related to C# and Unity, and you'll discover that plenty of programmers simply don't use an array unless they have to; they prefer to use a List. The choice comes down to the developer's preference and the task at hand. Let's stick to Lists for now. Here are the basics of why a List is better and easier to use than an array:

- An array is of fixed size and unchangeable
- The size of a List is adjustable
- You can easily add and remove elements from a List
- To mimic adding a new element to an array, we would need to create a whole new array with the desired number of elements and then copy over the old elements

The first thing to understand is that a List can store any type of object, just like an array. Also, like an array, we must specify which type of object we want a particular List to store: if you want a List of integers, you create a List that stores only the int type. Let's go back to the first array example and store the same data in a List. To use a List in C#, you need to add the following line at the beginning of your script:

    using System.Collections.Generic;
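The script demonstrating this appeared only as a screenshot in the original article. The following is a minimal reconstruction based on the surrounding description (the class name and the placement of the Add calls in Start() are assumptions; the familyMembers name and the values come from the text):

    using System.Collections.Generic;
    using UnityEngine;

    public class LearningLists : MonoBehaviour
    {
        // Declare the List and assign an empty instance to it.
        public List<string> familyMembers = new List<string>();

        void Start()
        {
            // Add elements at the end of the List.
            familyMembers.Add("Greg");
            familyMembers.Add("Kate");
            familyMembers.Add("Adam");
            familyMembers.Add("Mia");
        }
    }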
As you can see, using Lists is slightly different from using arrays. The first line inside the class in the reconstruction above is the declaration and assignment of the familyMembers List. When declaring the List, you must state the type of object it will store; simply write the type between the < > characters. In this case, we are using string. As we add the actual elements later, in the Add calls, instead of assigning elements in the declaration line, we need to assign an empty List to the familyMembers variable. Confused? If so, just look at the right-hand side of the equals sign in the declaration. This is how you create a new instance of the List for a given type, string in this example:

    new List<string>();

The Add calls are very simple to understand: each one appends an object at the end of the List, passing the string value in the parentheses. In various documentation, you will see Lists written as List<T>. Here, T stands for the type of data; you can insert any type in place of T and the List will become a list of that specific type. From now on, we will use this notation.

Common operations with Lists

List<T> is very easy to use, and there is a huge set of operations that you can perform with it. We have already spoken about adding an element at the end of a List. Very briefly, let's look at the common operations we are likely to use at later stages:

- Add: adds an object at the end of List<T>
- Remove: removes the first occurrence of a specific object from List<T> (for example, familyMembers.Remove("Kate"))
- Clear: removes all elements from List<T>
- Contains: determines whether an element is in List<T>; very useful for checking whether an element is stored in the list
- Insert: inserts an element into List<T> at the specified index
- ToArray: copies the elements of List<T> to a new array

You don't need to understand all of these at this stage. All you need to know is that there are many out-of-the-box operations you can use. If you want to see them all, I encourage you to dive into the C# documentation and search for the List<T> class.

List<T> versus arrays

Now you are probably thinking, "Okay, which one should I use?" There isn't a general rule for this. Arrays and List<T> can serve the same purpose, and you can find a lot of additional information online to convince you to use one or the other. Arrays are generally faster.
For what we are doing at this stage, we don't need to worry about processing speeds. Some time from now, however, you might need a bit more speed if your game slows down, so this is good to remember. List<T> offers great flexibility: you don't need to know the size of the list during declaration, and there is a massive set of out-of-the-box operations you can use with it, so it is my recommendation. In short: an array is faster, List<T> is more flexible.

Retrieving the data from the Array or List<T>

Declaring and storing data in an array or list is very clear to us now. The next thing to learn is how to get stored elements back. To get a stored element from an array, write the array variable name followed by square brackets containing an int value. That value is called an index, and it is simply a position in the array. So, to get the first element stored in the array, we write the following code:

    myArray[0];

Unity will return the data stored in the first place in myArray. It works exactly the same way as methods with a return type: if myArray stores a string value at index 0, that string is returned to the place where you call it. Complex? It's not; let's see an example. The index value starts at 0, not 1, so the first element in an array containing 10 elements is accessible through an index value of 0, and the last one through a value of 9.

Let's extend the familyMembers example. Most of the extended script is pretty obvious, isn't it? The key new line creates a new variable called thirdFamilyMember and assigns it the third value stored in the familyMembers list, like this: string thirdFamilyMember = familyMembers[2];. We use an index value of 2 instead of 3 because in programming, counting starts at 0. Try to memorize this; it is a common mistake made by beginners. Go ahead and click Play, and you will see the name Adam printed in the Unity Console. While accessing objects stored in an array, make sure you use an index value between zero and the size of the array minus one. In simpler words, we cannot access data at index 10 in an array that contains only four objects. Makes sense?

Checking the size

It is very common to need to check the size of an array or list, and there is a slight difference between a C# array and List<T>. To get the size as an integer value, we write the name of the variable, then a dot, and then Length for an array or Count for List<T>:

- arrayName.Length: returns an integer value with the size of the array
- listName.Count: returns an integer value with the size of the list

As we need to settle on one of the choices here and move on, from now on we will be using List<T>.

ArrayList

We definitely know how to use Lists now. We also know how to declare a new List and add, remove, and retrieve elements. Moreover, you have learned that the data stored in List<T> must be of the same type across all elements. Let's throw a little curveball: ArrayList is basically List<T> without a specified type of data. This means we can store whatever objects we want, and storing elements of different types is also possible. ArrayList is very flexible. Take a look at the following example to understand what ArrayList can look like.
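The ArrayList example appeared only as a screenshot in the original article. Here is a minimal reconstruction based on the surrounding description (the class name, the specific values, and the use of gameObject for the GameObject element are assumptions; the mix of two integers, a string, and a GameObject comes from the text):

    using System.Collections;
    using UnityEngine;

    public class LearningArrayLists : MonoBehaviour
    {
        void Start()
        {
            ArrayList myArrayList = new ArrayList();

            // Mixed types stored in a single collection.
            myArrayList.Add(12);
            myArrayList.Add(3);
            myArrayList.Add("Mia");
            myArrayList.Add(gameObject);

            // Print the types of the second and third elements.
            Debug.Log(myArrayList[1].GetType());
            Debug.Log(myArrayList[2].GetType());
        }
    }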
You have probably noticed that ArrayList also supports all the common operations, such as .Add(). The four Add calls place different elements into the ArrayList: the first two are of the integer type, the third is a string, and the last one is a GameObject. All mixed types of elements in one variable!

When using ArrayList, you might need to check what type of element is under a specific index to know how to treat it in code. Unity provides a very useful function that you can use on virtually any type of object: the GetType() method, which returns the type of the object, not its value. The two Debug.Log calls at the end of the reconstruction use it to print the types of the second and third elements. Go ahead, write the preceding code, and click Play; the Console window will show the types of those two elements.

Dictionaries

When we talk about collection data, we need to mention Dictionaries. A Dictionary is similar to a List; however, instead of accessing a certain element by an index value, we use a string called a key. The Dictionary that you will probably be using most often is called Hashtable. Feel free to dive into the C# documentation after reading this chapter to discover all the details of this powerful class. Here are a few key properties of Hashtable:

- Hashtable can be resized dynamically, like List<T> and ArrayList
- Hashtable can store multiple data types at the same time, like ArrayList
- A public member Hashtable isn't visible in the Unity Inspector panel, due to default Inspector limitations

I want to make sure that you won't feel confused, so I will go straight to a simple example:
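The example scripts for this section appeared as screenshots in the original article. Here is a minimal reconstruction covering everything the following discussion refers to (the personalDetails name and the firstName and age keys come from the text; the stored values and the class name are assumptions):

    using System.Collections;
    using UnityEngine;

    public class LearningDictionaries : MonoBehaviour
    {
        void Start()
        {
            Hashtable personalDetails = new Hashtable();

            // The first value in the parentheses is the key;
            // the second is the value stored under that key.
            personalDetails.Add("firstName", "Greg");
            personalDetails.Add("age", 40);

            // Cast each element to the expected type before using it.
            Debug.Log((string)personalDetails["firstName"]);
            Debug.Log((int)personalDetails["age"]);

            // Guard against keys that may not exist.
            if (personalDetails.Contains("age"))
            {
                Debug.Log((int)personalDetails["age"]);
            }
        }
    }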
Accessing values

To access a specific key in the Hashtable, you must know the string key the value is stored under. Remember, the key is the first value in the parentheses when adding an element to a Hashtable. Ideally, you should also know the type of data you are trying to access; in most cases, that won't be an issue. Take a look at this line:

    Debug.Log((string)personalDetails["firstName"]);

We already know that Debug.Log serves to display a message on the Unity console, so what are we trying to display? A string value (one that can contain letters and numbers); then we specify where that value is stored. In this case, the information is stored in the personalDetails Hashtable, and the content that we want to display is stored under the firstName key. Now take a look at the script once again and see if you can display the age; remember that the value we are trying to access here is a number, so we should use int instead of string.

Similar to ArrayList, we can store mixed-type data in a Hashtable, so Unity requires the developer to specify how an accessed element should be treated. To do this, we need to cast the element to a specific data type. The syntax is very simple: brackets with the data type inside, followed by the Hashtable variable name, and then, in square brackets, the key string the value is stored under. Ufff, confusing! As you can see in the preceding line, we are casting to string (inside the brackets). If we were to access another type of data, for example an integer number, the syntax would look like this:

    (int)personalDetails["age"];

I hope that this is clear now. If it isn't, why not search for more examples on the Unity forums?

How do I know what's inside my Hashtable?

By default, a Hashtable isn't displayed in the Unity Inspector panel, so you cannot simply look at the Inspector tab and preview all keys and values in your public member Hashtable. We can do this in code, however. You know how to access a value and cast it, but what happens if you try to access a value under a key that isn't stored in the Hashtable? Unity will spit out a null reference error, and your program is likely to crash. To check whether an element exists in the Hashtable, we can use the .Contains(object) method, passing the key as the parameter, as in the if statement at the end of the reconstruction above. This determines whether the Hashtable contains the item; if it does, the code continues, and otherwise it stops there, preventing any error.

We discussed how to use C# to create arrays, lists, dictionaries, and objects in Unity. The code samples and examples will help you implement these on the platform. Do check out the book Learning C# by Developing Games with Unity 2017 to develop your first interactive 2D and 3D platform game.

Read More:
- Unity 2D & 3D game kits simplify Unity game development for beginners
- How to create non-player characters (NPC) with Unity 2018
- Game Engine Wars: Unity vs Unreal Engine


Customizing Elgg Themes

Packt
27 Oct 2009
8 min read
Why Customize?

Elgg ships with a professional-looking and slick grayish-blue default theme. Depending on how you want to use Elgg, you may want your network to have a personalized look. If you are using the network for your own personal use, you may not care much about how it looks. But if you are using the network as part of your existing online infrastructure, would you really want it to look like every other default Elgg installation on the planet? Of course not! Visitors to your network should easily identify your network and relate it to you.

But theming isn't just about glitter. If you thought themes were all about gloss, think again. A theme helps you brand your network. As the underlying technology is the same, a theme is what really separates your network from the others out there.

What Makes Up a Theme?

There are several components that define your network. Let's take a look at the main components and some practices to follow while using them:

- Colors: Colors are an important part of your website's identity. If you have an existing website, you'd want your Elgg network to have the same family of colors as your main website. If the two (your website and social network) are very different, the changeover from one to the other could be jarring. While this isn't necessary, maintaining color consistency is a good practice.
- Graphics: Graphics help you brand the network to make it your own. Every institution has a logo. Using a logo in your Elgg theme is probably the most basic change you'd want to make. But make sure the logo blends in with the theme, that is, it has the same background color.
- Code: It takes a little bit of HTML, a sprinkle of PHP, and some CSS magic to manipulate and control a theme.
- A CSS file: As the name suggests, this file contains all the CSS decorations. You can choose to alter colors, fonts, and other elements in this file.
- A pageshell file: In the pageshell file, you define the structure of your Elgg network. If you want to change the position of the Search bar, or move from the standard two-column format to a three-column display, this is the file you need to modify.
- Front page files: Two files control how the landing page of your Elgg network appears to logged-in and logged-out users.
- Optional images folder: This folder houses all the logos and other artwork used directly by the theme. Please note that this folder does not include any other graphic elements we've covered in previous tutorials, such as your picture, icons for communities, and so on.

Controlling Themes

Rather than being single humongous files, themes in Elgg are a bunch of small, manageable files, with the CSS decoration separated from the placement code. Before getting our hands dirty creating a theme, let's take a look at the files that control the look and feel of your network. All themes must have the files described above: a CSS file, a pageshell file, and the front page files.

The Default Template

Elgg ships with a default template that you can find under your Elgg installation; its files and folders follow the structure described in this section. Before we look at the individual files and examine their contents in detail, let's first understand their content in general. All three files (pageshell, frontpage_loggedin, and frontpage_loggedout) are made up of two types of components. Keywords are used to pull content from the database and display them on the page.
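For example, this fragment from the pageshell shown later combines both component types: the {{url}} and {{sitename}} keywords pull values from the database, while the surrounding markup arranges them on the page:

    <h1><a href="{{url}}">{{sitename}}</a></h1>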
Arranging these keywords are the <div> and <span> tags, along with several others, like h1 and ul, that are defined in the CSS file.

What are <div> and <span>?

The <div> and <span> tags are two very important tags, especially when it comes to working with CSS files. In a snap, these two tags are used to style arbitrary sections of HTML. <div> does much the same thing as a paragraph tag <p>, but it divides the page up into larger sections. With <div>, you can also define the style of whole sections of HTML. This is especially useful when you want to give particular sections a different style from the surrounding text. The <span> tag is similar to the <div> tag; it is also used to change the style of the text it encloses. The difference between <span> and <div> is that a span element is inline, and is usually used for a small chunk of inline HTML. Both <div> and <span> work with two important attributes, class and id. The most common use of these containers together with the class or id attributes is to apply layout, color, and other presentation attributes to the page's content with CSS. In the forthcoming sections, we'll see how the two container items use these attributes to influence themes.

The pageshell

Now, let's dive into understanding the themes. Here's an exact replica of the pageshell of the Default template:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html>
    <head>
      <title>{{title}}</title>
      {{metatags}}
    </head>
    <body>{{toolbar}}
      <div id="container"><!-- open container which wraps the header, maincontent and sidebar -->
        <div id="header"><!-- open div header -->
          <div id="header-inner">
            <div id="logo"><!-- open div logo -->
              <h1><a href="{{url}}">{{sitename}}</a></h1>
              <h2>{{tagline}}</h2>
            </div><!-- close div logo -->
            {{searchbox}}
          </div>
        </div><!-- close div header -->
        <div id="content-holder"><!-- open content-holder div -->
          <div id="content-holder-inner">
            <div id="splitpane-sidebar"><!-- open splitpane-sidebar div -->
              <ul><!-- open sidebar lists -->
                {{sidebar}}
              </ul><!-- close sidebar lists -->
            </div><!-- close splitpane-sidebar div -->
            <div id="splitpane-content"><!-- open splitpane-content div -->
              {{messageshell}}
              {{mainbody}}
            </div><!-- close open splitpane-content div -->
          </div>
        </div><!-- close content-holder div -->
        <div id="footer"><!-- start footer -->
          <div id="footer-inner">
            <span style="color:#FF934B">{{sitename}}</span>
            <a href="{{url}}content/terms.php">Terms and conditions</a> |
            <a href="{{url}}content/privacy.php">Privacy Policy</a><br />
            <a href="http://elgg.org/"><img src="{{url}}mod/template/icons/elgg_powered.png" title="Elgg powered" border="0" alt="Elgg powered"/></a><br />
            {{perf}}
          </div>
        </div><!-- end footer -->
      </div><!-- close container div -->
    </body>
    </html>

CSS Elements in the pageshell

Phew! That's a lot of mumbo-jumbo. But wait a second! Don't jump to conclusions! Browse through this section, where we disassemble the file into easy-to-understand chunks. First, we'll go over the elements that control the layout of the pageshell.

<div id="container">: This container wraps the complete page and all its elements, including the header, main content, and sidebar. In the CSS file, this is defined as:

    div#container {width:940px;margin:0 auto;padding:0;background:#fff;border-top:1px solid #fff;}

<div id="header">: This houses all the header content, including the logo and search box.
The CSS definition for the header element:

    div#header {margin:0;padding:0;text-align:left;background:url({{url}}mod/template/templates/Default_Template/images/header-bg.gif) repeat-x;position:relative;width:100%;height:120px;}

The CSS definition for the logo:

    div#header #logo{margin: 0px;padding:10px;float:left;}

The search box is controlled by the search-header element:

    div#header #search-header {float:right;background:url({{url}}mod/template/templates/Default_Template/images/search_icon.gif) no-repeat left top;width:330px;margin:0;padding:0;position:absolute;top:10px;right:0;}

<div id="header-inner">: While the CSS file of the default template doesn't define the header-inner element, you can use it to control the area allotted to the elements in the header. When this element isn't defined, the logo and search box take up the full area of the header; if you want padding in the header around all the elements it houses, specify it using this element.

<div id="content-holder">: This wraps the main content area.

    #content-holder {padding:0px;margin:0px;width:100%;min-height:500px;overflow:hidden;position:relative;}

<div id="splitpane-sidebar">: In the default theme, the main content area has a two-column layout, split between the content and the sidebar area.

    div#splitpane-sidebar {width: 220px;margin:0;padding:0;background:#fff url({{url}}mod/template/templates/Default_Template/images/side-back.gif) repeat-y;margin:0;float: right;}
    div#splitpane-content {margin: 0;padding: 0 0 0 10px;width:690px;text-align: left;color:#000;overflow:hidden;min-height:500px;}

<div id="single-page">: While not used in the Default template, the content area can also have a simple single-page layout, without the sidebar.

    div#single-page {margin: 0;padding: 0 15px 0 0;width:900px;text-align: left;border:1px solid #eee;}

<div id="content-holder-inner">: Just like header-inner, this is used only if you would like a full-page layout but a defined width for the actual content.

<div id="footer">: Wraps the footer of the page, including the links to the terms and conditions and the privacy policy, along with the Elgg powered icon.

    div#footer {clear: both;position: relative;background:url({{url}}mod/template/templates/Default_Template/images/footer.gif) repeat-x top;text-align: center;padding:10px 0 0 0;font-size:1em;height:80px;margin:0;color:#fff;width:100%;}

<div id="footer-inner">: Like the other inner elements, this is only used if you would like a full-page layout but want to restrict the width of the actual footer content.
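To make the relationship between the markup and the CSS file concrete, here is a tiny self-contained example of the same pattern the template uses (the element names are illustrative, not taken from the Elgg files): an id selector styles one unique section, while a class selector can style many elements at once.

    <div id="sidebar" class="rounded">Sidebar content</div>
    <span class="highlight">An inline highlight</span>

    /* In the CSS file: */
    div#sidebar { width: 220px; float: right; }  /* one specific element, matched by id */
    .rounded    { border: 1px solid #eee; }      /* any element carrying this class */
    .highlight  { background: #ff0; }            /* styles the inline span */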


Using Python Automation to interact with network devices [Tutorial]

Melisha Dsouza
21 Feb 2019
15 min read
In this tutorial, we will learn new ways to interact with network devices using Python. We will understand how to configure network devices using configuration templates, and also write modular code to ensure high reusability when performing repetitive tasks. We will also see the benefits of parallel processing of tasks and the efficiency that can be gained through multithreading. This Python tutorial has been taken from the second edition of Practical Network Automation. Read it here.

Interacting with network devices

Python is widely used to perform network automation. With its wide set of libraries (such as Netmiko and Paramiko), there are endless possibilities for network device interactions across different vendors. Let us understand one of the most widely used libraries for network interactions: we will be using Netmiko. Python provides a well-documented reference for each of its modules, and, for our module, the documentation can be found at pypi.org.

For installation, all we have to do is go, from the command line, into the folder where python.exe is installed. There is a subfolder in that location called Scripts, which contains two executables, easy_install.exe and pip.exe, either of which can be used to install modules. Installing a library for Python can therefore be done in two ways.

The syntax of easy_install is as follows:

    easy_install <name of module>

For example, to install Netmiko, the following command is run:

    easy_install netmiko

The syntax of pip install is as follows:

    pip install <name of module>

For example:

    pip install netmiko

Here's an example of a simple script to log in to the router (an example IP is 192.168.255.249, with a username and password of cisco) and show the version:

    from netmiko import ConnectHandler

    device = ConnectHandler(device_type='cisco_ios',
                            ip='192.168.255.249',
                            username='cisco',
                            password='cisco')
    output = device.send_command("show version")
    print(output)
    device.disconnect()

Running this script against a router prints the full show version output. As we can see in the sample code, we call the ConnectHandler function from the Netmiko library, which takes four inputs (platform type, IP address of the device, username, and password). Netmiko works with a variety of vendors.
Some of the supported platform types, and the abbreviations used to call them in Netmiko, are as follows:

    a10: A10SSH, accedian: AccedianSSH, alcatel_aos: AlcatelAosSSH, alcatel_sros: AlcatelSrosSSH,
    arista_eos: AristaSSH, aruba_os: ArubaSSH, avaya_ers: AvayaErsSSH, avaya_vsp: AvayaVspSSH,
    brocade_fastiron: BrocadeFastironSSH, brocade_netiron: BrocadeNetironSSH, brocade_nos: BrocadeNosSSH,
    brocade_vdx: BrocadeNosSSH, brocade_vyos: VyOSSSH, checkpoint_gaia: CheckPointGaiaSSH,
    ciena_saos: CienaSaosSSH, cisco_asa: CiscoAsaSSH, cisco_ios: CiscoIosBase, cisco_nxos: CiscoNxosSSH,
    cisco_s300: CiscoS300SSH, cisco_tp: CiscoTpTcCeSSH, cisco_wlc: CiscoWlcSSH, cisco_xe: CiscoIosBase,
    cisco_xr: CiscoXrSSH, dell_force10: DellForce10SSH, dell_powerconnect: DellPowerConnectSSH,
    eltex: EltexSSH, enterasys: EnterasysSSH, extreme: ExtremeSSH, extreme_wing: ExtremeWingSSH,
    f5_ltm: F5LtmSSH, fortinet: FortinetSSH, generic_termserver: TerminalServerSSH,
    hp_comware: HPComwareSSH, hp_procurve: HPProcurveSSH, huawei: HuaweiSSH, juniper: JuniperSSH,
    juniper_junos: JuniperSSH, linux: LinuxSSH, mellanox_ssh: MellanoxSSH, mrv_optiswitch: MrvOptiswitchSSH,
    ovs_linux: OvsLinuxSSH, paloalto_panos: PaloAltoPanosSSH, pluribus: PluribusSSH,
    quanta_mesh: QuantaMeshSSH, ubiquiti_edge: UbiquitiEdgeSSH, vyatta_vyos: VyOSSSH, vyos: VyOSSSH

Depending upon the selected platform type, Netmiko understands the returned prompt and the correct way to SSH into the specific device. Once the connection is made, we can send commands to the device using the send_command method. Once we get the return value, the value stored in the output variable is displayed, which is the string output of the command that we sent to the device. The last line, which uses the disconnect function, ensures that the connection is terminated cleanly once we are done with our task.

For configuration (for example, if we need to provide a description to the FastEthernet 0/0 router interface), we use Netmiko as shown in the following example:

    from netmiko import ConnectHandler

    print("Before config push")
    device = ConnectHandler(device_type='cisco_ios',
                            ip='192.168.255.249',
                            username='cisco',
                            password='cisco')
    output = device.send_command("show running-config interface fastEthernet 0/0")
    print(output)

    configcmds = ["interface fastEthernet 0/0", "description my test"]
    device.send_config_set(configcmds)

    print("After config push")
    output = device.send_command("show running-config interface fastEthernet 0/0")
    print(output)
    device.disconnect()

As we can see, for the config push we do not have to perform any additional configuration; we just specify the commands in a list, in the same order as we would send them manually to the router, and pass that list as an argument to the send_config_set function. The output under Before config push is a simple output of the FastEthernet0/0 interface, but the output under After config push now has the description that we configured using the list of commands. In a similar way, we can pass multiple commands to the router, and Netmiko will go into configuration mode, write those commands to the router, and exit config mode. If we want to save the configuration, we use the following command after the send_config_set command:

    device.send_command("write memory")

This ensures that the router writes the newly pushed configuration to memory.
Additionally, for reference purposes across the book, we will be referring to the following GNS3 simulated network: four routers connected to an Ethernet switch, which is in turn connected to the local loopback interface of the computer, providing SSH connectivity to all the routers. We can simulate any type of network device and create a topology based upon our specific requirements in GNS3 for testing and simulation. This also helps in creating complex simulations of any network for testing, troubleshooting, and configuration validation.

The IP address schema used is the following:

- rtr1: 192.168.20.1
- rtr2: 192.168.20.2
- rtr3: 192.168.20.3
- rtr4: 192.168.20.4
- Loopback IP of computer: 192.168.20.5

The credentials used for accessing these devices are the following:

- Username: test
- Password: test

Let us start with the first step: pinging all the routers to confirm their reachability from the computer. The code is as follows:

    import os

    for n in range(1, 5):
        server_ip = "192.168.20.{0}".format(n)
        # os.system returns 0 when the ping command succeeds.
        rep = os.system('ping ' + server_ip)
        if rep == 0:
            print("server is up", server_ip)
        else:
            print("server is down", server_ip)

As we can see in the preceding code, we use range to iterate over the IPs 192.168.20.1 to 192.168.20.4. The server_ip variable in the loop is provided as an input to the ping command, which is executed for the response. The response, stored in the rep variable, is 0 if the router can be reached and 1 if it cannot.

As a next step, to validate whether the routers can successfully respond to SSH, let us fetch the value of uptime from the show version command:

    show version | in uptime

The code is as follows:

    from netmiko import ConnectHandler

    username = 'test'
    password = 'test'

    for n in range(1, 5):
        ip = "192.168.20.{0}".format(n)
        device = ConnectHandler(device_type='cisco_ios',
                                ip=ip,
                                username=username,
                                password=password)
        output = device.send_command("show version | in uptime")
        print(output)
        device.disconnect()

Using Netmiko, we fetched the output of the command from each of the routers and printed the return value. A return value for all the devices confirms SSH reachability, whereas a failure would have raised an exception, causing the code to end abruptly for that particular router.

Network device configuration using templates

With all the routers reachable and accessible through SSH, let us configure a base template that sends syslog messages to a Syslog server and ensures that only informational logs are sent. After configuration, a validation needs to be performed to ensure that logs are actually being sent to the Syslog server. The logging server info is as follows:

- Logging server IP: 192.168.20.5
- Logging port: 514
- Logging protocol: TCP

Additionally, a loopback interface (loopback 30) needs to be configured with the "{rtr} loopback interface" description.
The code lines for the template are as follows:

    logging host 192.168.20.5 transport tcp port 514
    logging trap 6
    interface loopback 30
    description "{rtr} loopback interface"

To validate that the Syslog server is reachable and that the logs sent are informational, use the show logging command. If the output of the command contains the following text, the two conditions are confirmed:

- Trap logging: level informational: confirms that the logs are sent as informational
- Encryption disabled, link up: confirms that the Syslog server is reachable

The code to create the configuration, push it onto the routers, and perform the validation is as follows (note that the validation must test whether the expected text is in output; testing the bare string, as the original listing did, is always true because a non-empty string is truthy):

    from netmiko import ConnectHandler

    template = """logging host 192.168.20.5 transport tcp port 514
    logging trap 6
    interface loopback 30
    description "{rtr} loopback interface\""""

    username = 'test'
    password = 'test'

    for n in range(1, 5):
        ip = "192.168.20.{0}".format(n)
        device = ConnectHandler(device_type='cisco_ios',
                                ip=ip,
                                username=username,
                                password=password)

        # Step 1: fetch the hostname of the router for the template.
        output = device.send_command("show run | in hostname")
        hostname = output.split(" ")[1]
        generatedconfig = template.replace("{rtr}", hostname)

        # Step 2: push the generated config onto the router
        # (send_config_set expects a list of commands).
        device.send_config_set(generatedconfig.split("\n"))

        # Step 3: perform validations.
        print("********")
        print("Performing validation for:", hostname + "\n")
        output = device.send_command("show logging")
        if "encryption disabled, link up" in output:
            print("Syslog is configured and reachable")
        else:
            print("Syslog is NOT configured or NOT reachable")
        if "Trap logging: level informational" in output:
            print("Logging set for informational logs")
        else:
            print("Logging not set for informational logs")
        print("\nLoopback interface status:")
        output = device.send_command(
            "show interfaces description | in loopback interface")
        print(output)
        print("************\n")
        device.disconnect()

Another key aspect of creating network templates is understanding the type of infrastructure device to which the template needs to be applied. As we generate configurations from templates, there are times when we want to save the generated configurations to files instead of pushing them directly to devices. This is needed when we want to validate the configurations, or keep a historic repository of the configurations that are to be applied on the routers. Let us look at the same example; only this time, the configuration will be saved in files instead of being written back directly to the routers. The code to generate the configuration and save it to files is as follows:

    import os
    from netmiko import ConnectHandler

    template = """logging host 192.168.20.5 transport tcp port 514
    logging trap 6
    interface loopback 30
    description "{rtr} loopback interface\""""

    username = 'test'
    password = 'test'

    # Step 1: fetch the hostname of each router for the template.
    for n in range(1, 5):
        ip = "192.168.20.{0}".format(n)
        device = ConnectHandler(device_type='cisco_ios',
                                ip=ip,
                                username=username,
                                password=password)
        output = device.send_command("show run | in hostname")
        hostname = output.split(" ")[1]
        generatedconfig = template.replace("{rtr}", hostname)
        device.disconnect()

        # Step 2: create a config file for each router, ready to be pushed.
        configfile = open(hostname + "_syslog_config.txt", "w")
        configfile.write(generatedconfig)
        configfile.close()

    # Step 3 (validation): read back the file generated for each router
    # (created as <routername>_syslog_config.txt).
    print("Showing contents of generated config files....")
    for file in os.listdir('./'):
        if file.endswith(".txt") and ("syslog_config" in file):
            hostname = file.split("_")[0]
            fileconfig = open(file)
            print("\nShowing contents of " + hostname)
            print(fileconfig.read())
            fileconfig.close()

In a similar fashion to the previous example, the configuration is generated; however, this time, instead of being pushed directly to the routers, it is stored in separate files, with filenames based upon the router names provided as input. In each case, a .txt file is created (for example, rtr1_syslog_config.txt is generated for the rtr1 router during execution of the script). As a final validation step, we read all the .txt files and print the generated configuration from each text file whose name contains syslog_config.

There are times when we have a multi-vendor environment, and manually creating a customized configuration is a difficult task. Let us see an example in which we leverage a library (PySNMP) to fetch details about the devices in the infrastructure using Simple Network Management Protocol (SNMP). For our test, we are using the SNMP community key mytest on the routers to fetch their model/version. The code to get the version and model of a router is as follows:

    # snmp_python.py
    from pysnmp.hlapi import *

    for n in range(1, 3):
        server_ip = "192.168.20.{0}".format(n)
        errorIndication, errorStatus, errorIndex, varBinds = next(
            getCmd(SnmpEngine(),
                   CommunityData('mytest', mpModel=0),
                   UdpTransportTarget((server_ip, 161)),
                   ContextData(),
                   ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
        )
        print("\nFetching stats for...", server_ip)
        for varBind in varBinds:
            print(varBind[1])

As we can see, the SNMP query is performed on a couple of routers (192.168.20.1 and 192.168.20.2) using the standard Management Information Base (MIB) entry sysDescr. The return value of the routers for this MIB request is the make and model of the router and the current OS version it is running. Using SNMP, we can fetch many vital statistics of the infrastructure and can generate configurations based upon the return values. This ensures that we have standard configurations even in a multi-vendor environment. As a sample, let us use the SNMP approach to determine the number of interfaces that a particular router has; based upon the return values, we can dynamically generate a configuration irrespective of the number of interfaces available on the device.
The code to fetch the available interfaces on a router is as follows:

    # snmp_python_interfacestats.py
    from pysnmp.entity.rfc3413.oneliner import cmdgen

    cmdGen = cmdgen.CommandGenerator()

    for n in range(1, 3):
        server_ip = "192.168.20.{0}".format(n)
        print("\nFetching stats for...", server_ip)
        errorIndication, errorStatus, errorIndex, varBindTable = cmdGen.bulkCmd(
            cmdgen.CommunityData('mytest'),
            cmdgen.UdpTransportTarget((server_ip, 161)),
            0, 25,
            '1.3.6.1.2.1.2.2.1.2'  # ifDescr: interface descriptions
        )
        for varBindTableRow in varBindTable:
            for name, val in varBindTableRow:
                print('%s = Interface Name: %s'
                      % (name.prettyPrint(), val.prettyPrint()))

Using an SNMP bulk walk, we query the interfaces on the router. The result of the query is a list that is parsed to fetch the SNMP MIB ID of each interface, along with the description of the interface.

Multithreading

A key focus area while performing operations on multiple devices is how quickly we can perform the actions. To put this into perspective: if each router takes around 10 seconds to log in, gather the output, and log out, and we have around 30 routers that we need to get this information from, we would need 10 * 30 = 300 seconds for the program to complete its execution. If we are looking for more advanced or complex calculations on each output, which might take up to a minute, then it would take 30 minutes for just 30 routers. This becomes very inefficient as our complexity and scalability grow. To help with this, we need to add parallelism to our programs. Let us log in to each of the routers and fetch the show version output using parallel calls (multithreading):

    # parallel_query.py
    from netmiko import ConnectHandler
    from datetime import datetime
    from threading import Thread

    startTime = datetime.now()
    threads = []

    def checkparallel(ip):
        device = ConnectHandler(device_type='cisco_ios',
                                ip=ip,
                                username='test',
                                password='test')
        output = device.send_command("show run | in hostname")
        hostname = output.split(" ")[1]
        print("\nHostname for IP %s is %s" % (ip, hostname))
        device.disconnect()

    for n in range(1, 5):
        ip = "192.168.20.{0}".format(n)
        t = Thread(target=checkparallel, args=(ip,))
        t.start()
        threads.append(t)

    # Wait for all threads to complete.
    for t in threads:
        t.join()

    print("\nTotal execution time:")
    print(datetime.now() - startTime)

Calling the same set of routers in parallel takes approximately 8 seconds to fetch the results.

Summary

In this tutorial, we learned how to interact with network devices through Python and got familiar with Netmiko, an extensively used Python library for network interactions. You also learned how to interact with multiple network devices using a simulated lab in GNS3, and got to know device interaction through SNMP. Additionally, we touched on multithreading, a key component of scalability, through various examples. To learn how to make your network robust by leveraging the power of Python, Ansible, and other network automation tools, check out our book Practical Network Automation - Second Edition.

Read More:
- AWS announces more flexibility in its Certification Exams, drops its exam prerequisites
- Top 10 IT certifications for cloud and networking professionals in 2018
- What matters on an engineering resume? Hacker Rank report says skills, not certifications


Basics of Jupyter Notebook and Python

Packt Editorial Staff
11 Oct 2015
28 min read
In this article by Cyrille Rossant, from his book Learning IPython for Interactive Computing and Data Visualization - Second Edition, we will see how to use the IPython console and the Jupyter Notebook, and we will go through the basics of Python. Originally, IPython provided an enhanced command-line console to run Python code interactively. The Jupyter Notebook is a more recent and more sophisticated alternative to the console. Today, both tools are available, and we recommend that you learn to use both.

Note: The first chapter of the book, Chapter 1, Getting Started with IPython, contains all installation instructions. The main step is to download and install the free Anaconda distribution at https://www.continuum.io/downloads (the 64-bit version of Python 3 for your operating system).

Launching the IPython console

To run the IPython console, type ipython in an OS terminal. There, you can write Python commands and see the results instantly. Here is a screenshot:

The IPython console

The IPython console is most convenient when you have a command-line-based workflow and you want to execute some quick Python commands. You can exit the IPython console by typing exit.

Note: Let's also mention the Qt console, which is similar to the IPython console but offers additional features such as multiline editing, enhanced tab completion, image support, and so on. The Qt console can also be integrated within a graphical application written with Python and Qt. See http://jupyter.org/qtconsole/stable/ for more information.

Launching the Jupyter Notebook

To run the Jupyter Notebook, open an OS terminal, go to ~/minibook/ (or to the directory where you've downloaded the book's notebooks), and type jupyter notebook. This will start the Jupyter server and open a new window in your browser (if that's not the case, go to the following URL: http://localhost:8888). Here is a screenshot of Jupyter's entry point, the Notebook dashboard:

The Notebook dashboard

Note: At the time of writing, the following browsers are officially supported: Chrome 13 and greater, Safari 5 and greater, and Firefox 6 and greater. Other browsers may work also; your mileage may vary.

The Notebook is most convenient when you start a complex analysis project that will involve a substantial amount of interactive experimentation with your code. Other common use cases include keeping track of your interactive session (like a lab notebook), or writing technical documents that involve code, equations, and figures. In the rest of this section, we will focus on the Notebook interface.

Note: Closing the Notebook server. To close the Notebook server, go to the OS terminal where you launched the server and press Ctrl + C. You may need to confirm with y.

The Notebook dashboard

The dashboard contains several tabs, which are as follows:

- Files: shows all files and notebooks in the current directory
- Running: shows all kernels currently running on your computer
- Clusters: lets you launch kernels for parallel computing

A notebook is an interactive document containing code, text, and other elements. A notebook is saved in a file with the .ipynb extension. This file is a plain text file storing a JSON data structure.
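As an illustration, here is a minimal sketch of that JSON structure for the notebook format in use at the time (nbformat 4); a real notebook file typically carries more metadata:

    {
      "cells": [
        {
          "cell_type": "code",
          "execution_count": null,
          "metadata": {},
          "outputs": [],
          "source": ["print(\"Hello world!\")"]
        }
      ],
      "metadata": {},
      "nbformat": 4,
      "nbformat_minor": 0
    }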
A kernel is a process running an interactive session. When using IPython, this kernel is a Python process; there are kernels for many languages other than Python.

Note: We follow the convention of using the term notebook for a file, and Notebook for the application and the web interface.

In Jupyter, notebooks and kernels are strongly separated. A notebook is a file, whereas a kernel is a process. The kernel receives snippets of code from the Notebook interface, executes them, and sends the outputs and possible errors back to the Notebook interface. Thus, in general, the kernel has no notion of the Notebook. A notebook is persistent (it's a file), whereas a kernel may be closed at the end of an interactive session, and is therefore not persistent. When a notebook is re-opened, it needs to be re-executed. In general, no more than one Notebook interface can be connected to a given kernel. However, several IPython consoles can be connected to a given kernel.

The Notebook user interface

To create a new notebook, click on the New button, and select Notebook (Python 3). A new browser tab opens and shows the Notebook interface as follows:

A new notebook

Here are the main components of the interface, from top to bottom:

- The notebook name, which you can change by clicking on it. This is also the name of the .ipynb file.
- The Menu bar gives you access to several actions pertaining to either the notebook or the kernel.
- To the right of the menu bar is the Kernel name. You can change the kernel language of your notebook from the Kernel menu.
- The Toolbar contains icons for common actions. In particular, the dropdown menu showing Code lets you change the type of a cell.
- Following this is the main component of the UI: the actual notebook. It consists of a linear list of cells. We will detail the structure of a cell in the following sections.

Structure of a notebook cell

There are two main types of cells, Markdown cells and code cells, described as follows:

- A Markdown cell contains rich text. In addition to classic formatting options like bold or italics, we can add links, images, HTML elements, LaTeX mathematical equations, and more.
- A code cell contains code to be executed by the kernel. The programming language corresponds to the kernel's language. We will only use Python in this book, but you can use many other languages.

You can change the type of a cell by first clicking on the cell to select it, and then choosing the cell's type in the toolbar's dropdown menu showing Markdown or Code.

Markdown cells

Here is a screenshot of a Markdown cell:

A Markdown cell

The top panel shows the cell in edit mode, while the bottom one shows it in render mode. The edit mode lets you edit the text, while the render mode lets you display the rendered cell. We will explain the differences between these modes in greater detail in the following section.

Code cells

Here is a screenshot of a complex code cell:

Structure of a code cell

This code cell contains several parts, as follows:

- The Prompt number shows the cell's number. This number increases every time you run the cell. Since you can run the cells of a notebook out of order, nothing guarantees that code numbers increase linearly in a given notebook.
- The Input area contains a multiline text editor that lets you write one or several lines of code with syntax highlighting.
- The Widget area may contain graphical controls; here, it displays a slider.
- The Output area can contain multiple outputs, here:
  - Standard output (text in black)
  - Error output (text with a red background)
  - Rich output (an HTML table and an image here)

The Notebook modal interface

The Notebook implements a modal interface similar to some text editors, such as vim. Mastering this interface may represent a small learning curve for some users.

- Use the edit mode to write code (the selected cell has a green border, and a pen icon appears at the top right of the interface). Click inside a cell to enable the edit mode for this cell (you need to double-click with Markdown cells).
- Use the command mode to operate on cells (the selected cell has a gray border, and there is no pen icon). Click outside the text area of a cell to enable the command mode (you can also press the Esc key).

Keyboard shortcuts are available in the Notebook interface. Type h to show them. We review here the most common ones (for Windows and Linux; shortcuts for Mac OS X may be slightly different).

Keyboard shortcuts available in both modes

Here are a few keyboard shortcuts that are always available when a cell is selected:

- Ctrl + Enter: run the cell
- Shift + Enter: run the cell and select the cell below
- Alt + Enter: run the cell and insert a new cell below
- Ctrl + S: save the notebook

Keyboard shortcuts available in the edit mode

In the edit mode, you can type code as usual, and you have access to the following keyboard shortcuts:

- Esc: switch to command mode
- Ctrl + Shift + -: split the cell

Keyboard shortcuts available in the command mode

In the command mode, keystrokes are bound to cell operations. Don't write code in command mode, or unexpected things will happen! For example, typing dd in command mode will delete the selected cell! Here are some keyboard shortcuts available in command mode:

- Enter: switch to edit mode
- Up or k: select the previous cell
- Down or j: select the next cell
- y / m: change the cell type to code cell/Markdown cell
- a / b: insert a new cell above/below the current cell
- x / c / v: cut/copy/paste the current cell
- dd: delete the current cell
- z: undo the last delete operation
- Shift + M: merge the cell below
- h: display the help menu with the list of keyboard shortcuts

Spending some time learning these shortcuts is highly recommended.

References

Here are a few references:

- Main documentation of Jupyter at http://jupyter.readthedocs.org/en/latest/
- Jupyter Notebook interface explained at http://jupyter-notebook.readthedocs.org/en/latest/notebook.html

A crash course on Python

If you don't know Python, read this section to learn the fundamentals. Python is a very accessible language; it is even taught to school children. If you have ever programmed, it will only take you a few minutes to learn the basics.

Hello world

Open a new notebook and type the following in the first cell:

    In [1]: print("Hello world!")
    Out[1]: Hello world!

Here is a screenshot:

"Hello world" in the Notebook

Note: Prompt string. The convention chosen in this article is to show Python code (also called the input) prefixed with In [x]: (which shouldn't be typed). This is the standard IPython prompt. Here, you should just type print("Hello world!") and then press Shift + Enter.

Congratulations! You are now a Python programmer.

Variables

Let's use Python as a calculator:

    In [2]: 2 * 2
    Out[2]: 4

Here, 2 * 2 is an expression statement. This operation is performed, the result is returned, and IPython displays it in the notebook cell's output.
Note: Division. In Python 3, 3 / 2 returns 1.5 (floating-point division), whereas it returns 1 in Python 2 (integer division). This can be a source of errors when porting Python 2 code to Python 3. It is recommended to always use the explicit 3.0 / 2.0 for floating-point division (by using floating-point numbers) and 3 // 2 for integer division. Both syntaxes work in Python 2 and Python 3. See http://python3porting.com/differences.html#integer-division for more details.

Other built-in mathematical operators include +, -, and ** for exponentiation, among others. You will find more details at https://docs.python.org/3/reference/expressions.html#the-power-operator.

Variables form a fundamental concept of any programming language. A variable has a name and a value. Here is how to create a new variable in Python:

    In [3]: a = 2

And here is how to use an existing variable:

    In [4]: a * 3
    Out[4]: 6

Several variables can be defined at once (this is called unpacking):

    In [5]: a, b = 2, 6

There are different types of variables. Here, we have used a number (more precisely, an integer). Other important types include floating-point numbers to represent real numbers, strings to represent text, and booleans to represent True/False values. Here are a few examples:

    In [6]: somefloat = 3.1415
            sometext = 'pi is about'  # You can also use double quotes.
            print(sometext, somefloat)  # Display several variables.
    Out[6]: pi is about 3.1415

Note how we used the # character to write comments. Whereas Python discards the comments completely, adding comments in the code is important when the code is to be read by other humans (including yourself in the future).

String escaping

String escaping refers to the ability to insert special characters in a string. For example, how can you insert ' and ", given that these characters are used to delimit a string in Python code? The backslash is the go-to escape character in Python (and in many other languages too). Here are a few examples:

    In [7]: print("Hello \"world\"")
            print("A list:\n* item 1\n* item 2")
            print("C:\\path\\on\\windows")
            print(r"C:\path\on\windows")
    Out[7]: Hello "world"
            A list:
            * item 1
            * item 2
            C:\path\on\windows
            C:\path\on\windows

The special character \n is the new line (or line feed) character. To insert a backslash, you need to escape it, which explains why it needs to be doubled as \\. You can also disable escaping by using raw literals with an r prefix before the string, as in the last example above. In this case, backslashes are considered normal characters. This is convenient when writing Windows paths, since Windows uses backslash separators instead of the forward slashes used on Unix systems. A very common error on Windows is forgetting to escape backslashes in paths: writing "C:\path" may lead to subtle errors. You will find the list of special characters in Python at https://docs.python.org/3.4/reference/lexical_analysis.html#string-and-bytes-literals.

Lists

A list contains a sequence of items. You can concisely instruct Python to perform repeated actions on the elements of a list. Let's first create a list of numbers as follows:

    In [8]: items = [1, 3, 0, 4, 1]

Note the syntax we used to create the list: square brackets [], and commas , to separate the items.
The built-in function len() returns the number of elements in a list:

In [9]: len(items)
Out[9]: 5

[box type="note" align="alignleft" class="" width=""]Python comes with a set of built-in functions, including print(), len(), max(), functional routines like filter() and map(), and container-related routines like all(), any(), range(), and sorted(). You will find the full list of built-in functions at https://docs.python.org/3.4/library/functions.html.[/box]

Now, let's compute the sum of all elements in the list. Python provides a built-in function for this:

In [10]: sum(items)
Out[10]: 9

We can also access individual elements in the list, using the following syntax:

In [11]: items[0]
Out[11]: 1

In [12]: items[-1]
Out[12]: 1

Note that indexing starts at 0 in Python: the first element of the list is indexed by 0, the second by 1, and so on. Also, -1 refers to the last element, -2, to the penultimate element, and so on. The same syntax can be used to alter elements in the list:

In [13]: items[1] = 9
         items
Out[13]: [1, 9, 0, 4, 1]

We can access sublists with the following syntax:

In [14]: items[1:3]
Out[14]: [9, 0]

Here, 1:3 represents a slice going from element 1 included (this is the second element of the list) to element 3 excluded. Thus, we get a sublist with the second and third element of the original list. The first-included/last-excluded asymmetry leads to an intuitive treatment of overlaps between consecutive slices. Also, note that slicing a list returns a new list (a shallow copy of the selected elements), not a view; changing elements in the sublist does not change them in the original list. (This is different from NumPy arrays, where slicing returns a view on the original data.)

Python provides several other types of containers:

Tuples are immutable and contain a fixed number of elements:

In [15]: my_tuple = (1, 2, 3)
         my_tuple[1]
Out[15]: 2

Dictionaries contain key-value pairs. They are extremely useful and common:

In [16]: my_dict = {'a': 1, 'b': 2, 'c': 3}
         print('a:', my_dict['a'])
Out[16]: a: 1

In [17]: print(my_dict.keys())
Out[17]: dict_keys(['c', 'a', 'b'])

There is no notion of order in a dictionary. However, the built-in collections module provides an OrderedDict structure that keeps the insertion order (see https://docs.python.org/3.4/library/collections.html).

Sets, like mathematical sets, contain distinct elements:

In [18]: my_set = set([1, 2, 3, 2, 1])
         my_set
Out[18]: {1, 2, 3}

A Python object is mutable if its value can change after it has been created. Otherwise, it is immutable. For example, a string is immutable; to change it, a new string needs to be created. A list, a dictionary, or a set is mutable; elements can be added or removed. By contrast, a tuple is immutable, and it is not possible to change the elements it contains without recreating the tuple. See https://docs.python.org/3.4/reference/datamodel.html for more details.

Loops

We can run through all elements of a list using a for loop:

In [19]: for item in items:
             print(item)
Out[19]: 1
         9
         0
         4
         1

There are several things to note here:

The for item in items syntax means that a temporary variable named item is created at every iteration. This variable contains the value of every item in the list, one at a time.
Note the colon : at the end of the for statement. Forgetting it will lead to a syntax error!
The statement print(item) will be executed for all items in the list.
Note the four spaces before print: this is called the indentation. You will find more details about indentation in the next subsection.
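When you also need the index of each element, the idiomatic tool is the built-in enumerate() function (mentioned again in the references at the end of this article). Here is a short sketch of our own, using the same items list (prompts omitted):

for index, item in enumerate(items):
    print(index, item)

This prints each index, starting from 0, alongside the corresponding element, without having to manage a counter variable manually.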
Python supports a concise syntax to perform a given operation on all elements of a list, as follows:

In [20]: squares = [item * item for item in items]
         squares
Out[20]: [1, 81, 0, 16, 1]

This is called a list comprehension. A new list is created here; it contains the squares of all numbers in the list. This concise syntax leads to highly readable and Pythonic code.

Indentation

Indentation refers to the spaces that may appear at the beginning of some lines of code. This is a particular aspect of Python's syntax. In most programming languages, indentation is optional and is generally used to make the code visually clearer. But in Python, indentation also has a syntactic meaning. Particular indentation rules need to be followed for Python code to be correct.

In general, there are two ways to indent some text: by inserting a tab character (also referred to as \t), or by inserting a number of spaces (typically, four). It is recommended to use spaces instead of tab characters. Your text editor should be configured such that the Tab key on the keyboard inserts four spaces instead of a tab character.

In the Notebook, indentation is automatically configured properly; so you shouldn't worry about this issue. The question only arises if you use another text editor for your Python code.

Finally, what is the meaning of indentation? In Python, indentation delimits coherent blocks of code, for example, the contents of a loop, a conditional branch, a function, and other objects. Where other languages such as C or JavaScript use curly braces to delimit such blocks, Python uses indentation.

Conditional branches

Sometimes, you need to perform different operations on your data depending on some condition. For example, let's display all even numbers in our list:

In [21]: for item in items:
             if item % 2 == 0:
                 print(item)
Out[21]: 0
         4

Again, here are several things to note:

An if statement is followed by a boolean expression.
If a and b are two integers, the modulo operator a % b returns the remainder from the division of a by b. Here, item % 2 is 0 for even numbers, and 1 for odd numbers.
The equality is represented by a double equal sign == to avoid confusion with the assignment operator = that we use when we create variables.
Like with the for loop, the if statement ends with a colon :.
The part of the code that is executed when the condition is satisfied follows the if statement. It is indented. Indentation is cumulative: since this if is inside a for loop, there are eight spaces before the print(item) statement.

Python supports a concise syntax to select all elements in a list that satisfy certain properties. Here is how to create a sublist with only even numbers:

In [22]: even = [item for item in items if item % 2 == 0]
         even
Out[22]: [0, 4]

This is also a form of list comprehension.

Functions

Code is typically organized into functions. A function encapsulates part of your code. Functions allow you to reuse bits of functionality without copy-pasting the code. Here is a function that tells whether an integer number is even or not:

In [23]: def is_even(number):
             """Return whether an integer is even or not."""
             return number % 2 == 0

There are several things to note here:

A function is defined with the def keyword.
After def comes the function name. A general convention in Python is to only use lowercase characters, and separate words with an underscore _. A function name generally starts with a verb.
The function name is followed by parentheses, with one or several variable names called the arguments.
These are the inputs of the function. There is a single argument here, named number.
No type is specified for the argument. This is because Python is dynamically typed; you could pass a variable of any type. This function would work fine with floating point numbers, for example (the modulo operation works with floating point numbers in addition to integers).
The body of the function is indented (and note the colon : at the end of the def statement).
There is a docstring wrapped by triple quotes """. This is a particular form of comment that explains what the function does. It is not mandatory, but it is strongly recommended to write docstrings for the functions exposed to the user.
The return keyword in the body of the function specifies the output of the function. Here, the output is a Boolean, obtained from the expression number % 2 == 0. It is possible to return several values; just use a comma to separate them (in this case, a tuple of Booleans would be returned).

Once a function is defined, it can be called like this:

In [24]: is_even(3)
Out[24]: False

In [25]: is_even(4)
Out[25]: True

Here, 3 and 4 are successively passed as arguments to the function.

Positional and keyword arguments

A Python function can accept an arbitrary number of arguments, called positional arguments. It can also accept optional named arguments, called keyword arguments. Here is an example:

In [26]: def remainder(number, divisor=2):
             return number % divisor

The second argument of this function, divisor, is optional. If it is not provided by the caller, it will default to the number 2, as shown here:

In [27]: remainder(5)
Out[27]: 1

There are two equivalent ways of overriding this default value when calling the function. They are as follows:

In [28]: remainder(5, 3)
Out[28]: 2

In [29]: remainder(5, divisor=3)
Out[29]: 2

In the first case, 3 is understood as the second argument, divisor. In the second case, the name of the argument is given explicitly by the caller. This second syntax is clearer and less error-prone than the first one.

Functions can also accept arbitrary sets of positional and keyword arguments, using the following syntax:

In [30]: def f(*args, **kwargs):
             print("Positional arguments:", args)
             print("Keyword arguments:", kwargs)

In [31]: f(1, 2, c=3, d=4)
Out[31]: Positional arguments: (1, 2)
         Keyword arguments: {'c': 3, 'd': 4}

Inside the function, args is a tuple containing positional arguments, and kwargs is a dictionary containing keyword arguments.

Passage by assignment

When passing a parameter to a Python function, a reference to the object is actually passed (passage by assignment):

If the passed object is mutable, it can be modified by the function
If the passed object is immutable, it cannot be modified by the function

Here is an example:

In [32]: my_list = [1, 2]
         def add(some_list, value):
             some_list.append(value)
         add(my_list, 3)
         my_list
Out[32]: [1, 2, 3]

The add() function modifies an object defined outside it (in this case, the object my_list); we say this function has side-effects. A function with no side-effects is called a pure function: it doesn't modify anything in the outer context, and it deterministically returns the same result for any given set of inputs. Pure functions are to be preferred over functions with side-effects. Knowing this can help you spot subtle bugs. There are further related concepts that are useful to know, including function scopes, naming, binding, and more.
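Before moving on to the references, here is a short counterpart sketch of our own (the names are illustrative) showing the immutable case, where rebinding the argument inside the function does not affect the caller's variable (prompts omitted):

def increment(x):
    x = x + 1  # rebinds the local name x to a new integer object
    return x

n = 1
increment(n)   # returns 2...
print(n)       # ...but n is still 1: integers are immutable

Compare this with the add() example above, where the list was modified in place.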
Here are a couple of links:

Passage by reference at https://docs.python.org/3/faq/programming.html#how-do-i-write-a-function-with-output-parameters-call-by-reference
Naming, binding, and scope at https://docs.python.org/3.4/reference/executionmodel.html

Errors

Let's discuss errors in Python. As you learn, you will inevitably come across errors and exceptions. The Python interpreter will, most of the time, tell you what the problem is and where it occurred. It is important to understand the vocabulary used by Python so that you can more quickly find and correct your errors. Let's see the following example:

In [33]: def divide(a, b):
             return a / b

In [34]: divide(1, 0)
Out[34]: ---------------------------------------------------------
         ZeroDivisionError       Traceback (most recent call last)
         <ipython-input-2-b77ebb6ac6f6> in <module>()
         ----> 1 divide(1, 0)

         <ipython-input-1-5c74f9fd7706> in divide(a, b)
               1 def divide(a, b):
         ----> 2     return a / b

         ZeroDivisionError: division by zero

Here, we defined a divide() function, and called it to divide 1 by 0. Dividing a number by 0 is an error in Python. Here, a ZeroDivisionError exception was raised. An exception is a particular type of error that can be raised at any point in a program. It is propagated from the innards of the code up to the command that launched the code. It can be caught and processed at any point. You will find more details about exceptions at https://docs.python.org/3/tutorial/errors.html, and common exception types at https://docs.python.org/3/library/exceptions.html#bltin-exceptions.

The error message you see contains the stack trace, the exception type, and the exception message. The stack trace shows all function calls between the raised exception and the script calling point. The top frame, indicated by the first arrow ---->, shows the entry point of the code execution. Here, it is divide(1, 0), which was called directly in the Notebook. The error occurred while this function was called. The next and last frame is indicated by the second arrow. It corresponds to line 2 in our function divide(a, b). It is the last frame in the stack trace: this means that the error occurred there.

Object-oriented programming

Object-oriented programming (OOP) is a relatively advanced topic. Although we won't use it much in this book, it is useful to know the basics. Also, mastering OOP is often essential when you start to have a large code base.

In Python, everything is an object. A number, a string, or a function is an object. An object is an instance of a type (also known as class). An object has attributes and methods, as specified by its type. An attribute is a variable bound to an object, giving some information about it. A method is a function that applies to the object.

For example, the object 'hello' is an instance of the built-in str type (string). The type() function returns the type of an object, as shown here:

In [35]: type('hello')
Out[35]: str

There are native types, like str or int (integer), and custom types, also called classes, that can be created by the user.

In IPython, you can discover the attributes and methods of any object with the dot syntax and tab completion. For example, typing 'hello'.u and pressing Tab automatically shows us the existence of the upper() method:

In [36]: 'hello'.upper()
Out[36]: 'HELLO'

Here, upper() is a method available to all str objects; it returns an uppercase copy of a string. A useful string method is format().
This simple and convenient templating system lets you generate strings dynamically, as shown in the following example:

In [37]: 'Hello {0:s}!'.format('Python')
Out[37]: 'Hello Python!'

The {0:s} syntax means "replace this with the first argument of format(), which should be a string". The type specifier after the colon is especially useful for numbers, where you can specify how to display the number (for example, .3f to display three decimals). The 0 makes it possible to replace a given value several times in a given string. You can also use a name instead of a position—for example 'Hello {name}!'.format(name='Python').

Some methods are prefixed with an underscore _; they are private and are generally not meant to be used directly. IPython's tab completion won't show you these private attributes and methods unless you explicitly type _ before pressing Tab.

In practice, the most important thing to remember is that appending a dot . to any Python object and pressing Tab in IPython will show you a lot of functionality pertaining to that object.

Functional programming

Python is a multi-paradigm language; it notably supports imperative, object-oriented, and functional programming models. Python functions are objects and can be handled like other objects. In particular, they can be passed as arguments to other functions (also called higher-order functions). This is the essence of functional programming.

Decorators provide a convenient syntax construct to define higher-order functions. Here is an example using the is_even() function from the previous Functions section:

In [38]: def show_output(func):
             def wrapped(*args, **kwargs):
                 output = func(*args, **kwargs)
                 print("The result is:", output)
             return wrapped

The show_output() function transforms an arbitrary function func() to a new function, named wrapped(), that displays the result of the function, as follows:

In [39]: f = show_output(is_even)
         f(3)
Out[39]: The result is: False

Equivalently, this higher-order function can also be used with a decorator, as follows:

In [40]: @show_output
         def square(x):
             return x * x

In [41]: square(3)
Out[41]: The result is: 9

You can find more information about Python decorators at https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Decorators and at http://www.thecodeship.com/patterns/guide-to-python-function-decorators/.

Python 2 and 3

Let's finish this section with a few notes about Python 2 and Python 3 compatibility issues. There are still some Python 2 code and libraries that are not compatible with Python 3. Therefore, it is sometimes useful to be aware of the differences between the two versions. One of the most obvious differences is that print is a statement in Python 2, whereas it is a function in Python 3. Therefore, print "Hello" (without parentheses) works in Python 2 but not in Python 3, while print("Hello") works in both Python 2 and Python 3.
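For instance (a minimal sketch of our own), the __future__ module lets you opt into the Python 3 behavior of print from within Python 2:

# At the top of a module that must run on both Python 2 and 3:
from __future__ import print_function

print("Hello", "world")  # now a function call in both versions

This is one of the portability options reviewed below.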
There are several non-mutually exclusive options to write portable code that works with both versions:

__future__: A built-in module enabling backward-incompatible syntax (such as the print function) from within Python 2
2to3: A built-in Python module to port Python 2 code to Python 3
six: An external lightweight library for writing compatible code

Here are a few references:

Official Python 2/3 wiki page at https://wiki.python.org/moin/Python2orPython3
The Porting to Python 3 book, by CreateSpace Independent Publishing Platform at http://www.python3porting.com/bookindex.html
2to3 at https://docs.python.org/3.4/library/2to3.html
six at https://pythonhosted.org/six/
__future__ at https://docs.python.org/3.4/library/__future__.html

The IPython Cookbook contains an in-depth recipe about choosing between Python 2 and 3, and how to support both.

Going beyond the basics

You now know the fundamentals of Python, the bare minimum that you will need in this book. As you can imagine, there is much more to say about Python. Following are a few further basic concepts that are often useful and that we cannot cover here, unfortunately. You are highly encouraged to have a look at them in the references given at the end of this section:

range and enumerate
pass, break, and continue, to be used in loops
Working with files
Creating and importing modules
The Python standard library, which provides a wide range of functionality (OS, network, file systems, compression, mathematics, and more)

Here are some slightly more advanced concepts that you might find useful if you want to strengthen your Python skills:

Regular expressions for advanced string processing
Lambda functions for defining small anonymous functions
Generators for controlling custom loops
Exceptions for handling errors
with statements for safely handling contexts
Advanced object-oriented programming
Metaprogramming for modifying Python code dynamically
The pickle module for persisting Python objects on disk and exchanging them across a network

Finally, here are a few references:

Getting started with Python: https://www.python.org/about/gettingstarted/
A Python tutorial: https://docs.python.org/3/tutorial/index.html
The Python Standard Library: https://docs.python.org/3/library/index.html
Interactive tutorial: http://www.learnpython.org/
Codecademy Python course: http://www.codecademy.com/tracks/python
Language reference (expert level): https://docs.python.org/3/reference/index.html
Python Cookbook, by David Beazley and Brian K. Jones, O'Reilly Media (advanced level, highly recommended if you want to become a Python expert)

Summary

In this article, we have seen how to launch the IPython console and Jupyter Notebook, the different aspects of the Notebook and its user interface, the structure of the notebook cell, keyboard shortcuts that are available in the Notebook interface, and the basics of Python.

Introduction to Data Analysis and Libraries
Hand Gesture Recognition Using a Kinect Depth Sensor
The strange relationship between objects, functions, generators and coroutines
article-image-opencv-detecting-edges-lines-shapes
Oli Huggins
17 Sep 2015
19 min read
Save for later

OpenCV: Detecting Edges, Lines, and Shapes

Edges play a major role in both human and computer vision. We, as humans, can easily recognize many object types and their positions just by seeing a backlit silhouette or a rough sketch. Indeed, when art emphasizes edges and pose, it often seems to convey the idea of an archetype, such as Rodin's The Thinker or Joe Shuster's Superman. Software, too, can reason about edges, poses, and archetypes. This OpenCV tutorial has been taken from Learning OpenCV 3 Computer Vision with Python. If you want to learn more, click here.

OpenCV provides many edge-finding filters, including Laplacian(), Sobel(), and Scharr(). These filters are supposed to turn non-edge regions to black, while turning edge regions to white or saturated colors. However, they are prone to misidentifying noise as edges. This flaw can be mitigated by blurring an image before trying to find its edges. OpenCV also provides many blurring filters, including blur() (simple average), medianBlur(), and GaussianBlur(). The arguments for the edge-finding and blurring filters vary, but always include ksize, an odd whole number that represents the width and height (in pixels) of the filter's kernel.

For the purpose of blurring, let's use medianBlur(), which is effective in removing digital video noise, especially in color images. For the purpose of edge-finding, let's use Laplacian(), which produces bold edge lines, especially in grayscale images. After applying medianBlur(), but before applying Laplacian(), we should convert the image from BGR to grayscale.

Once we have the result of Laplacian(), we can invert it to get black edges on a white background. Then, we can normalize it (so that its values range from 0 to 1) and multiply it with the source image to darken the edges. Let's implement this approach in filters.py:

def strokeEdges(src, dst, blurKsize = 7, edgeKsize = 5):
    if blurKsize >= 3:
        blurredSrc = cv2.medianBlur(src, blurKsize)
        graySrc = cv2.cvtColor(blurredSrc, cv2.COLOR_BGR2GRAY)
    else:
        graySrc = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
    cv2.Laplacian(graySrc, cv2.CV_8U, graySrc, ksize = edgeKsize)
    normalizedInverseAlpha = (1.0 / 255) * (255 - graySrc)
    channels = cv2.split(src)
    for channel in channels:
        channel[:] = channel * normalizedInverseAlpha
    cv2.merge(channels, dst)

Note that we allow kernel sizes to be specified as arguments to strokeEdges(). The blurKsize argument is used as ksize for medianBlur(), while edgeKsize is used as ksize for Laplacian(). With my webcams, I find that a blurKsize value of 7 and an edgeKsize value of 5 look best. Unfortunately, medianBlur() is expensive with a large ksize, such as 7.

[box type="info" align="" class="" width=""]If you encounter performance problems when running strokeEdges(), try decreasing the blurKsize value. To turn off the blur option, set it to a value less than 3.[/box]

Custom kernels – getting convoluted

As we have just seen, many of OpenCV's predefined filters use a kernel. Remember that a kernel is a set of weights that determine how each output pixel is calculated from a neighborhood of input pixels. Another term for a kernel is a convolution matrix. It mixes up or convolves the pixels in a region. Similarly, a kernel-based filter may be called a convolution filter. OpenCV provides a very versatile function, filter2D(), which applies any kernel or convolution matrix that we specify. To understand how to use this function, let's first learn the format of a convolution matrix. This is a 2D array with an odd number of rows and columns.
The central element corresponds to a pixel of interest and the other elements correspond to this pixel's neighbors. Each element contains an integer or floating point value, which is a weight that gets applied to an input pixel's value. Consider this example:

kernel = numpy.array([[-1, -1, -1],
                      [-1,  9, -1],
                      [-1, -1, -1]])

Here, the pixel of interest has a weight of 9 and its immediate neighbors each have a weight of -1. For the pixel of interest, the output color will be nine times its input color, minus the input colors of all eight adjacent pixels. If the pixel of interest was already a bit different from its neighbors, this difference becomes intensified. The effect is that the image looks sharper as the contrast between neighbors is increased. Continuing our example, we can apply this convolution matrix to a source and destination image, respectively, as follows:

cv2.filter2D(src, -1, kernel, dst)

The second argument specifies the per-channel depth of the destination image (such as cv2.CV_8U for 8 bits per channel). A negative value (as used here) means that the destination image has the same depth as the source image.

[box type="info" align="" class="" width=""]For color images, note that filter2D() applies the kernel equally to each channel. To use different kernels on different channels, we would also have to use the split() and merge() functions.[/box]

Based on this simple example, let's add two classes to filters.py. One class, VConvolutionFilter, will represent a convolution filter in general. A subclass, SharpenFilter, will specifically represent our sharpening filter. Let's edit filters.py to implement these two new classes as follows:

class VConvolutionFilter(object):
    """A filter that applies a convolution to V (or all of BGR)."""

    def __init__(self, kernel):
        self._kernel = kernel

    def apply(self, src, dst):
        """Apply the filter with a BGR or gray source/destination."""
        cv2.filter2D(src, -1, self._kernel, dst)

class SharpenFilter(VConvolutionFilter):
    """A sharpen filter with a 1-pixel radius."""

    def __init__(self):
        kernel = numpy.array([[-1, -1, -1],
                              [-1,  9, -1],
                              [-1, -1, -1]])
        VConvolutionFilter.__init__(self, kernel)

Note that the weights sum up to 1. This should be the case whenever we want to leave the image's overall brightness unchanged. If we modify a sharpening kernel slightly so that its weights sum up to 0 instead, then we have an edge detection kernel that turns edges white and non-edges black. For example, let's add the following edge detection filter to filters.py:

class FindEdgesFilter(VConvolutionFilter):
    """An edge-finding filter with a 1-pixel radius."""

    def __init__(self):
        kernel = numpy.array([[-1, -1, -1],
                              [-1,  8, -1],
                              [-1, -1, -1]])
        VConvolutionFilter.__init__(self, kernel)

Next, let's make a blur filter. Generally, for a blur effect, the weights should sum up to 1 and should be positive throughout the neighborhood. For example, we can take a simple average of the neighborhood as follows:

class BlurFilter(VConvolutionFilter):
    """A blur filter with a 2-pixel radius."""

    def __init__(self):
        kernel = numpy.array([[0.04, 0.04, 0.04, 0.04, 0.04],
                              [0.04, 0.04, 0.04, 0.04, 0.04],
                              [0.04, 0.04, 0.04, 0.04, 0.04],
                              [0.04, 0.04, 0.04, 0.04, 0.04],
                              [0.04, 0.04, 0.04, 0.04, 0.04]])
        VConvolutionFilter.__init__(self, kernel)

Our sharpening, edge detection, and blur filters use kernels that are highly symmetric. Sometimes, though, kernels with less symmetry produce an interesting effect.
Let's consider a kernel that blurs on one side (with positive weights) and sharpens on the other (with negative weights). It will produce a ridged or embossed effect. Here is an implementation that we can add to filters.py:

class EmbossFilter(VConvolutionFilter):
    """An emboss filter with a 1-pixel radius."""

    def __init__(self):
        kernel = numpy.array([[-2, -1, 0],
                              [-1,  1, 1],
                              [ 0,  1, 2]])
        VConvolutionFilter.__init__(self, kernel)

This set of custom convolution filters is very basic. Indeed, it is more basic than OpenCV's ready-made set of filters. However, with a bit of experimentation, you will be able to write your own kernels that produce a unique look.

Modifying an application

Now that we have high-level functions and classes for several filters, it is trivial to apply any of them to the captured frames in Cameo. Let's edit cameo.py; in the following excerpt, the import of filters, the _curveFilter attribute, and the two filter calls in run() are the new additions:

import cv2
import filters
from managers import WindowManager, CaptureManager

class Cameo(object):

    def __init__(self):
        self._windowManager = WindowManager('Cameo', self.onKeypress)
        self._captureManager = CaptureManager(
            cv2.VideoCapture(0), self._windowManager, True)
        self._curveFilter = filters.BGRPortraCurveFilter()

    def run(self):
        """Run the main loop."""
        self._windowManager.createWindow()
        while self._windowManager.isWindowCreated:
            self._captureManager.enterFrame()
            frame = self._captureManager.frame
            filters.strokeEdges(frame, frame)
            self._curveFilter.apply(frame, frame)
            self._captureManager.exitFrame()
            self._windowManager.processEvents()

Here, I have chosen to apply two effects: stroking the edges and emulating Portra film colors. Feel free to modify the code to apply any filters you like. Here is a screenshot from Cameo, with stroked edges and Portra-like colors:

Edge detection with Canny

OpenCV also offers a very handy function called Canny (after the algorithm's inventor, John F. Canny), which is very popular not only because of its effectiveness, but also because of the simplicity of its implementation in an OpenCV program, as it is a one-liner:

import cv2
import numpy as np

img = cv2.imread("../images/statue_small.jpg", 0)
cv2.imwrite("canny.jpg", cv2.Canny(img, 200, 300))
cv2.imshow("canny", cv2.imread("canny.jpg"))
cv2.waitKey()
cv2.destroyAllWindows()

The result is a very clear identification of the edges:

The Canny edge detection algorithm is quite complex but also interesting: it's a five-step process that denoises the image with a Gaussian filter, calculates gradients, applies non-maximum suppression (NMS) on edges, applies a double threshold on all the detected edges to eliminate false positives, and, lastly, analyzes all the edges and their connection to each other to keep the real edges and discard weaker ones.

Contours detection

Another vital task in computer vision is contour detection, not only because of the obvious aspect of detecting contours of subjects contained in an image or video frame, but because of the derivative operations connected with identifying contours. These operations are, namely, computing bounding polygons, approximating shapes, and, generally, calculating regions of interest, which considerably simplifies the interaction with image data. This is because a rectangular region with numpy is easily defined with an array slice. We will be using this technique a lot when exploring the concept of object detection (including faces) and object tracking.
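Since regions of interest come up constantly in what follows, here is a brief hedged sketch (the file name and coordinates are purely illustrative) of how a rectangular region is just a numpy slice:

import cv2

img = cv2.imread('scene.jpg')    # hypothetical input image
x, y, w, h = 50, 50, 100, 100    # illustrative rectangle
roi = img[y:y+h, x:x+w]          # rows (y) first, then columns (x)
cv2.imwrite('roi.jpg', roi)      # the slice behaves like a regular image

Note that a numpy slice shares memory with img, so drawing into roi also marks the original image.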
Let's go in order and familiarize ourselves with the API first with an example:

import cv2
import numpy as np

img = np.zeros((200, 200), dtype=np.uint8)
img[50:150, 50:150] = 255

ret, thresh = cv2.threshold(img, 127, 255, 0)
image, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE,
                                              cv2.CHAIN_APPROX_SIMPLE)
color = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
img = cv2.drawContours(color, contours, -1, (0, 255, 0), 2)
cv2.imshow("contours", color)
cv2.waitKey()
cv2.destroyAllWindows()

Firstly, we create an empty black image that is 200x200 pixels in size. Then, we place a white square in the center of it, utilizing ndarray's ability to assign values for a slice. We then threshold the image, and call the findContours() function. This function takes three parameters: the input image, hierarchy type, and the contour approximation method. There are a number of aspects of particular interest about this function:

The function modifies the input image, so it would be advisable to use a copy of the original image (for example, by passing img.copy()).
The hierarchy tree returned by the function is quite important: cv2.RETR_TREE will retrieve the entire hierarchy of contours in the image, enabling you to establish "relationships" between contours. If you only want to retrieve the most external contours, use cv2.RETR_EXTERNAL. This is particularly useful when you want to eliminate contours that are entirely contained in other contours (for example, in a vast majority of cases, you won't need to detect an object within another object of the same type).

The findContours function returns three elements: the modified image, contours, and their hierarchy. We use the contours to draw on the color version of the image (so we can draw contours in green) and eventually display it. The result is a white square, with its contour drawn in green. Spartan, but effective in demonstrating the concept! Let's move on to more meaningful examples.

Contours – bounding box, minimum area rectangle and minimum enclosing circle

Finding the contours of a square is a simple task; irregular, skewed, and rotated shapes bring the best out of the cv2.findContours utility function of OpenCV. Let's take a look at the following image:

In a real-life application, we would be most interested in determining the bounding box of the subject, its minimum enclosing rectangle, and circle.
The cv2.findContours function in conjunction with another few OpenCV utilities makes this very easy to accomplish:

import cv2
import numpy as np

img = cv2.pyrDown(cv2.imread("hammer.jpg", cv2.IMREAD_UNCHANGED))

ret, thresh = cv2.threshold(cv2.cvtColor(img.copy(), cv2.COLOR_BGR2GRAY),
                            127, 255, cv2.THRESH_BINARY)
image, contours, hier = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    # find bounding box coordinates
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)

    # find minimum area
    rect = cv2.minAreaRect(c)
    # calculate coordinates of the minimum area rectangle
    box = cv2.boxPoints(rect)
    # normalize coordinates to integers
    box = np.int0(box)
    # draw contours
    cv2.drawContours(img, [box], 0, (0, 0, 255), 3)

    # calculate center and radius of minimum enclosing circle
    (x, y), radius = cv2.minEnclosingCircle(c)
    # cast to integers
    center = (int(x), int(y))
    radius = int(radius)
    # draw the circle
    img = cv2.circle(img, center, radius, (0, 255, 0), 2)

cv2.drawContours(img, contours, -1, (255, 0, 0), 1)
cv2.imshow("contours", img)

After the initial imports, we load the image, and then apply a binary threshold on a grayscale version of the original image. By doing this, we operate all find-contours calculations on a grayscale copy, but we draw on the original so that we can utilize color information. Firstly, let's calculate a simple bounding box:

x, y, w, h = cv2.boundingRect(c)

This is a pretty straightforward conversion of contour information to the x and y coordinates, plus the height and width of the rectangle. Drawing this rectangle is an easy task:

cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)

Secondly, let's calculate the minimum area enclosing the subject:

rect = cv2.minAreaRect(c)
box = cv2.boxPoints(rect)
box = np.int0(box)

The mechanism here is particularly interesting: OpenCV does not have a function to calculate the coordinates of the minimum rectangle vertexes directly from the contour information. Instead, we calculate the minimum rectangle area, and then calculate the vertexes of this rectangle. Note that the calculated vertexes are floats, but pixels are accessed with integers (you can't access a "portion" of a pixel), so we'll need to operate this conversion. Next, we draw the box, which gives us the perfect opportunity to introduce the cv2.drawContours function:

cv2.drawContours(img, [box], 0, (0, 0, 255), 3)

Firstly, this function—like all drawing functions—modifies the original image. Secondly, it takes an array of contours in its second parameter so that you can draw a number of contours in a single operation. So, if you have a single set of points representing a contour polygon, you need to wrap this into an array, exactly like we did with our box in the preceding example. The third parameter of this function specifies the index of the contour array that we want to draw: a value of -1 will draw all contours; otherwise, a contour at the specified index in the contour array (the second parameter) will be drawn. Most drawing functions take the color of the drawing and its thickness as the last two parameters.
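As a quick hedged illustration of those last two parameters (the canvas and coordinates are made up for this example), every drawing call follows the same pattern:

import cv2
import numpy as np

canvas = np.zeros((200, 200, 3), dtype=np.uint8)             # black BGR canvas
cv2.line(canvas, (0, 0), (199, 199), (255, 0, 0), 2)         # blue line, 2 px thick
cv2.rectangle(canvas, (20, 20), (120, 120), (0, 255, 0), 3)  # green rectangle, 3 px

The color is a BGR tuple and the thickness is in pixels (a negative thickness fills the shape).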
The last bounding contour we're going to examine is the minimum enclosing circle:

(x, y), radius = cv2.minEnclosingCircle(c)
center = (int(x), int(y))
radius = int(radius)
img = cv2.circle(img, center, radius, (0, 255, 0), 2)

The only peculiarity of the cv2.minEnclosingCircle function is that it returns a two-element tuple, of which, the first element is a tuple itself, representing the coordinates of a circle's center, and the second element is the radius of this circle. After converting all these values to integers, drawing the circle is quite a trivial operation. The final result on the original image looks like this:

Contours – convex contours and the Douglas-Peucker algorithm

Most of the time, when working with contours, subjects will have the most diverse shapes, including convex ones. A shape is convex when there are no two points within it whose connecting line goes outside the perimeter of the shape itself; otherwise, the shape is concave. The first facility OpenCV offers to calculate the approximate bounding polygon of a shape is cv2.approxPolyDP. This function takes three parameters:

A contour.
An "epsilon" value representing the maximum discrepancy between the original contour and the approximated polygon (the lower the value, the closer the approximated value will be to the original contour).
A boolean flag signifying that the polygon is closed.

The epsilon value is of vital importance to obtain a useful contour, so let's understand what it represents. Epsilon is the maximum difference between the approximated polygon's perimeter and the perimeter of the original contour. The lower this difference is, the more the approximated polygon will be similar to the original contour.

You may ask yourself why we need an approximate polygon when we have a contour that is already a precise representation. The answer is that a polygon is a set of straight lines, and the importance of being able to define polygons in a region for further manipulation and processing is paramount in many computer vision tasks.

Now that we know what an epsilon is, we need to obtain contour perimeter information as a reference value; this is obtained with the cv2.arcLength function of OpenCV:

epsilon = 0.01 * cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, epsilon, True)

Effectively, we're instructing OpenCV to calculate an approximated polygon whose perimeter can only differ from the original contour in an epsilon ratio. OpenCV also offers a cv2.convexHull function to obtain processed contour information for convex shapes, and this is a straightforward one-line expression:

hull = cv2.convexHull(cnt)

Let's combine the original contour, approximated polygon contour, and the convex hull in one image to observe the difference. To simplify things, I've applied the contours to a black image so that the original subject is not visible, but its contours are:

As you can see, the convex hull surrounds the entire subject, the approximated polygon is the innermost polygon shape, and in between the two is the original contour, mainly composed of arcs.

Detecting lines and circles

Detecting edges and contours are not only common and important tasks, they also constitute the basis for other—more complex—operations. Lines and shape detection walk hand in hand with edge and contour detection, so let's examine how OpenCV implements these.
The theory behind line and shape detection has its foundations in a technique called the Hough transform, invented by Richard Duda and Peter Hart, who extended (generalized) the work done by Paul Hough in the early 1960s. Let's take a look at OpenCV's API for Hough transforms.

Line detection

First of all, let's detect some lines, which is done with the HoughLines and HoughLinesP functions. The only difference between the two functions is that one uses the standard Hough transform, and the second uses the probabilistic Hough transform (hence the P in the name). The probabilistic version is called as such because it only analyzes a subset of the points and estimates the probability that these points all belong to the same line. This implementation is an optimized version of the standard Hough transform; it's less computationally intensive and executes faster. Let's take a look at a very simple example:

import cv2
import numpy as np

img = cv2.imread('lines.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 120)
minLineLength = 20
maxLineGap = 5
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength, maxLineGap)
for x1, y1, x2, y2 in lines[0]:
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow("edges", edges)
cv2.imshow("lines", img)
cv2.waitKey()
cv2.destroyAllWindows()

The crucial point of this simple script—aside from the HoughLinesP function call—is the setting of the minimum line length (shorter lines will be discarded) and the maximum line gap, which is the maximum size of a gap in a line before the two segments start being considered as separate lines.

Also, note that the HoughLinesP function takes a single-channel binary image, processed through the Canny edge detection filter. Canny is not a strict requirement, but an image that's been denoised and only represents edges is the ideal source for a Hough transform, so you will find this to be a common practice.

The parameters of HoughLinesP are as follows: the image; rho and theta, which refer to the geometrical representation of the lines and are usually 1 and np.pi/180; threshold, which represents the threshold below which a line is discarded; and MinLineLength and MaxLineGap, which we mentioned previously. The Hough transform works with a system of bins and votes, with each bin representing a line, so any line with a minimum of <threshold> votes is retained, and the rest are discarded.

Circle detection

OpenCV also has a function used to detect circles, called HoughCircles. It works in a very similar fashion to HoughLines, but where minLineLength and maxLineGap were the parameters to discard or retain lines, HoughCircles has a minimum distance between the circles' centers and the minimum and maximum radius of the circles.
Here's the obligatory example:

import cv2
import numpy as np

planets = cv2.imread('planet_glow.jpg')
gray_img = cv2.cvtColor(planets, cv2.COLOR_BGR2GRAY)
img = cv2.medianBlur(gray_img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 120,
                           param1=100, param2=30, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))

for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(planets, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(planets, (i[0], i[1]), 2, (0, 0, 255), 3)

cv2.imwrite("planets_circles.jpg", planets)
cv2.imshow("HoughCircles", planets)
cv2.waitKey()
cv2.destroyAllWindows()

Here's a visual representation of the result:

Detecting shapes

The detection of shapes using the Hough transform is limited to circles; however, we've already implicitly explored the detection of shapes of any kind, specifically, when we talked about approxPolyDP. This function allows the approximation of polygons, so if your image contains polygons, they will be quite accurately detected by combining the usage of cv2.findContours and cv2.approxPolyDP.

Summary

At this point, you should have gained a good understanding of color spaces, the Fourier transform, and several kinds of filters made available by OpenCV to process images. You should also be proficient in detecting edges, lines, circles, and shapes in general; additionally, you should be able to find contours and exploit the information they provide about the subjects contained in an image. These concepts will serve as the ideal background to explore the topics in the next chapter, Image Segmentation and Depth Estimation.

Further resources on this subject:

OpenCV: Basic Image Processing
OpenCV: Camera Calibration
OpenCV: Tracking Faces with Haar Cascades
article-image-binary-search-tree-tutorial
Pavan Ramchandani
25 Jul 2018
19 min read
Save for later

Build a C++ Binary search tree [Tutorial]

A binary tree is a hierarchical data structure whose behavior is similar to a tree, as it contains a root and leaves (nodes that have no children). The root of a binary tree is the topmost node. Each node can have at most two children, which are referred to as the left child and the right child. A node that has at least one child becomes a parent of its child. A node that has no child is a leaf. In this tutorial, you will learn about the binary tree data structure, its principles, and strategies for applying it to various applications. This C++ tutorial has been taken from C++ Data Structures and Algorithms. Read more here.

Take a look at the following binary tree:

From the preceding binary tree diagram, we can conclude the following:

The root of the tree is the node of element 1 since it's the topmost node
The children of element 1 are element 2 and element 3
The parent of elements 2 and 3 is 1
There are four leaves in the tree, and they are element 4, element 5, element 6, and element 7 since they have no child

This hierarchical data structure is usually used to store information that forms a hierarchy, such as a file system of a computer.

Building a binary search tree ADT

A binary search tree (BST) is a sorted binary tree, where we can easily search for any key using the binary search algorithm. For the BST to be sorted, it has to have the following properties:

The node's left subtree contains only keys that are smaller than the node's key
The node's right subtree contains only keys that are greater than the node's key
You cannot duplicate the node's key value

By having the preceding properties, we can easily search for a key value as well as find the maximum or minimum key value. Suppose we have the following BST:

As we can see in the preceding tree diagram, it has been sorted since all of the keys in the root's left subtree are smaller than the root's key, and all of the keys in the root's right subtree are greater than the root's key. The preceding BST is a balanced BST since it has a balanced left and right subtree. We also can define the preceding BST as a balanced BST since both the left and right subtrees have an equal height (we are going to discuss this further in the upcoming section).

However, since we have to put the greater new key in the right subtree and the smaller new key in the left subtree, we might find an unbalanced BST, called a skewed left or a skewed right BST. Please see the following diagram:

The preceding image is a sample of a skewed left BST, since there's no right subtree. Also, we can find a BST that has no left subtree, which is called a skewed right BST, as shown in the following diagram:

As we can see in the two skewed BST diagrams, the height of the BST becomes greater since the height equals N - 1 (where N is the total number of keys in the BST), which is five. Comparing this with the balanced BST, the root's height is only three.

To create a BST in C++, we need to modify our TreeNode class from the preceding binary tree discussion, Building a binary tree ADT. We need to add the Parent property so that we can track the parent of each node. It will make things easier for us when we traverse the tree. The class should be as follows:

class BSTNode
{
public:
    int Key;
    BSTNode * Left;
    BSTNode * Right;
    BSTNode * Parent;
};

There are several basic operations which the BST usually has, and they are as follows:

Insert() is used to add a new node to the current BST. If it's the first time we have added a node, the node we inserted will be a root node.
PrintTreeInOrder() is used to print all of the keys in the BST, sorted from the smallest key to the greatest key.
Search() is used to find a given key in the BST. If the key exists it returns TRUE, otherwise it returns FALSE.
FindMin() and FindMax() are used to find the minimum key and the maximum key that exist in the BST.
Successor() and Predecessor() are used to find the successor and predecessor of a given key. We are going to discuss these later in the upcoming section.
Remove() is used to remove a given key from the BST.

Now, let's discuss these BST operations further.

Inserting a new key into a BST

Inserting a key into the BST is actually adding a new node based on the behavior of the BST. Each time we want to insert a key, we have to compare it with the root node (if there's no root beforehand, the inserted key becomes a root) and check whether it's smaller or greater than the root's key. If the given key is greater than the currently selected node's key, then go to the right subtree. Otherwise, go to the left subtree if the given key is smaller than the currently selected node's key. Keep checking this until there's a node with no child so that we can add a new node there. The following is the implementation of the Insert() operation in C++:

BSTNode * BST::Insert(BSTNode * node, int key)
{
    // If the BST doesn't exist,
    // create a new node as root;
    // this branch is also reached when
    // there's no child node left,
    // so we can insert a new node here
    if(node == NULL)
    {
        node = new BSTNode;
        node->Key = key;
        node->Left = NULL;
        node->Right = NULL;
        node->Parent = NULL;
    }
    // If the given key is greater than
    // node's key then go to right subtree
    else if(node->Key < key)
    {
        node->Right = Insert(node->Right, key);
        node->Right->Parent = node;
    }
    // If the given key is smaller than
    // node's key then go to left subtree
    else
    {
        node->Left = Insert(node->Left, key);
        node->Left->Parent = node;
    }

    return node;
}

As we can see in the preceding code, we need to pass the selected node and a new key to the function. However, we will always pass the root node as the selected node when performing the Insert() operation, so we can invoke the preceding code with the following Insert() function:

void BST::Insert(int key)
{
    // Invoking Insert() function
    // and passing root node and given key
    root = Insert(root, key);
}

Based on the implementation of the Insert() operation, we can see that the time complexity to insert a new key into the BST is O(h), where h is the height of the BST. However, if we insert a new key into a non-existing BST, the time complexity will be O(1), which is the best case scenario. And, if we insert a new key into a skewed tree, the time complexity will be O(N), where N is the total number of keys in the BST, which is the worst case scenario.

Traversing a BST in order

We have successfully created a new BST and can insert a new key into it. Now, we need to implement the PrintTreeInOrder() operation, which will traverse the BST in order from the smallest key to the greatest key. To achieve this, we will go to the leftmost node and then to the rightmost node.
The code should be as follows:

void BST::PrintTreeInOrder(BSTNode * node)
{
    // Stop printing if no node found
    if(node == NULL)
        return;

    // Get the smallest key first
    // which is in the left subtree
    PrintTreeInOrder(node->Left);

    // Print the key
    std::cout << node->Key << " ";

    // Continue to the greatest key
    // which is in the right subtree
    PrintTreeInOrder(node->Right);
}

Since we will always traverse from the root node, we can invoke the preceding code as follows:

void BST::PrintTreeInOrder()
{
    // Traverse the BST
    // from root node
    // then print all keys
    PrintTreeInOrder(root);
    std::cout << std::endl;
}

The time complexity of the PrintTreeInOrder() function will be O(N), where N is the total number of keys for both the best and the worst cases since it will always traverse to all keys.

Finding out whether a key exists in a BST

Suppose we have a BST and need to find out if a key exists in the BST. It's quite easy to check whether a given key exists in a BST, since we just need to compare the given key with the current node. If the key is smaller than the current node's key, we go to the left subtree, otherwise we go to the right subtree. We will do this until we find the key or when there are no more nodes to find. The implementation of the Search() operation should be as follows:

BSTNode * BST::Search(BSTNode * node, int key)
{
    // The given key is
    // not found in BST
    if (node == NULL)
        return NULL;
    // The given key is found
    else if(node->Key == key)
        return node;
    // The given key is greater than
    // current node's key
    else if(node->Key < key)
        return Search(node->Right, key);
    // The given key is smaller than
    // current node's key
    else
        return Search(node->Left, key);
}

Since we will always search for a key from the root node, we can create another Search() function as follows:

bool BST::Search(int key)
{
    // Invoking Search() operation
    // and passing root node
    BSTNode * result = Search(root, key);

    // If key is found, returns TRUE
    // otherwise returns FALSE
    return result == NULL ? false : true;
}

The time complexity to find a key in the BST is O(h), where h is the height of the BST. If we find a key which lies in the root node, the time complexity will be O(1), which is the best case. If we search for a key in a skewed tree, the time complexity will be O(N), where N is the total number of keys in the BST, which is the worst case.

Retrieving the minimum and maximum key values

Finding out the minimum and maximum key values in a BST is also quite simple. To get the minimum key value, we just need to go to the leftmost node and get the key value. On the contrary, we just need to go to the rightmost node and we will find the maximum key value. The following is the implementation of the FindMin() operation to retrieve the minimum key value, and the FindMax() operation to retrieve the maximum key value:

int BST::FindMin(BSTNode * node)
{
    if(node == NULL)
        return -1;
    else if(node->Left == NULL)
        return node->Key;
    else
        return FindMin(node->Left);
}

int BST::FindMax(BSTNode * node)
{
    if(node == NULL)
        return -1;
    else if(node->Right == NULL)
        return node->Key;
    else
        return FindMax(node->Right);
}

We return -1 if we cannot find the minimum or maximum value in the tree, since we assume that the tree can only have positive integers. If we intend to store negative integers as well, we need to modify the function's implementation, for instance, by returning NULL if no minimum or maximum values are found.
As usual, we will always find the minimum and maximum key values from the root node, so we can invoke the preceding operations as follows:

int BST::FindMin()
{
    return FindMin(root);
}

int BST::FindMax()
{
    return FindMax(root);
}

Similar to the Search() operation, the time complexity of the FindMin() and FindMax() operations is O(h), where h is the height of the BST. However, if we find the maximum key value in a skewed left BST, the time complexity will be O(1), which is the best case, since it doesn't have any right subtree. This also happens if we find the minimum key value in a skewed right BST. The worst case will appear if we try to find the minimum key value in a skewed left BST or try to find the maximum key value in a skewed right BST, since the time complexity will be O(N).

Finding out the successor of a key in a BST

Other properties that we can find from a BST are the successor and the predecessor. We are going to create two functions named Successor() and Predecessor() in C++. But before we create the code, let's discuss how to find out the successor and the predecessor of a key of a BST. In this section, we are going to learn about the successor first, and then we will discuss the predecessor in the upcoming section.

There are three rules to find out the successor of a key of a BST. Suppose we have a key, k, that we have searched for using the previous Search() function. We will also use our preceding BST to find out the successor of a specific key. The successor of k can be found as follows:

If k has a right subtree, the successor of k will be the minimum integer in the right subtree of k. From our preceding BST, if k = 31, Successor(31) will give us 53 since it's the minimum integer in the right subtree of 31. Please take a look at the following diagram:

If k does not have a right subtree, we have to traverse the ancestors of k until we find the first node, n, which is greater than node k. After we find node n, we will see that node k is the maximum element in the left subtree of n. From our preceding BST, if k = 15, Successor(15) will give us 23 since it's the first greater ancestor compared with 15. Please take a look at the following diagram:

If k is the maximum integer in the BST, there's no successor of k. From the preceding BST, if we run Successor(88), we will get -1, which means no successor has been found, since 88 is the maximum key of the BST.

Based on our preceding discussion about how to find out the successor of a given key in a BST, we can create a Successor() function in C++ with the following implementation:

int BST::Successor(BSTNode * node)
{
    // The successor is the minimum key value
    // of right subtree
    if (node->Right != NULL)
    {
        return FindMin(node->Right);
    }
    // If no right subtree exists
    else
    {
        BSTNode * parentNode = node->Parent;
        BSTNode * currentNode = node;

        // If currentNode is not root and
        // currentNode is its parent's right child,
        // continue moving up
        while ((parentNode != NULL) &&
               (currentNode == parentNode->Right))
        {
            currentNode = parentNode;
            parentNode = currentNode->Parent;
        }

        // If parentNode is not NULL
        // then the key of parentNode is
        // the successor of node
        return parentNode == NULL ? -1 : parentNode->Key;
    }
}

However, since we have to find a given key's node first, we have to run Search() prior to invoking the preceding Successor() function.
The complete code for searching for the successor of a given key in a BST is as follows:

int BST::Successor(int key)
{
    // Search the key's node first
    BSTNode * keyNode = Search(root, key);

    // Return the successor's key.
    // If the key is not found or
    // the successor is not found,
    // return -1
    return keyNode == NULL ? -1 : Successor(keyNode);
}

From our preceding Successor() operation, we can say that the average time complexity of running the operation is O(h), where h is the height of the BST. However, if we try to find the successor of the maximum key in a skewed right BST, the time complexity of the operation is O(N), which is the worst-case scenario.

Finding out the predecessor of a key in a BST

The rules for finding the predecessor of a key, k, mirror those for the successor:

If k has a left subtree, the predecessor of k is the maximum integer in the left subtree of k. From our preceding BST, if k = 12, Predecessor(12) will be 7, since it's the maximum integer in the left subtree of 12.

If k does not have a left subtree, we have to traverse the ancestors of k until we find the first node, n, that is lower than node k. After we find node n, we will see that node n is the minimum element of the traversed elements. From our preceding BST, if k = 29, Predecessor(29) will give us 23, since 23 is the first ancestor lower than 29.

If k is the minimum integer in the BST, there is no predecessor of k. From the preceding BST, if we run Predecessor(3), we will get -1, which means no predecessor is found, since 3 is the minimum key of the BST.

Now, we can implement the Predecessor() operation in C++ as follows:

int BST::Predecessor(BSTNode * node)
{
    // The predecessor is the maximum key value
    // of the left subtree
    if (node->Left != NULL)
    {
        return FindMax(node->Left);
    }
    // If there is no left subtree
    else
    {
        BSTNode * parentNode = node->Parent;
        BSTNode * currentNode = node;

        // If currentNode is not root and
        // currentNode is its parent's left child,
        // continue moving up
        while ((parentNode != NULL) &&
               (currentNode == parentNode->Left))
        {
            currentNode = parentNode;
            parentNode = currentNode->Parent;
        }

        // If parentNode is not NULL,
        // then the key of parentNode is
        // the predecessor of node
        return parentNode == NULL ? -1 : parentNode->Key;
    }
}

And, similar to the Successor() operation, we have to search for the node of the given key prior to invoking the preceding Predecessor() function. The complete code for searching for the predecessor of a given key in a BST is as follows:

int BST::Predecessor(int key)
{
    // Search the key's node first
    BSTNode * keyNode = Search(root, key);

    // Return the predecessor's key.
    // If the key is not found or
    // the predecessor is not found,
    // return -1
    return keyNode == NULL ? -1 : Predecessor(keyNode);
}

Similar to our preceding Successor() operation, the time complexity of running the Predecessor() operation is O(h), where h is the height of the BST. However, if we try to find the predecessor of the minimum key in a skewed left BST, the time complexity of the operation is O(N), which is the worst-case scenario.

Removing a node based on a given key

The last operation in the BST that we are going to discuss is removing a node based on a given key. We will create a Remove() operation in C++. There are three possible cases for removing a node from a BST, and they are as follows:

Removing a leaf (a node that doesn't have any children). In this case, we just need to remove the node. From our preceding BST, we can remove keys 7, 15, 29, and 53, since they are leaves with no children.
Removing a node that has only one child (either a left or a right child). In this case, we have to connect the child to the parent of the node. After that, we can remove the target node safely. As an example, if we want to remove node 3, we have to point the Parent pointer of node 7 to node 12 and make the left pointer of node 12 point to node 7. Then, we can safely remove node 3.

Removing a node that has two children (left and right children). In this case, we have to find the successor (or predecessor) of the node's key. After that, we can replace the target node with the successor (or predecessor) node. Suppose we want to remove node 31, and we take 53 as its successor. Then, we can remove node 31 and replace it with node 53. Now, node 53 will have two children: node 29 on the left and node 88 on the right.

Also, similar to the Search() operation, if the target node doesn't exist, we just need to return NULL. The implementation of the Remove() operation in C++ is as follows:

BSTNode * BST::Remove(BSTNode * node, int key)
{
    // The given node is
    // not found in BST
    if (node == NULL)
        return NULL;

    // Target node is found
    if (node->Key == key)
    {
        // If the node is a leaf node,
        // it can be safely removed
        if (node->Left == NULL && node->Right == NULL)
            node = NULL;
        // The node has only one child, at the right
        else if (node->Left == NULL && node->Right != NULL)
        {
            // The only child will be connected to
            // the parent of node directly
            node->Right->Parent = node->Parent;

            // Bypass node
            node = node->Right;
        }
        // The node has only one child, at the left
        else if (node->Left != NULL && node->Right == NULL)
        {
            // The only child will be connected to
            // the parent of node directly
            node->Left->Parent = node->Parent;

            // Bypass node
            node = node->Left;
        }
        // The node has two children (left and right)
        else
        {
            // Find the successor to replace
            // the target node's key
            int successorKey = Successor(key);

            // Replace node's key with successor's key
            node->Key = successorKey;

            // Delete the old successor's key
            node->Right = Remove(node->Right, successorKey);
        }
    }
    // Target node's key is smaller than
    // the given key, so search to the right
    else if (node->Key < key)
        node->Right = Remove(node->Right, key);
    // Target node's key is greater than
    // the given key, so search to the left
    else
        node->Left = Remove(node->Left, key);

    // Return the updated BST
    return node;
}

Since we will always remove a node starting from the root node, we can simplify the preceding Remove() operation by creating the following one:

void BST::Remove(int key)
{
    root = Remove(root, key);
}

As shown in the preceding Remove() code, the time complexity of the operation is O(1) for both case 1 (the node has no children) and case 2 (the node has only one child). For case 3 (the node has two children), the time complexity is O(h), where h is the height of the BST, since we have to find the successor or predecessor of the node's key. If you found this tutorial useful, do check out the book C++ Data Structures and Algorithms for more useful material on data structures and algorithms with real-world implementations in C++. Working with shaders in C++ to create 3D games Getting Inside a C++ Multithreaded Application Understanding the Dependencies of a C++ Application
How to implement Dynamic SQL in PostgreSQL 10

Amey Varangaonkar
23 Feb 2018
7 min read
In this PostgreSQL tutorial, we'll take a close look at the concept of dynamic SQL, and how it can make the life of database programmers easy by allowing efficient querying of data. This tutorial has been taken from the second edition of Learning PostgreSQL 10. You can read more here. Dynamic SQL is used to reduce repetitive tasks when it comes to querying. For example, one could use dynamic SQL to create table partitioning for a certain table on a daily basis, to add missing indexes on all foreign keys, or to add data auditing capabilities to a certain table without major coding effort. Another important use of dynamic SQL is to overcome the side effects of PL/pgSQL caching, as queries executed using the EXECUTE statement are not cached. Dynamic SQL is achieved via the EXECUTE statement. The EXECUTE statement accepts a string and simply evaluates it. The synopsis to execute a statement is given as follows:

EXECUTE command-string [ INTO [STRICT] target ] [ USING expression [, ...] ];

Executing DDL statements in dynamic SQL

In some cases, one needs to perform operations at the database object level, such as tables, indexes, columns, roles, and so on. For example, a database developer might want to vacuum and analyze a specific schema's objects, which is a common task after deployment in order to update the statistics. For example, to analyze the car_portal_app schema tables, one could write the following script:

DO $$
DECLARE
    table_name text;
BEGIN
    FOR table_name IN SELECT tablename FROM pg_tables WHERE schemaname = 'car_portal_app' LOOP
        RAISE NOTICE 'Analyzing %', table_name;
        EXECUTE 'ANALYZE car_portal_app.' || table_name;
    END LOOP;
END;
$$;

Executing DML statements in dynamic SQL

Some applications might interact with data in an interactive manner. For example, one might have billing data generated on a monthly basis. Also, some applications filter data on different criteria defined by the user. In such cases, dynamic SQL is very convenient. For example, in the car portal application, the search functionality needs to get accounts using a dynamic predicate, as follows:

CREATE OR REPLACE FUNCTION car_portal_app.get_account (predicate TEXT)
RETURNS SETOF car_portal_app.account AS
$$
BEGIN
    RETURN QUERY EXECUTE 'SELECT * FROM car_portal_app.account WHERE ' || predicate;
END;
$$ LANGUAGE plpgsql;

To test the previous function:

car_portal=> SELECT * FROM car_portal_app.get_account('true') limit 1;
 account_id | first_name | last_name |      email      |             password
------------+------------+-----------+-----------------+----------------------------------
          1 | James      | Butt      | jbutt@gmail.com | 1b9ef408e82e38346e6ebebf2dcc5ece
(1 row)

car_portal=> SELECT * FROM car_portal_app.get_account('first_name=''James''');
 account_id | first_name | last_name |      email      |             password
------------+------------+-----------+-----------------+----------------------------------
          1 | James      | Butt      | jbutt@gmail.com | 1b9ef408e82e38346e6ebebf2dcc5ece
(1 row)

Dynamic SQL and the caching effect

As mentioned earlier, PL/pgSQL caches execution plans. This is quite good if the generated plan is expected to be static. For example, the following statement is expected to use an index scan because of selectivity. In this case, caching the plan saves some time and thus increases performance:

SELECT * FROM account WHERE account_id = <INT>

In other scenarios, however, this is not true.
For example, let's assume we have an index on the advertisement_date column and we would like to get the number of advertisements since a certain date, as follows:

SELECT count(*) FROM car_portal_app.advertisement WHERE advertisement_date >= <certain_date>;

In the preceding query, the entries from the advertisement table can be fetched from the hard disk either by using an index scan or a sequential scan, based on selectivity, which depends on the provided certain_date value. Caching the execution plan of such a query can cause serious problems; thus, writing the function as follows is not a good idea:

CREATE OR REPLACE FUNCTION car_portal_app.get_advertisement_count (some_date timestamptz)
RETURNS BIGINT AS
$$
BEGIN
    RETURN (SELECT count(*) FROM car_portal_app.advertisement WHERE advertisement_date >= some_date)::bigint;
END;
$$ LANGUAGE plpgsql;

To solve the caching issue, one could rewrite the previous function either using an SQL language function or by using the PL/pgSQL EXECUTE command, as follows:

CREATE OR REPLACE FUNCTION car_portal_app.get_advertisement_count (some_date timestamptz)
RETURNS BIGINT AS
$$
DECLARE
    count BIGINT;
BEGIN
    EXECUTE 'SELECT count(*) FROM car_portal_app.advertisement WHERE advertisement_date >= $1' USING some_date INTO count;
    RETURN count;
END;
$$ LANGUAGE plpgsql;

Recommended practices for dynamic SQL usage

Dynamic SQL can cause security issues if not handled carefully; dynamic SQL is vulnerable to the SQL injection technique. SQL injection is used to execute SQL statements that reveal secure information, or even to destroy data in a database. A very simple example of a PL/pgSQL function vulnerable to SQL injection is as follows:

CREATE OR REPLACE FUNCTION car_portal_app.can_login (email text, pass text)
RETURNS BOOLEAN AS
$$
DECLARE
    stmt TEXT;
    result bool;
BEGIN
    stmt = 'SELECT COALESCE (count(*)=1, false) FROM car_portal_app.account WHERE email = ''' || $1 || ''' and password = ''' || $2 || '''';
    RAISE NOTICE '%', stmt;
    EXECUTE stmt INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

The preceding function returns true if the email and the password match. To test this function, let's insert a row and try to inject some code, as follows:

car_portal=> SELECT car_portal_app.can_login('jbutt@gmail.com', md5('jbutt@gmail.com'));
NOTICE: SELECT COALESCE (count(*)=1, false) FROM account WHERE email = 'jbutt@gmail.com' and password = '1b9ef408e82e38346e6ebebf2dcc5ece'
 can_login
-----------
 t
(1 row)

car_portal=> SELECT car_portal_app.can_login('jbutt@gmail.com', md5('jbutt@yahoo.com'));
NOTICE: SELECT COALESCE (count(*)=1, false) FROM account WHERE email = 'jbutt@gmail.com' and password = '37eb43e4d439589d274b6f921b1e4a0d'
 can_login
-----------
 f
(1 row)

car_portal=> SELECT car_portal_app.can_login(E'jbutt@gmail.com\'--', 'Do not know password');
NOTICE: SELECT COALESCE (count(*)=1, false) FROM account WHERE email = 'jbutt@gmail.com'--' and password = 'Do not know password'
 can_login
-----------
 t
(1 row)

Notice that the function returns true even when the password does not match the password stored in the table. This is simply because the rest of the predicate was commented out, as shown by the raised notice:

SELECT COALESCE (count(*)=1, false) FROM account WHERE email = 'jbutt@gmail.com'--' and password = 'Do not know password'

To protect code against this technique, one could follow these practices:

For parameterized dynamic SQL statements, use the USING clause.

Use the format() function with appropriate interpolation to construct your queries.
Note that %I escapes the argument as an identifier and %L escapes it as a literal. Use quote_ident(), quote_literal(), and quote_nullable() to properly format your identifiers and literals. One way to rewrite the preceding function safely is as follows:

CREATE OR REPLACE FUNCTION car_portal_app.can_login (email text, pass text)
RETURNS BOOLEAN AS
$$
DECLARE
    stmt TEXT;
    result bool;
BEGIN
    stmt = format('SELECT COALESCE (count(*)=1, false) FROM car_portal_app.account WHERE email = %L and password = %L', $1, $2);
    RAISE NOTICE '%', stmt;
    EXECUTE stmt INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

We saw how dynamic SQL is used to build and execute queries on the fly. Unlike a static SQL statement, a dynamic SQL statement's full text is unknown and can change between successive executions. These queries can be DDL, DCL, and/or DML statements. If you found this article useful, make sure to check out the book Learning PostgreSQL 10, to learn the fundamentals of PostgreSQL 10.
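The same parameterization principle carries over to application code in any language. As a quick, hedged illustration using Python's built-in sqlite3 module (the table and credentials are invented for the demo, and sqlite3 merely stands in for a real database driver), bound parameters keep attacker-controlled input from being parsed as SQL:

import sqlite3

# In-memory toy database standing in for the account table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (email TEXT, password TEXT)")
conn.execute("INSERT INTO account VALUES ('jbutt@gmail.com', 'secret-hash')")

def can_login_unsafe(email, password):
    # Vulnerable: string concatenation, like the first can_login()
    stmt = ("SELECT count(*) = 1 FROM account WHERE email = '" + email +
            "' AND password = '" + password + "'")
    return bool(conn.execute(stmt).fetchone()[0])

def can_login_safe(email, password):
    # Safe: '?' placeholders play the role of USING/%L;
    # the values are bound, never parsed as SQL
    stmt = "SELECT count(*) = 1 FROM account WHERE email = ? AND password = ?"
    return bool(conn.execute(stmt, (email, password)).fetchone()[0])

print(can_login_unsafe("jbutt@gmail.com'--", "wrong"))  # True: injected!
print(can_login_safe("jbutt@gmail.com'--", "wrong"))    # False: rejected

Drivers for PostgreSQL, such as psycopg2, expose the same bound-parameter pattern, so the USING-clause discipline shown above applies end to end.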
4 ways to implement feature selection in Python for machine learning

Sugandha Lahoti
16 Feb 2018
13 min read
This article is an excerpt from Ensemble Machine Learning. This book serves as a beginner's guide to combining powerful machine learning algorithms to build optimized models. In this article, we will look at different methods to select features from the dataset, and discuss types of feature selection algorithms with their implementation in Python using the Scikit-learn (sklearn) library:

Univariate selection
Recursive Feature Elimination (RFE)
Principal Component Analysis (PCA)
Choosing important features (feature importance)

We explain the first three algorithms and their implementation briefly. Further on, we will discuss choosing important features (feature importance) in detail, as it is a widely used technique in the data science community.

Univariate selection

Statistical tests can be used to select those features that have the strongest relationships with the output variable. The scikit-learn library provides the SelectKBest class, which can be used with a suite of different statistical tests to select a specific number of features. The following example uses the chi-squared (chi^2) statistical test for non-negative features to select four of the best features from the Pima Indians onset of diabetes dataset:

#Feature Extraction with Univariate Statistical Tests (Chi-squared for classification)
#Import the required packages
#Import pandas to read csv
import pandas
#Import numpy for array related operations
import numpy
#Import sklearn's feature selection algorithm
from sklearn.feature_selection import SelectKBest
#Import chi2 for performing chi square test
from sklearn.feature_selection import chi2

#URL for loading the dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
#Define the attribute names
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
#Create pandas data frame by loading the data from URL
dataframe = pandas.read_csv(url, names=names)
#Create array from data values
array = dataframe.values
#Split the data into input and target
X = array[:,0:8]
Y = array[:,8]
#We will select the features using the chi square test
test = SelectKBest(score_func=chi2, k=4)
#Fit the function for ranking the features by score
fit = test.fit(X, Y)
#Summarize scores
numpy.set_printoptions(precision=3)
print(fit.scores_)
#Apply the transformation on to dataset
features = fit.transform(X)
#Summarize selected features
print(features[0:5,:])

You can see the scores for each attribute and the four attributes chosen (those with the highest scores): plas, test, mass, and age.

Scores for each feature:
[ 111.52  1411.887  17.605  53.108  2175.565  127.669  5.393  181.304]

Selected Features:
[[148.    0.   33.6  50. ]
 [ 85.    0.   26.6  31. ]
 [183.    0.   23.3  32. ]
 [ 89.   94.   28.1  21. ]
 [137.  168.   43.1  33. ]]

Recursive Feature Elimination

RFE works by recursively removing attributes and building a model on the attributes that remain. It uses model accuracy to identify which attributes (and combinations of attributes) contribute the most to predicting the target attribute. You can learn more about the RFE class in the scikit-learn documentation. The following example uses RFE with the logistic regression algorithm to select the top three features.
The choice of algorithm does not matter too much as long as it is skillful and consistent:

#Import the required packages
#Import pandas to read csv
import pandas
#Import numpy for array related operations
import numpy
#Import sklearn's feature selection algorithm
from sklearn.feature_selection import RFE
#Import LogisticRegression as the base estimator
from sklearn.linear_model import LogisticRegression

#URL for loading the dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
#Define the attribute names
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
#Create pandas data frame by loading the data from URL
dataframe = pandas.read_csv(url, names=names)
#Create array from data values
array = dataframe.values
#Split the data into input and target
X = array[:,0:8]
Y = array[:,8]
#Feature extraction
model = LogisticRegression()
rfe = RFE(model, 3)
fit = rfe.fit(X, Y)
print("Num Features: %d" % fit.n_features_)
print("Selected Features: %s" % fit.support_)
print("Feature Ranking: %s" % fit.ranking_)

After execution, we will get:

Num Features: 3
Selected Features: [ True False False False False  True  True False]
Feature Ranking: [1 2 3 5 6 1 1 4]

You can see that RFE chose the top three features as preg, mass, and pedi. These are marked True in the support_ array and marked with a choice 1 in the ranking_ array.

Principal Component Analysis

PCA uses linear algebra to transform the dataset into a compressed form. Generally, it is considered a data reduction technique. A property of PCA is that you can choose the number of dimensions or principal components in the transformed result. In the following example, we use PCA and select three principal components:

#Import the required packages
#Import pandas to read csv
import pandas
#Import numpy for array related operations
import numpy
#Import sklearn's PCA algorithm
from sklearn.decomposition import PCA

#URL for loading the dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
#Define the attribute names
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
#Create array from data values
array = dataframe.values
#Split the data into input and target
X = array[:,0:8]
Y = array[:,8]
#Feature extraction
pca = PCA(n_components=3)
fit = pca.fit(X)
#Summarize components
print("Explained Variance: %s" % fit.explained_variance_ratio_)
print(fit.components_)

You can see that the transformed dataset (three principal components) bears little resemblance to the source data:

Explained Variance: [ 0.88854663  0.06159078  0.02579012]
[[ -2.02176587e-03   9.78115765e-02   1.60930503e-02   6.07566861e-02
    9.93110844e-01   1.40108085e-02   5.37167919e-04  -3.56474430e-03]
 [ -2.26488861e-02  -9.72210040e-01  -1.41909330e-01   5.78614699e-02
    9.46266913e-02  -4.69729766e-02  -8.16804621e-04  -1.40168181e-01]
 [ -2.24649003e-02   1.43428710e-01  -9.22467192e-01  -3.07013055e-01
    2.09773019e-02  -1.32444542e-01  -6.39983017e-04  -1.25454310e-01]]

Choosing important features (feature importance)

Feature importance is the technique used to select features using a trained supervised classifier. When we train a classifier such as a decision tree, we evaluate each attribute to create splits; we can use this measure as a feature selector. Let's understand it in detail.
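Before the full walkthrough below, here is a minimal, hedged sketch of the idea on synthetic data (the feature names and dataset are invented for illustration): a trained tree exposes one importance score per feature, and features with negligible scores are candidates for removal.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
# Synthetic data: only the first feature actually drives the label
X = rng.rand(500, 4)
y = (X[:, 0] > 0.5).astype(int)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# One importance score per column; the scores sum to 1.0
for name, score in zip(['f0', 'f1', 'f2', 'f3'], tree.feature_importances_):
    print(name, round(score, 3))
# f0 should dominate; f1 to f3 should be near zero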
Random forests are among the most popular machine learning methods thanks to their relatively good accuracy, robustness, and ease of use. They also provide two straightforward methods for feature selection: mean decrease impurity and mean decrease accuracy. A random forest consists of a number of decision trees. Every node in a decision tree is a condition on a single feature, designed to split the dataset into two so that similar response values end up in the same set. The measure based on which the (locally) optimal condition is chosen is known as impurity. For classification, it is typically either the Gini impurity or information gain/entropy, and for regression trees, it is the variance. Thus, when training a tree, we can compute how much each feature decreases the weighted impurity in the tree. For a forest, the impurity decrease from each feature can be averaged, and the features are then ranked according to this measure. Let's see how to do feature selection using a random forest classifier and evaluate the accuracy of the classifier before and after feature selection. We will use the Otto dataset. This dataset is available for free from Kaggle (you will need to sign up to Kaggle to be able to download it). You can download the training dataset, train.csv.zip, from https://www.kaggle.com/c/otto-group-product-classification-challenge/data and place the unzipped train.csv file in your working directory. This dataset describes 93 obfuscated details of more than 61,000 products grouped into 10 product categories (for example, fashion, electronics, and so on). Input attributes are the counts of different events of some kind. The goal is to make predictions for new products as an array of probabilities for each of the 10 categories, and models are evaluated using multiclass logarithmic loss (also called cross entropy). We will start by importing all of the libraries:

#Import the supporting libraries
#Import pandas to load the dataset from csv file
from pandas import read_csv
#Import numpy for array based operations and calculations
import numpy as np
#Import Random Forest classifier class from sklearn
from sklearn.ensemble import RandomForestClassifier
#Import the SelectFromModel feature selector from sklearn
from sklearn.feature_selection import SelectFromModel

np.random.seed(1)

Let's define a method to split our dataset into training and testing data; we will train our model on the training part, and the testing part will be used for evaluation of the trained model:

#Function to create Train and Test set from the original dataset
def getTrainTestData(dataset, split):
    np.random.seed(0)
    training = []
    testing = []
    np.random.shuffle(dataset)
    shape = np.shape(dataset)
    trainlength = np.uint16(np.floor(split * shape[0]))
    for i in range(trainlength):
        training.append(dataset[i])
    for i in range(trainlength, shape[0]):
        testing.append(dataset[i])
    training = np.array(training)
    testing = np.array(testing)
    return training, testing

We also need to add a function to evaluate the accuracy of the model; it will take the predicted and actual output as input to calculate the percentage accuracy:

#Function to evaluate model performance
def getAccuracy(pre, ytest):
    count = 0
    for i in range(len(ytest)):
        if ytest[i] == pre[i]:
            count += 1
    acc = float(count) / len(ytest)
    return acc

This is the time to load the dataset. We will load the train.csv file; this file contains more than 61,000 training instances.
We will use 50,000 instances for our example, of which we will use 35,000 instances to train the classifier and 15,000 instances to test its performance:

#Load dataset as pandas data frame
data = read_csv('train.csv')
#Extract attribute names from the data frame
feat = data.keys()
feat_labels = feat.get_values()
#Extract data values from the data frame
dataset = data.values
#Shuffle the dataset
np.random.shuffle(dataset)
#We will select 50000 instances to train the classifier
inst = 50000
#Extract 50000 instances from the dataset
dataset = dataset[0:inst,:]
#Create Training and Testing data for performance evaluation
train, test = getTrainTestData(dataset, 0.7)
#Split data into input and output variable with selected features
Xtrain = train[:,0:94]
ytrain = train[:,94]
shape = np.shape(Xtrain)
print("Shape of the dataset ", shape)
#Print the size of Data in MBs
print("Size of Data set before feature selection: %.2f MB" % (Xtrain.nbytes / 1e6))

Let's take note of the data size here; our dataset contains about 35,000 training instances with 94 attributes, so it is quite large. Let's see:

Shape of the dataset (35000, 94)
Size of Data set before feature selection: 26.32 MB

As you can see, we have 35,000 rows and 94 columns in our dataset, which is more than 26 MB of data. In the next code block, we will configure our random forest classifier; we will use 250 trees with a maximum depth of 30, and the number of random features will be 7. Other hyperparameters will be sklearn's defaults:

#Let's select the test data for model evaluation purposes
Xtest = test[:,0:94]
ytest = test[:,94]
#Create a random forest classifier with the following parameters
trees = 250
max_feat = 7
max_depth = 30
min_sample = 2
clf = RandomForestClassifier(n_estimators=trees,
                             max_features=max_feat,
                             max_depth=max_depth,
                             min_samples_split=min_sample,
                             random_state=0,
                             n_jobs=-1)
#Train the classifier and calculate the training time
import time
start = time.time()
clf.fit(Xtrain, ytrain)
end = time.time()
#Let's note down the model training time
print("Execution time for building the Tree is: %f" % (float(end) - float(start)))
pre = clf.predict(Xtest)

Let's see how much time is required to train the model on the training dataset:

Execution time for building the Tree is: 2.913641

#Evaluate the model performance for the test data
acc = getAccuracy(pre, ytest)
print("Accuracy of model before feature selection is %.2f" % (100*acc))

The accuracy of our model is:

Accuracy of model before feature selection is 98.82

As you can see, we are getting very good accuracy, as we are classifying almost 99% of the test data into the correct categories. This means we are classifying about 14,823 instances out of 15,000 into the correct classes. So, now my question is: should we go for further improvement? Well, why not? We should definitely go for more improvements if we can; here, we will use feature importance to select features. As you know, in the tree-building process, we use impurity measurement for node selection. The attribute value that has the lowest impurity is chosen as the node in the tree. We can use similar criteria for feature selection. We can give more importance to features that have less impurity, and this can be done using the feature_importances_ attribute of the sklearn library.
Let's find out the importance of each feature:

#Once we have trained the model we will rank all the features
for feature in zip(feat_labels, clf.feature_importances_):
    print(feature)

('id', 0.33346650420175183)
('feat_1', 0.0036186958628801214)
('feat_2', 0.0037243050888530957)
('feat_3', 0.011579217472062748)
('feat_4', 0.010297382675187445)
('feat_5', 0.0010359139416194116)
('feat_6', 0.00038171336038056165)
('feat_7', 0.0024867672489765021)
('feat_8', 0.0096689721610546085)
('feat_9', 0.007906150362995093)
('feat_10', 0.0022342480802130366)

As you can see here, each feature has a different importance based on its contribution to the final prediction. We will use these importance scores to rank our features; in the following part, we will select those features that have a feature importance of more than 0.01 for model training:

#Select features which have a higher contribution to the final prediction
sfm = SelectFromModel(clf, threshold=0.01)
sfm.fit(Xtrain, ytrain)

Here, we will transform the input dataset according to the selected feature attributes. In the next code block, we will transform the dataset. Then, we will check the size and shape of the new dataset:

#Transform input dataset
Xtrain_1 = sfm.transform(Xtrain)
Xtest_1 = sfm.transform(Xtest)
#Let's see the size and shape of the new dataset
print("Size of Data set after feature selection: %.2f MB" % (Xtrain_1.nbytes / 1e6))
shape = np.shape(Xtrain_1)
print("Shape of the dataset ", shape)

Size of Data set after feature selection: 5.60 MB
Shape of the dataset (35000, 20)

Do you see the shape of the dataset? We are left with only 20 features after the feature selection process, which reduces the size of the dataset from 26 MB to 5.60 MB. That's about an 80% reduction from the original dataset. In the next code block, we will train a new random forest classifier with the same hyperparameters as earlier and test it on the testing dataset. Let's see what accuracy we get after modifying the training set:

#Model training time
start = time.time()
clf.fit(Xtrain_1, ytrain)
end = time.time()
print("Execution time for building the Tree is: %f" % (float(end) - float(start)))
#Let's evaluate the model on test data
pre = clf.predict(Xtest_1)
count = 0
acc2 = getAccuracy(pre, ytest)
print("Accuracy after feature selection %.2f" % (100*acc2))

Execution time for building the Tree is: 1.711518
Accuracy after feature selection 99.97

Can you see that? We have got 99.97 percent accuracy with the modified dataset, which means we are classifying 14,996 instances into correct classes, while previously we were classifying only 14,823 instances correctly. This is a huge improvement we have got with the feature selection process; we can summarize all the results in the following table:

Evaluation criteria        Before feature selection    After feature selection
Number of features         94                          20
Size of dataset            26.32 MB                    5.60 MB
Training time              2.91 seconds                1.71 seconds
Accuracy                   98.82 percent               99.97 percent

The preceding table shows the practical advantages of feature selection. You can see that we have reduced the number of features significantly, which reduces the model complexity and the dimensions of the dataset. We get less training time after the reduction in dimensions and, at the end, we have overcome the overfitting issue, getting higher accuracy than before. To summarize the article, we explored 4 ways of feature selection in machine learning.
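As a compact reference, the following hedged, self-contained sketch runs all four techniques on a small synthetic dataset (the dataset, sizes, and thresholds are invented for illustration and are not the Otto data used above; note it uses the ANOVA F-test instead of chi2, since the synthetic features can be negative):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE, SelectFromModel
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=0)

# 1. Univariate selection: keep the 4 highest-scoring features
X_kbest = SelectKBest(score_func=f_classif, k=4).fit_transform(X, y)

# 2. Recursive Feature Elimination around a linear model
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=4).fit(X, y)

# 3. PCA: project onto 4 principal components
X_pca = PCA(n_components=4).fit_transform(X)

# 4. Feature importance from a random forest, thresholded at the median
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
X_forest = SelectFromModel(forest, threshold='median').fit_transform(X, y)

print(X_kbest.shape, rfe.support_, X_pca.shape, X_forest.shape)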
If you found this post useful, do check out the book Ensemble Machine Learning to learn more about stacked generalization, among other techniques.
Understanding Address spaces and subnetting in IPv4 [Tutorial]

Melisha Dsouza
05 Feb 2019
13 min read
In any network, Internet Protocol (IP) addressing is needed to ensure that data is sent to the correct recipient or device. Both IPv4 and IPv6 address schemes are managed by the Internet Assigned Numbers Authority (IANA). Most of the internet that we know today is based on the IPv4 addressing scheme, which is still the predominant method of communication on both the internet and private networks. This tutorial is an excerpt from a book written by Glen D. Singh and Rishi Latchmepersad, titled CompTIA Network+ Certification Guide. This book is a practical certification guide that covers all CompTIA certification exam topics in an easy-to-understand manner, along with self-assessment scenarios for better preparation.

Public IPv4 addresses

There are two main IPv4 address spaces: the public address space and the private address space. The primary difference between the two is that public IPv4 addresses are routable on the internet, which means that any device that requires communication with other devices on the internet needs a public IPv4 address assigned to the interface that is connected to the internet. The public address space is divided into five classes:

Class A: 0.0.0.0 – 126.255.255.255
Class B: 128.0.0.0 – 191.255.255.255
Class C: 192.0.0.0 – 223.255.255.255
Class D: 224.0.0.0 – 239.255.255.255
Class E: 240.0.0.0 – 255.255.255.255

Class D addresses are used for multicast traffic. These addresses are not assignable. Class E addresses are reserved for experimental usage and are not assignable. On the internet, classes A, B, and C are commonly used on devices that are directly connected to the internet, such as layer 3 switches, routers, firewalls, servers, and any other network-related device. As mentioned earlier, there are approximately four billion public IPv4 addresses. However, in a lot of organizations and homes, only one public IPv4 address is assigned to the router or modem's publicly facing interface. The following diagram shows how a public IP address is seen by internet users:

So, what about the devices that require internet access from within the organization or home? There may be anywhere from a few devices to hundreds or even thousands of devices that require an internet connection and an IP address to communicate with the internet from within a company. If ISPs give their customers a single public IPv4 address on their modem or router, how can this single public IPv4 address serve more than one device from within the organization or home? The internet gateway or router is usually configured with Network Address Translation (NAT), which is the method of mapping either a group of IP addresses or a single IP address on the internet-facing interface to the local area network (LAN). For any devices behind the internet gateway that want to communicate with another device on the internet, NAT translates the sender's source IP address to the public IPv4 address. Therefore, all of the devices on the internet will see the public IPv4 address and not the sender's actual IP address.

Private IPv4 addresses

As defined by RFC 1918, there are three classes of private IPv4 addresses that are allocated for private use only. This means use within a private network, such as a LAN. The benefit of the private address space (RFC 1918) is that the classes are not unique to any particular organization or group; they can be reused within any organization or private network. However, on the internet, a public IPv4 address is unique to a device.
This means that if a device is directly connected to the internet with a private IPv4 address, there will be no network connectivity to devices on the internet. Most ISPs have a filter to prevent any private addresses (RFC 1918) from entering their network. The private address space is divided into three classes:

Class A: 10.0.0.0/8 network block, 10.0.0.0 – 10.255.255.255
Class B: 172.16.0.0/12 network block, 172.16.0.0 – 172.31.255.255
Class C: 192.168.0.0/16 network block, 192.168.0.0 – 192.168.255.255

Subnetting in IPv4

What is subnetting, and why do we need to subnet a network? First, subnetting is the process of breaking down a single IP address block into smaller subnetworks (subnets). Second, the reason we need to subnet is to distribute IP addresses efficiently, with the result of less wastage. This brings us to other questions, such as: why do we need to break down a single IP address block, and why is least wastage so important? Could we simply assign a Class A, B, or C address block to a network of any size? To answer these questions, we will go more in depth with this topic by using practical examples and scenarios. Let's assume that you are a network administrator at a local company, and one day the IT manager assigns a new task to you. The task is to redesign the IP scheme of the company. He has also told you to use an address class that is suitable for the company's size and to ensure that there is minimal wastage of IP addresses. The first thing you decided to do was draw a high-level network diagram indicating each branch, which shows the number of hosts per branch office and the Wide Area Network (WAN) links between each branch router:

Network diagram

As we can see from the preceding diagram, each building has a branch router, and each router is connected to another using a WAN link. Each branch location has a different number of host devices that require an IP address for network communication.

Step 1 – determining an appropriate class of address and why

The subnet mask can tell us a lot about a network, such as the following:

The network and host portion of an IP address
The number of hosts within a network

As you may remember, the network portion of an address is represented by 1s in the subnet mask, while the 0s represent the host portion. We can use the following formula to calculate the total number of IP addresses within a subnet when we know the number of host bits in the subnet mask. Using the formula 2^H, where H represents the number of host bits, we get the following results:

Class A = 2^24 = 16,777,216 total IPs
Class B = 2^16 = 65,536 total IPs
Class C = 2^8 = 256 total IPs

In IPv4, there are two IPs that cannot be assigned to any device: the Network ID and the Broadcast IP address. Therefore, we need to subtract two addresses from the total IP count. Using the formula 2^H – 2 to calculate usable IPs, we get the following:

Class A = 2^24 – 2 = 16,777,214 usable IPs
Class B = 2^16 – 2 = 65,534 usable IPs
Class C = 2^8 – 2 = 254 usable IPs

Looking back at the network diagram, we can identify the following seven networks:

Branch A LAN: 25 hosts
Branch B LAN: 15 hosts
Branch C LAN: 28 hosts
Branch D LAN: 26 hosts
WAN R1-R2: 2 IPs are needed
WAN R2-R3: 2 IPs are needed
WAN R3-R4: 2 IPs are needed

Determining the appropriate address class depends on the largest network and the number of networks needed; the short Python sketch below double-checks this arithmetic.
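Since we are about to apply these formulas repeatedly, it may help to verify them programmatically. The following is a small, hedged Python sketch (the function name is illustrative) that computes usable addresses from a prefix length:

def usable_hosts(prefix_len):
    # An IPv4 address has 32 bits; whatever the prefix
    # doesn't claim is host bits.
    host_bits = 32 - prefix_len
    total = 2 ** host_bits
    # Subtract the Network ID and the Broadcast address
    return max(total - 2, 0)

for prefix in (8, 16, 24, 27, 30):
    print("/%d -> %d usable hosts" % (prefix, usable_hosts(prefix)))
# /8 -> 16777214, /16 -> 65534, /24 -> 254, /27 -> 30, /30 -> 2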
Currently, the largest network is Branch C, which has 28 host devices that need an IP address. We can use the smallest available class, which is any Class C address block, because it will be able to support the largest network we have. However, to do this, we need to choose a Class C address block. Let's use the 192.168.1.0/24 block. Remember, the subnet mask is used to identify the network portion of the address. This also means that we are unable to modify the network portion of the IP address when we are subnetting, but we can modify the host portion:

The first 24 bits represent the network portion and the remaining 8 bits represent the host portion. Using the formula 2^H – 2 to calculate the number of usable host IPs, we get the following:

2^H – 2
2^8 – 2 = 256 – 2 = 254 usable IP addresses

If we assigned this single network block to any one of the seven networks, a lot of IP addresses would be wasted. Therefore, we need to apply our subnetting techniques to this Class C address block.

Step 2 – creating subnets (subnetworks)

To create more subnets or subnetworks, we need to borrow bits from the host portion of the network. The formula 2^N is used to calculate the number of subnets, where N is the number of bits borrowed from the host portion. Once these bits are borrowed, they become part of the network portion, and a new subnet mask is produced. So far, we have a Network ID of 192.168.1.0/24. We need to get seven subnets, and each subnet should be able to fit our largest network (which is Branch C, with 28 hosts). Let's create our subnets. Remember that we need to borrow bits from the host portion, starting where the 1s end in the subnet mask. Let's borrow two host bits and apply them to our formula to determine whether we are able to get the seven subnets:

When bits are borrowed from the host portion, those bits are changed to 1s in the subnet mask. This produces a new subnet mask for all of the subnets that have been created. Let's use our formula for calculating the number of networks:

Number of Networks = 2^N
2^2 = 2 x 2 = 4 networks

As we can see, two host bits are not enough, as we need at least seven networks. Let's borrow one more host bit:

Once again, let's use our formula for calculating the number of networks:

Number of Networks = 2^N
2^3 = 2 x 2 x 2 = 8 networks

Using 3 host bits, we are able to get a total of 8 subnets. In this situation, we have one additional network, and this additional network can be placed aside for future use if there's an additional branch in the future. Since we borrowed 3 bits, we have 5 host bits remaining. Let's use our formula for calculating usable IP addresses:

Usable IP addresses = 2^H – 2
2^5 – 2 = 32 – 2 = 30 usable IPs

This means that each of the 8 subnets will have a total of 32 IP addresses, with 30 usable IP addresses each. Now we have a perfect match. Let's work out our 8 new subnets. The guidelines we must follow at this point are as follows:

We cannot modify the network portion of the address (red)
We cannot modify the host portion of the address (black)
We can only modify the bits that we borrowed (green)

Starting with the Network ID, we get the following eight subnets:

We can't forget about the subnet mask:

As we can see, there are twenty-seven 1s in the subnet mask, which gives us 255.255.255.224, or /27, as the new subnet mask for all eight subnets we've just created. Take a look at each of the subnets. They all have a fixed increment of 32. A quick method to calculate the increment size is to use the formula 2^x, where x is the number of remaining host bits.
This assists in working out the decimal notation of each subnet much more easily than calculating the binary. The last network in any subnet always ends with the customized ending of the new subnet mask. From our example, the new subnet mask 255.255.255.224 ends with 224, and the last subnet also ends with the same value, 192.168.1.224.

Step 3 – assigning each network an appropriate subnet and calculating the ranges

To determine the first usable IP address within a subnet, the first bit from the right must be 1. To determine the last usable IP address within a subnet, all of the host bits except the first bit from the right should be 1s. The broadcast IP of any subnet is when all of the host bits are 1s. Let's take a look at the first subnet. We will assign subnet 1 to the Branch A LAN:

The second subnet will be allocated to the Branch B LAN:

The third subnet will be allocated to the Branch C LAN:

The fourth subnet will be allocated to the Branch D LAN:

At this point, we have successfully allocated subnets 1 to 4 to each of the branch LANs. During our initial calculation for determining the size of each subnet, we saw that each of the eight subnets is equal, and that we have 32 total IPs with 30 usable IP addresses. Currently, we have subnets 5 to 8 left for allocation, but if we allocated subnets 5, 6, and 7 to the WAN links between the branches R1-R2, R2-R3, and R3-R4, we would be wasting 28 IP addresses per link, since each WAN (point-to-point) link only requires 2 IP addresses. What if we could take one of our existing subnets and create even more, but smaller, networks to fit each WAN (point-to-point) link? We can do this with a process known as Variable Length Subnet Masking (VLSM). By using this process, we are subnetting a subnet. For now, we will place aside subnets 5, 6, and 7 as a future reservation for any future branches:

Step 4 – VLSM and subnetting a subnet

For the WAN links, we need at least three subnets. Each must have a minimum of two usable IP addresses. To get started, let's use the following formula to determine the number of host bits that are needed so that we have at least two usable IP addresses: 2^H – 2, where H is the number of host bits. If we use one host bit, 2^1 – 2 = 2 – 2 = 0 usable IP addresses. Let's add an extra host bit to our formula: 2^2 – 2 = 4 – 2 = 2 usable IP addresses. At this point, we have a perfect match, and we know that only two host bits are needed to give us our WAN (point-to-point) links. We are going to use the following guidelines:

We cannot modify the network portion of the address (red)
Since we know that two host bits are needed to represent two usable IP addresses, we lock them into place (purple)
The bits between the network portion (red) and the locked-in host bits (purple) will be the new network bits (black)

To calculate the number of networks, we can use 2^N = 2^3 = 8 networks. Even though we got a lot more networks than we actually needed, the remaining networks can be set aside for future use. To calculate the total IPs and the increment, we can use 2^H = 2^2 = 4 total IP addresses (inclusive of the Network ID and Broadcast IP addresses). To calculate the number of usable IP addresses, we can use 2^H – 2 = 2^2 – 2 = 2 usable IP addresses per network. Let's work out our eight new subnets for any existing and future WAN (point-to-point) links:

Now that we have eight new subnets, let's allocate them accordingly.
The first subnet will be allocated to WAN 1, R1-R2:

The second subnet will be allocated to WAN 2, R2-R3:

The third subnet will be allocated to WAN 3, R3-R4:

Now that we have allocated the first three subnets to each of the WAN links, the remaining subnets can be set aside for any future branches that may need another WAN link. These will be assigned as a future reservation:

Summary

In this tutorial, we learned about public and private IPv4 addresses. We also learned the importance of subnetting and saw the four simple steps needed to complete the subnetting process. To learn from industry experts and implement their practices to resolve complex IT issues and effectively pass and achieve this certification, check out our book CompTIA Network+ Certification Guide. AWS announces more flexibility in its Certification Exams, drops its exam prerequisites Top 10 IT certifications for cloud and networking professionals in 2018 What matters on an engineering resume? Hacker Rank report says skills, not certifications
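As a closing sanity check on the walkthrough above, here is a hedged Python sketch using the standard library's ipaddress module. It assumes, as the walkthrough implies, that the WAN /30s are carved out of the eighth /27 (192.168.1.224/27):

import ipaddress

# The original Class C block
block = ipaddress.ip_network('192.168.1.0/24')

# Borrowing 3 host bits: /24 -> /27 gives 8 subnets of 32 addresses
lans = list(block.subnets(new_prefix=27))
for net in lans:
    print(net, "usable:", net.num_addresses - 2)

# VLSM: split the last /27 into /30s for the WAN point-to-point links
wan_block = lans[-1]  # 192.168.1.224/27
wans = list(wan_block.subnets(new_prefix=30))
for net in wans[:3]:  # WAN 1 to 3; the remaining /30s stay reserved
    hosts = list(net.hosts())
    print(net, "->", hosts[0], "and", hosts[1])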
Implementing memory management with Golang's garbage collector

Packt Editorial Staff
03 Sep 2019
10 min read
Did you ever think of how bulk messages are pushed in real time that fast? How is it possible? A low-latency garbage collector (GC) plays an important role in this. In this article, we present ways to look at certain parameters to implement memory management with the Golang GC. Garbage collection is the process of freeing up memory space that is not being used. In other words, the GC sees which objects are out of scope and cannot be referenced anymore, and frees the memory space they consume. This process happens concurrently while a Go program is running, not before or after the execution of the program. This article is an excerpt from the book Mastering Go - Third Edition by Mihalis Tsoukalos. Mihalis runs through the nuances of Go, with deep guides to types and structures, packages, concurrency, network programming, compiler design, optimization, and more.

Implementing the Golang GC

The Go standard library offers functions that allow you to study the operation of the GC and learn more about what the GC does secretly. These functions are illustrated in the gColl.go utility. The source code of gColl.go is presented here in chunks.

package main

import (
    "fmt"
    "runtime"
    "time"
)

You need the runtime package because it allows you to obtain information about the Go runtime system, which, among other things, includes the operation of the GC.

func printStats(mem runtime.MemStats) {
    runtime.ReadMemStats(&mem)
    fmt.Println("mem.Alloc:", mem.Alloc)
    fmt.Println("mem.TotalAlloc:", mem.TotalAlloc)
    fmt.Println("mem.HeapAlloc:", mem.HeapAlloc)
    fmt.Println("mem.NumGC:", mem.NumGC, "\n")
}

The purpose of the printStats() function is to avoid writing the same Go code all the time. The runtime.ReadMemStats() call gets the latest garbage collection statistics for you.

func main() {
    var mem runtime.MemStats
    printStats(mem)

    for i := 0; i < 10; i++ {
        // Allocating 50,000,000 bytes
        s := make([]byte, 50000000)
        if s == nil {
            fmt.Println("Operation failed!")
        }
    }
    printStats(mem)

In this part, we have a for loop that creates 10 byte slices of 50,000,000 bytes each. The reason for this is that by allocating large amounts of memory, we can trigger the GC.

    for i := 0; i < 10; i++ {
        // Allocating 100,000,000 bytes
        s := make([]byte, 100000000)
        if s == nil {
            fmt.Println("Operation failed!")
        }
        time.Sleep(5 * time.Second)
    }
    printStats(mem)
}

The last part of the program makes even bigger memory allocations – this time, each byte slice has 100,000,000 bytes. Running gColl.go on a macOS Big Sur machine with 24 GB of RAM produces the following kind of output:

$ go run gColl.go
mem.Alloc: 124616
mem.TotalAlloc: 124616
mem.HeapAlloc: 124616
mem.NumGC: 0

mem.Alloc: 50124368
mem.TotalAlloc: 500175120
mem.HeapAlloc: 50124368
mem.NumGC: 9

mem.Alloc: 122536
mem.TotalAlloc: 1500257968
mem.HeapAlloc: 122536
mem.NumGC: 19

The value of mem.Alloc is the bytes of allocated heap objects; allocated means all the objects that the GC has not yet freed. mem.TotalAlloc shows the cumulative bytes allocated for heap objects; this number does not decrease when objects are freed, which means that it keeps increasing. Therefore, it shows the total number of bytes allocated for heap objects during program execution. mem.HeapAlloc is the same as mem.Alloc. Last, mem.NumGC shows the total number of completed garbage collection cycles.
The bigger that value is, the more you have to consider how you allocate memory in your code and if there is a way to optimize that. If you want even more verbose output regarding the operation of the GC, you can combine go run gColl.go with GODEBUG=gctrace=1. Apart from the regular program output, you get some extra metrics, as illustrated in the following output:

$ GODEBUG=gctrace=1 go run gColl.go
gc 1 @0.021s 0%: 0.020+0.32+0.015 ms clock, 0.16+0.17/0.33/0.22+0.12 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 2 @0.041s 0%: 0.074+0.32+0.003 ms clock, 0.59+0.087/0.37/0.45+0.030 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
. . .
gc 18 @40.152s 0%: 0.065+0.14+0.013 ms clock, 0.52+0/0.12/0.042+0.10 ms cpu, 95->95->0 MB, 96 MB goal, 8 P
gc 19 @45.160s 0%: 0.028+0.12+0.003 ms clock, 0.22+0/0.13/0.081+0.028 ms cpu, 95->95->0 MB, 96 MB goal, 8 P
mem.Alloc: 120672
mem.TotalAlloc: 1500256376
mem.HeapAlloc: 120672
mem.NumGC: 19

Now, let us explain the 95->95->0 MB triplet in the previous line of output. The first value (95) is the heap size when the GC is about to run. The second value (95) is the heap size when the GC ends its operation. The last value (0) is the size of the live heap.
After that, the objects in the white set are unreachable and their memory space can be reused. Therefore, at this point, the elements of the white set are said to be garbage collected. Please note that no object can go directly from the black set to the white set, which allows the algorithm to operate and be able to clear the objects on the white set. As mentioned before, no object of the black set can directly point to an object of the white set. Additionally, if an object of the gray set becomes unreachable at some point in a garbage collection cycle, it will not be collected at this garbage collection cycle but in the next one! Although this is not an optimal situation, it is not that bad. During this process, the running application is called the mutator. The mutator runs a small function named write barrier that is executed each time a pointer in the heap is modified. If the pointer of an object in the heap is modified, which means that this object is now reachable, the write barrier colors it gray and puts it in the gray set. The mutator is responsible for the invariant that no element of the black set has a pointer to an element of the white set. This is accomplished with the help of the write barrier function. Failing to accomplish this invariant will ruin the garbage collection process and will most likely crash your program in a pretty bad and undesirable way! So, there are three different colors: black, white, and gray. When the algorithm begins, all objects are colored white. As the algorithm keeps going, white objects are moved into one of the other two sets. The objects that are left in the white set are the ones that are going to be cleared at some point. The next figure displays the three color sets with objects in them. Figure 1: The Go GC represents the heap of a program as a graph In the presented graph, you can see that while object E, which is in the white set, can access object F, it cannot be accessed by any other object because no other object points to object E, which makes it a perfect candidate for garbage collection! Additionally, objects A, B, and C are root objects and are always reachable; therefore, they cannot be garbage collected. Graph comprehended Can you guess what will happen next in that graph? Well, it is not that difficult to realize that the algorithm will have to process the remaining elements of the gray set, which means that both objects A and F will go to the black set. Object A will go to the black set because it is a root element and F will go to the black set because it does not point to any other object while it is in the gray set. After object A is garbage collected, object F will become unreachable and will be garbage collected in the next cycle of the GC because an unreachable object cannot magically become reachable in the next iteration of the garbage collection cycle. Note: The Go garbage collection can also be applied to variables such as channels. When the GC finds out that a channel is unreachable, that is when the channel variable cannot be accessed anymore, it will free its resources even if the channel has not been closed. Go allows you to manually initiate a garbage collection by putting a runtime.GC() statement in your Go code. However, have in mind that runtime.GC() will block the caller and it might block the entire program, especially if you are running a very busy Go program with many objects. 
This blocking mainly happens because you cannot perform garbage collection while everything else is rapidly changing, as this would not give the GC the opportunity to clearly identify the members of the white, black, and gray sets. This garbage collection state is also called a garbage collection safe-point.

You can find the long and relatively advanced Go code of the GC at https://github.com/golang/go/blob/master/src/runtime/mgc.go, which you can study if you want to learn even more about the garbage collection operation. You can even make changes to that code if you are brave enough!

Understanding Go Internals: defer, panic() and recover() functions [Tutorial]
Implementing hashing algorithms in Golang [Tutorial]
Is Golang truly community driven and does it really matter?

25 Datasets for Deep Learning in IoT

Sugandha Lahoti
20 Mar 2018
8 min read
Deep learning is one of the major players in facilitating analytics and learning in the IoT domain. A really good roundup of the state of deep learning advances for big data and IoT is described in the paper Deep Learning for IoT Big Data and Streaming Analytics: A Survey by Mehdi Mohammadi, Ala Al-Fuqaha, Sameh Sorour, and Mohsen Guizani. In this article, we have attempted to draw inspiration from this research paper to establish the importance of IoT datasets for deep learning applications. The paper also provides a handy list of commonly used datasets suitable for building deep learning applications in IoT, which we have added at the end of the article.

IoT and Big Data: The relationship

IoT and big data have a two-way relationship. IoT is the main producer of big data, and as such an important target for big data analytics to improve the processes and services of IoT. However, there are differences between the two:

Large-scale streaming data: IoT data is large-scale streaming data, because a large number of IoT devices generate streams of data continuously. Big data, on the other hand, lacks real-time processing.

Heterogeneity: IoT data is heterogeneous, as various IoT data acquisition devices gather different information. Big data devices are generally homogeneous in nature.

Time and space correlation: IoT sensor devices are attached to a specific location, and thus have a location and time-stamp for each of the data items. Big data sensors lack time-stamp resolution.

High noise data: IoT data is highly noisy, owing to the tiny pieces of data in IoT applications, which are prone to errors and noise during acquisition and transmission. Big data, in contrast, is generally less noisy.

Big data, on the other hand, is classified according to the conventional 3V's: Volume, Velocity, and Variety. As such, techniques used for big data analytics are not sufficient to analyze the kind of data that is being generated by IoT devices. For instance, autonomous cars need to make fast decisions on driving actions such as lane or speed changes. These decisions should be supported by fast analytics of data streaming from multiple sources (for example, cameras, radars, left/right signals, traffic lights, and so on). This extends the classification of IoT big data to six V's:

Volume: The quantity of data generated by IoT devices is much greater than before and clearly fits this feature.

Velocity: Advanced tools and technologies for analytics are needed to efficiently operate at the high rate of data production.

Variety: Big data may be structured, semi-structured, or unstructured. The data types produced by IoT include text, audio, video, sensory data, and so on.

Veracity: Veracity refers to the quality, consistency, and trustworthiness of the data, which in turn leads to accurate analytics.

Variability: This property refers to the different rates of data flow.

Value: Value is the transformation of big data into useful information and insights that bring competitive advantage to organizations.

Despite the recent advancements in DL for big data, there are still significant challenges that need to be addressed to mature this technology. Each of the six characteristics of IoT big data imposes a challenge for DL techniques. One common denominator for all of them is the lack of availability of IoT big data datasets.
IoT datasets and why they are needed

Deep learning methods have been promising, with state-of-the-art results in several areas such as signal processing, natural language processing, and image recognition, and the trend is going up in IoT verticals as well. IoT datasets play a major role in improving IoT analytics: real-world IoT datasets provide more data, which in turn improves the accuracy of DL algorithms. However, the lack of availability of large real-world datasets for IoT applications is a major hurdle for incorporating DL models in IoT. The shortage of these datasets acts as a barrier to the deployment and acceptance of DL-based IoT analytics, since the empirical validation and evaluation of such systems needs to show promise in real-world settings. The lack of availability is mainly because:

- Most IoT datasets are held by large organizations, which are unwilling to share them so easily.
- Access to copyrighted datasets is restricted, or privacy considerations apply. These are more common in domains with human data, such as healthcare and education.

While there is a lot of ground to be covered in terms of making datasets for IoT available, here is a list of commonly used datasets suitable for building deep learning applications in IoT (dataset name; domain; provider; notes; address/link):

- CGIAR dataset; Agriculture, Climate; CCAFS; High-resolution climate datasets for a variety of fields, including agriculture. http://www.ccafs-climate.org/
- Educational Process Mining; Education; University of Genova; Recordings of 115 subjects' activities through a logging application while learning with an educational simulator. http://archive.ics.uci.edu/ml/datasets/Educational+Process+Mining+%28EPM%29%3A+A+Learning+Analytics+Data+Set
- Commercial Building Energy Dataset; Energy, Smart Building; IIITD; Energy-related data set from a commercial building, where data is sampled more than once a minute. http://combed.github.io/
- Individual household electric power consumption; Energy, Smart Home; EDF R&D, Clamart, France; One-minute sampling rate over a period of almost 4 years. http://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
- AMPds dataset; Energy, Smart Home; S. Makonin; Electricity, water, and natural gas measurements at one-minute intervals for 2 years of monitoring. http://ampds.org/
- UK Domestic Appliance-Level Electricity; Energy, Smart Home; Kelly and Knottenbelt; Power demand from five houses; in each house, both the whole-house mains power demand and the power demand from individual appliances are recorded. http://www.doc.ic.ac.uk/~dk3810/data/
- PhysioBank databases; Healthcare; PhysioNet; Archive of over 80 physiological datasets. https://physionet.org/physiobank/database/
- Saarbruecken Voice Database; Healthcare; Universität des Saarlandes; A collection of voice recordings from more than 2000 persons for pathological voice detection. http://www.stimmdatebank.coli.uni-saarland.de/help_en.php4
- T-LESS; Industry; CMP at Czech Technical University; An RGB-D dataset and evaluation methodology for detection and 6D pose estimation of texture-less objects. http://cmp.felk.cvut.cz/t-less/
- CityPulse Dataset Collection; Smart City; CityPulse EU FP7 project; Road traffic data, pollution data, weather, parking. http://iot.ee.surrey.ac.uk:8080/datasets.html
- Open Data Institute - node Trento; Smart City; Telecom Italia; Weather, air quality, electricity, telecommunication. http://theodi.fbk.eu/openbigdata/
- Malaga datasets; Smart City; City of Malaga; A broad range of categories such as energy, ITS, weather, industry, sport, etc. http://datosabiertos.malaga.eu/dataset
- Gas sensors for home activity monitoring; Smart Home; Univ. of California San Diego; Recordings of 8 gas sensors under three conditions, including background, wine, and banana presentations. http://archive.ics.uci.edu/ml/datasets/Gas+sensors+for+home+activity+monitoring
- CASAS datasets for activities of daily living; Smart Home; Washington State University; Several public datasets related to Activities of Daily Living (ADL) performance in a two-story home, an apartment, and an office setting. http://ailab.wsu.edu/casas/datasets.html
- ARAS Human Activity Dataset; Smart Home; Bogazici University; Human activity recognition datasets collected from two real houses with multiple residents during two months. https://www.cmpe.boun.edu.tr/aras/
- MERLSense Data; Smart Home, Building; Mitsubishi Electric Research Labs; Motion sensor data of residual traces from a network of over 200 sensors for two years, containing over 50 million records. http://www.merl.com/wmd
- SportVU; Sport; Stats LLC; Video of basketball and soccer games captured from 6 cameras. http://go.stats.com/sportvu
- RealDisp; Sport; O. Banos; Includes a wide range of physical activities (warm up, cool down, and fitness exercises). http://orestibanos.com/datasets.htm
- Taxi Service Trajectory; Transportation; Prediction Challenge, ECML PKDD 2015; Trajectories performed by all 442 taxis running in the city of Porto, in Portugal. http://www.geolink.pt/ecmlpkdd2015-challenge/dataset.html
- GeoLife GPS Trajectories; Transportation; Microsoft; GPS trajectories represented as sequences of time-stamped points. https://www.microsoft.com/en-us/download/details.aspx?id=52367
- T-Drive trajectory data; Transportation; Microsoft; Contains one week of trajectories of 10,357 taxis. https://www.microsoft.com/en-us/research/publication/t-drive-trajectory-data-sample/
- Chicago Bus Traces data; Transportation; M. Doering; Bus traces from the Chicago Transport Authority for 18 days, with a rate between 20 and 40 seconds. http://www.ibr.cs.tu-bs.de/users/mdoering/bustraces/
- Uber trip data; Transportation; FiveThirtyEight; About 20 million Uber pickups in New York City during 12 months. https://github.com/fivethirtyeight/uber-tlc-foil-response
- Traffic Sign Recognition; Transportation; K. Lim; Three datasets: Korean daytime, Korean nighttime, and German daytime traffic signs based on Vienna traffic rules. https://figshare.com/articles/Traffic_Sign_Recognition_Testsets/4597795
- DDD17; Transportation; J. Binas; End-to-end DAVIS driving dataset. http://sensors.ini.uzh.ch/databases.html
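As a quick, hedged illustration of how one of the listed datasets might be pulled into a deep learning workflow, the following sketch loads the UCI individual household electric power consumption dataset with pandas. The file name, separator, and missing-value marker follow the UCI documentation, but treat them as assumptions and adjust them to the copy you download:

import pandas as pd

# Load the UCI "Individual household electric power consumption" dataset.
# The archive is assumed to be extracted locally as
# household_power_consumption.txt; it uses ';' as the separator and '?'
# to mark missing readings.
df = pd.read_csv(
    "household_power_consumption.txt",
    sep=";",
    na_values="?",
    low_memory=False,
)

# Combine the separate Date and Time columns into a single timestamp index,
# which most time series pipelines expect. Dates are day-first (dd/mm/yyyy).
df["timestamp"] = pd.to_datetime(df["Date"] + " " + df["Time"], dayfirst=True)
df = df.set_index("timestamp").drop(columns=["Date", "Time"])

# Drop rows with missing readings and inspect the result.
df = df.dropna()
print(df.head())
print(df.shape)

From here, the usual next steps are resampling, windowing into fixed-length sequences, and feeding the result to the model of your choice.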


Setting Gradle properties to build a project [Tutorial]

Savia Lobo
30 Jul 2018
10 min read
A Gradle script is a program. We use a Groovy DSL to express our build logic. Gradle has several useful built-in methods to handle files and directories, as we often deal with files and directories in our build logic. In today's post, we will take a look at how to set Gradle properties in a project build. We will also see how to use the Gradle Wrapper task to distribute a configurable Gradle with our build scripts. This article is an excerpt taken from 'Gradle Effective Implementations Guide - Second Edition', written by Hubert Klein Ikkink.

Setting Gradle project properties

In a Gradle build file, we can access several properties that are defined by Gradle, but we can also create our own properties. We can set the value of our custom properties directly in the build script, and we can also do this by passing values via the command line. The default properties that we can access in a Gradle build are listed below (name, type, and default value):

- project (Project): The project instance.
- name (String): The name of the project directory. The name is read-only.
- path (String): The absolute path of the project.
- description (String): The description of the project.
- projectDir (File): The directory containing the build script. The value is read-only.
- buildDir (File): The directory named build in the directory containing the build script.
- rootDir (File): The directory of the project at the root of a project structure.
- group (Object): Not specified.
- version (Object): Not specified.
- ant (AntBuilder): An AntBuilder instance.

The following build file has a task that shows the values of the properties:

version = '1.0'
group = 'Sample'
description = 'Sample build file to show project properties'

task defaultProperties << {
    println "Project: $project"
    println "Name: $name"
    println "Path: $path"
    println "Project directory: $projectDir"
    println "Build directory: $buildDir"
    println "Version: $version"
    println "Group: $project.group"
    println "Description: $project.description"
    println "AntBuilder: $ant"
}

When we run the build, we get the following output:

$ gradle defaultProperties
:defaultProperties
Project: root project 'props'
Name: defaultProperties
Path: :defaultProperties
Project directory: /Users/mrhaki/gradle-book/Code_Files/props
Build directory: /Users/mrhaki/gradle-book/Code_Files/props/build
Version: 1.0
Group: Sample
Description: Sample build file to show project properties
AntBuilder: org.gradle.api.internal.project.DefaultAntBuilder@3c95cbbd

BUILD SUCCESSFUL

Total time: 1.458 secs

Defining custom properties in script

To add our own properties, we have to define them in an ext{} script block in a build file. Prefixing the property name with ext. is another way to set the value. To read the value of the property, we don't have to use the ext. prefix; we can simply refer to the name of the property. The property is automatically added to the internal project property as well. In the following script, we add a customProperty property with the String value 'custom'. In the showProperties task, we show the value of the property:

// Define new property.
ext.customProperty = 'custom'

// Or we can use ext{} script block.
ext {
    anotherCustomProperty = 'custom'
}

task showProperties {
    ext {
        customProperty = 'override'
    }
    doLast {
        // We can refer to the property
        // in different ways:
        println customProperty
        println project.ext.customProperty
        println project.customProperty
    }
}

After running the script, we get the following output:

$ gradle showProperties
:showProperties
override
custom
custom

BUILD SUCCESSFUL

Total time: 1.469 secs

Defining properties using an external file

We can also set the properties for our project in an external file. The file needs to be named gradle.properties, and it should be a plain text file with the name of the property and its value on separate lines. We can place the file in the project directory or in the Gradle user home directory. The default Gradle user home directory is $USER_HOME/.gradle. A property defined in the properties file in the Gradle user home directory overrides the property values defined in a properties file in the project directory.

We will now create a gradle.properties file in our project directory, with the following contents (matching the output shown next):

version = 4.0
customProperty = Property value from gradle.properties

We use our build file to show the property values:

task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}

If we run the build file, we don't have to pass any command-line options; Gradle will use gradle.properties to get the values of the properties:

$ gradle showProperties
:showProperties
Version: 4.0
Custom property: Property value from gradle.properties

BUILD SUCCESSFUL

Total time: 1.676 secs

Passing properties via the command line

Instead of defining the property directly in the build script or an external file, we can use the -P command-line option to add an extra property to a build. We can also use the -P command-line option to set a value for an existing property. If we define a property using the -P command-line option, we can override a property with the same name defined in the external gradle.properties file. The following build script has a showProperties task that shows the value of an existing property and a new property:

task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}

Let's run our script and pass the values for the existing version property and the non-existent customProperty:

$ gradle -Pversion=1.1 -PcustomProperty=custom showProperties
:showProperties
Version: 1.1
Custom property: custom

BUILD SUCCESSFUL

Total time: 1.412 secs

Defining properties via system properties

We can also use Java system properties to define properties for our Gradle build. We use the -D command-line option just like in a normal Java application. The name of the system property must start with org.gradle.project, followed by the name of the property we want to set, and then by the value. We can use the same build script that we created before:

task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}

However, this time we use different command-line options to get a result:

$ gradle -Dorg.gradle.project.version=2.0 -Dorg.gradle.project.customProperty=custom showProperties
:showProperties
Version: 2.0
Custom property: custom

BUILD SUCCESSFUL

Total time: 1.218 secs

Adding properties via environment variables

Using the command-line options provides much flexibility; however, sometimes we cannot use the command-line options because of environment restrictions, or because we don't want to retype the complete command-line options each time we invoke the Gradle build.
Gradle can also use environment variables set in the operating system to pass properties to a Gradle build. The environment variable name starts with ORG_GRADLE_PROJECT_ and is followed by the property name. We use our build file to show the properties:

task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}

Firstly, we set the ORG_GRADLE_PROJECT_version and ORG_GRADLE_PROJECT_customProperty environment variables, then we run our showProperties task, as follows:

$ ORG_GRADLE_PROJECT_version=3.1 ORG_GRADLE_PROJECT_customProperty="Set by environment variable" gradle showProp
:showProperties
Version: 3.1
Custom property: Set by environment variable

BUILD SUCCESSFUL

Total time: 1.373 secs

Using the Gradle Wrapper

Normally, if we want to run a Gradle build, we must have Gradle installed on our computer. Also, if we distribute our project to others and they want to build the project, they must have Gradle installed on their computers. The Gradle Wrapper can be used to allow others to build our project even if they don't have Gradle installed on their computers. The wrapper is a batch script on Microsoft Windows operating systems, or a shell script on other operating systems, that will download Gradle and run the build using the downloaded Gradle. By using the wrapper, we can make sure that the correct Gradle version for the project is used. We can define the Gradle version, and if we run the build via the wrapper script file, the version of Gradle that we defined is used.

Creating wrapper scripts

To create the Gradle Wrapper batch and shell scripts, we can invoke the built-in wrapper task. This task is already available if we have installed Gradle on our computer. Let's invoke the wrapper task from the command line:

$ gradle wrapper
:wrapper

BUILD SUCCESSFUL

Total time: 0.61 secs

After the execution of the task, we have two script files, gradlew.bat and gradlew, in the root of our project directory. These scripts contain all the logic needed to run Gradle. If Gradle is not downloaded yet, the Gradle distribution will be downloaded and installed locally. In the gradle/wrapper directory, relative to our project directory, we find the gradle-wrapper.jar and gradle-wrapper.properties files. The gradle-wrapper.jar file contains a couple of class files necessary to download and invoke Gradle. The gradle-wrapper.properties file contains settings, such as the URL, to download Gradle. The gradle-wrapper.properties file also contains the Gradle version number. If a new Gradle version is released, we only have to change the version in the gradle-wrapper.properties file, and the Gradle Wrapper will download the new version so that we can use it to build our project.

All the generated files are now part of our project. If we use a version control system, then we must add these files to the version control. Other people who check out our project can use the gradlew scripts to execute tasks from the project. The specified Gradle version is downloaded and used to run the build file. If we want to use another Gradle version, we can invoke the wrapper task with the --gradle-version option. We must specify the Gradle version that the Wrapper files are generated for. By default, the Gradle version that is used to invoke the wrapper task is the Gradle version used by the wrapper files. To specify a different download location for the Gradle installation file, we must use the --gradle-distribution-url option of the wrapper task.
For example, we could have a customized Gradle installation on our local intranet, and with this option, we can generate the Wrapper files that will use the Gradle distribution on our intranet. In the following example, we generate the wrapper files for Gradle 2.12 explicitly:

$ gradle wrapper --gradle-version=2.12
:wrapper

BUILD SUCCESSFUL

Total time: 0.61 secs

Customizing the Gradle Wrapper

If we want to customize properties of the built-in wrapper task, we must add a new task to our Gradle build file with the org.gradle.api.tasks.wrapper.Wrapper type. We will not change the default wrapper task, but create a new task with the new settings that we want to apply. We need to use our new task to generate the Gradle Wrapper shell scripts and support files. We can change the names of the script files that are generated with the scriptFile property of the Wrapper task. To change the name and location of the generated JAR and properties files, we can change the jarFile property:

task createWrapper(type: Wrapper) {
    // Set Gradle version for wrapper files.
    gradleVersion = '2.12'

    // Rename shell scripts name to
    // startGradle instead of default gradlew.
    scriptFile = 'startGradle'

    // Change location and name of JAR file
    // with wrapper bootstrap code and
    // accompanying properties files.
    jarFile = "${projectDir}/gradle-bin/gradle-bootstrap.jar"
}

If we run the createWrapper task, we get a Windows batch file and a shell script, and the Wrapper bootstrap JAR file with the properties file is stored in the gradle-bin directory:

$ gradle createWrapper
:createWrapper

BUILD SUCCESSFUL

Total time: 0.605 secs

$ tree .
.
├── gradle-bin
│   ├── gradle-bootstrap.jar
│   └── gradle-bootstrap.properties
├── startGradle
├── startGradle.bat
└── build.gradle

2 directories, 5 files

To change the URL from where the Gradle version must be downloaded, we can alter the distributionUrl property. For example, we could publish a fixed Gradle version on our company intranet and use the distributionUrl property to reference a download URL on our intranet. This way, we can make sure that all developers in the company use the same Gradle version:

task createWrapper(type: Wrapper) {
    // Set URL with custom Gradle distribution.
    distributionUrl = 'http://intranet/gradle/dist/gradle-custom-2.12.zip'
}

We discussed Gradle properties and how to use the Gradle Wrapper to allow users to build our projects even if they don't have Gradle installed. We also discussed how to customize the Wrapper to download a specific version of Gradle and use it to run our build. If you've enjoyed reading this post, do check out our book 'Gradle Effective Implementations Guide - Second Edition' to know more about how to use Gradle for Java projects.

Top 7 Python programming books you need to read
4 operator overloading techniques in Kotlin you need to know
5 Things you need to know about Java 10


Importing and Adding Background Music with Audacity 1.3

Packt
03 Mar 2010
2 min read
Audacity is commonly used to import music into your project, convert different audio files from one format to another, bring in multiple files and convert them, and more. In this article, we will learn how to add background music to your podcast, overdub, and fade in and out. We will also discuss some additional information about importing music from CDs, cassette tapes, and vinyl records. This Audacity tutorial has been taken from Getting Started with Audacity 1.3. Read more here.

Importing digital music into Audacity

Before you can add background music to any Audacity project, you'll first have to import a digital music file into the project itself.

Importing WAV, AIFF, MP3, and MP4/M4A files

Audacity can import a number of music file formats. WAV, AIFF, and MP3 are the most common, but it can also import MP4/M4A files, as long as they are not rights-managed or copy-protected (like some songs purchased through stores such as iTunes). To import a song into Audacity:

Open your sample project.
From the main menu, select File, Import, and then Audio. The audio selection window is displayed.
Choose the music file from your computer, and then click on Open.

A new track is added to your project at the very bottom of the project window.

Importing music from iTunes

Your iTunes library can contain protected and unprotected music files. The main difference is that the protected files were typically purchased from the iTunes store and can't be played outside of that software. There is no easy way to determine visually which music tracks are protected or unprotected, so you can try both methods outlined next to import into Audacity. However, remember that there are copyright laws for songs written and recorded by popular artists, so you need to investigate how to use music legally for your own use or for distribution through a podcast.

Unprotected files from iTunes

If the songs that you want to import from iTunes aren't copy-protected, importing them is easy. Click-and-drag the song from the iTunes window and drop it into your Audacity project window (with your project open, of course). Within a few moments, the music track is shown at the bottom of your project window. The music track is now ready to be edited in, as an introduction or however you desire, in the main podcast file.

Cross-Validation strategies for Time Series forecasting [Tutorial]

Packt Editorial Staff
06 May 2019
12 min read
Time series modeling and forecasting are tricky and challenging. The i.i.d. (independent and identically distributed) assumption does not hold well for time series data. There is an implicit dependence on previous observations, and at the same time, data leakage from response variables to lag variables is more likely to occur, in addition to inherent non-stationarity in the data space. By non-stationarity, we mean fluctuations in observed statistics such as the mean and variance. It gets even trickier when inherent nonlinearity is taken into consideration.

Cross-validation is a well-established methodology for choosing the best model by tuning hyper-parameters or performing feature selection. There is a plethora of strategies for implementing optimal cross-validation. K-fold cross-validation is a time-proven example of such techniques. However, it is not robust in handling time series forecasting issues, due to the nature of the data as explained above. In this tutorial, we shall explore two more techniques for performing cross-validation, time series split cross-validation and blocked cross-validation, which are carefully adapted to solve the issues encountered in time series forecasting. We shall use Python 3.5, SciKit Learn, Matplotlib, Numpy, and Pandas. By the end of this tutorial you will have explored the following topics:

- Time Series Split Cross-Validation
- Blocked Cross-Validation
- Grid Search Cross-Validation
- Loss Function
- Elastic Net Regression

Cross-Validation

(Image source: scikit-learn.org)

First, the data set is split into a training and testing set. The testing set is preserved for evaluating the best model optimized by cross-validation. In k-fold cross-validation, the training set is further split into k folds, also known as partitions. During each iteration of the cross-validation, one fold is held out as a validation set and the remaining k - 1 folds are used for training. This allows us to make the best use of the available data without discarding any of it. It also allows us to avoid biasing the model towards patterns that may be overly represented in a given fold. Then the error obtained on all folds is averaged and the standard deviation is calculated. One usually performs cross-validation to find out which settings give the minimum error before training a final model using these elected settings on the complete training set.

Flavors of k-fold cross-validation exist, for example, leave-one-out and nested cross-validation. However, these may be the topic of another tutorial.

Grid Search Cross-Validation

One idea to fine-tune the hyper-parameters is to randomly guess values for the model parameters and apply cross-validation to see if they work. This is infeasible, as there may be exponentially many combinations of such parameters. This approach is also called Random Search in the literature. Grid search works by exhaustively searching the possible combinations of the model's parameters, but it makes use of the loss function to guide the selection of the values to be tried at each iteration; that is, it solves a minimization optimization problem. However, in SciKit Learn it explicitly tries all the possible combinations, which makes it computationally expensive. When cross-validation is used in the inner loop of the grid search, it is called grid search cross-validation. Hence, the optimization objective becomes minimizing the average loss obtained on the k folds.

R2 Loss Function

Choosing the loss function has a very high impact on model performance and convergence.
In this tutorial, I would like to introduce to you a loss function most commonly used in regression tasks. R2 loss works by calculating correlation coefficients between the ground truth target values and the response output from the model. The formula is, however, slightly modified, so that the range of the function is the half-open interval (-∞, +1]. Hence, +1 indicates maximum positive correlation and increasingly negative values indicate the opposite. Thus, all the errors obtained in this tutorial should be interpreted as desirable if their value is close to +1. It is worth mentioning that we could have chosen a different loss function, such as the L1-norm or the L2-norm. I would encourage you to try the ideas discussed in this tutorial using other loss functions and observe the difference.

Elastic Net Regression

This also goes in the literature by the name elastic net regularization. Regularization is a very robust technique for avoiding overfitting by penalizing large weights; in other words, it alters the objective function by emphasizing the errors caused by memorizing the training set. Vanilla linear regression can be tricked into learning parameters that perform very well on the training set but fail to generalize to unseen new samples. Both L1-regularization and L2-regularization were incorporated to resolve overfitting and are known in the literature as Lasso and Ridge regression, respectively. Due to the critique of both Lasso and Ridge regression, Elastic Net regression was introduced to mix the two models. As a result, some variables' coefficients are set to zero as per the L1-norm, and some others are penalized or shrunk as per the L2-norm. This model combines the best of both worlds, and the result is a stable, robust, and sparse model. As a consequence, there are more parameters to be fine-tuned. That's why this is a good example to demonstrate the power of cross-validation.

Crypto Data Set

I have obtained Ethereum/USD exchange prices for the year 2019 from cryptodatadownload.com, which you can get for free from the website or by running the following command:

$ wget http://www.cryptodatadownload.com/cdd/Gemini_ETHUSD_d.csv

Now that you have the CSV file, you can import it into Python using Pandas. The daily close price is used as both regressor and response variables. In this setup, I have used a lag of 64 days for regressors and a target of 8 days for responses. That is, given the past 64 days' closing prices, forecast the next 8 days. The resulting NaN rows at the tail are then dropped as a way of handling missing values. To keep the snippets runnable, the imports and constants they rely on are spelled out first; the constant values and the column renaming are assumptions derived from the text:

import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNet
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import KFold, TimeSeriesSplit, cross_val_score, GridSearchCV
from sklearn.metrics import r2_score, make_scorer

TRAIN_STEPS = 64         # 64 lag days used as regressors
STEPS = TRAIN_STEPS + 8  # plus an 8-day forecast target
r2 = make_scorer(r2_score)

df = pd.read_csv('./Gemini_ETHUSD_d.csv', skiprows=1)
df = df.rename(columns={'Close': 'd0'})  # the close price column is assumed to be renamed to 'd0'

for i in range(1, STEPS):
    col_name = 'd{}'.format(i)
    df[col_name] = df['d0'].shift(periods=-1 * i)

df = df.dropna()

Next, we split the data frame into two, one for the regressors and the other for the responses, and then split both into a training and a testing part:

SPLIT_IDX = int(len(df) * 0.8)  # assumed 80/20 split point; the exact index is not given

X = df.iloc[:, :TRAIN_STEPS]
y = df.iloc[:, TRAIN_STEPS:]

X_train = X.iloc[:SPLIT_IDX, :]
y_train = y.iloc[:SPLIT_IDX, :]
X_test = X.iloc[SPLIT_IDX:, :]
y_test = y.iloc[SPLIT_IDX:, :]

Model Design

Let's define a method that creates an elastic net model from sci-kit learn, and since we are going to forecast more than one future time step, let's use a multi-output regressor wrapper that trains a separate model for each target time step. However, this introduces more demand for computation resources.
def build_model(_alpha, _l1_ratio):
    estimator = ElasticNet(
        alpha=_alpha,
        l1_ratio=_l1_ratio,
        fit_intercept=True,
        normalize=False,
        precompute=False,
        max_iter=16,
        copy_X=True,
        tol=0.1,
        warm_start=False,
        positive=False,
        random_state=None,
        selection='random'
    )
    return MultiOutputRegressor(estimator, n_jobs=4)

Blocked and Time Series Splits Cross-Validation

The best way to grasp the intuition behind blocked and time series splits is by visualizing them. The three split methods are depicted in the above diagram. The horizontal axis is the training set size, while the vertical axis represents the cross-validation iterations. The folds used for training are depicted in blue and the folds used for validation are depicted in orange. You can intuitively interpret the horizontal axis as a time progression line, since we haven't shuffled the dataset and have maintained the chronological order.

The idea for time series splits is to divide the training set into two folds at each iteration, on the condition that the validation set is always ahead of the training split. At the first iteration, one trains the candidate model on the closing prices from January to March and validates on April's data; for the next iteration, one trains on data from January to April and validates on May's data; and so on to the end of the training set. This way, dependence is respected. However, this may introduce leakage from future data to the model: the model will observe future patterns to forecast and try to memorize them. That's why blocked cross-validation was introduced. It works by adding margins at two positions. The first is between the training and validation folds, in order to prevent the model from observing lag values which are used twice, once as a regressor and another time as a response. The second is between the folds used at each iteration, in order to prevent the model from memorizing patterns from one iteration to the next.

Implementing k-fold cross-validation using sci-kit learn is pretty straightforward, but in the following lines of code we pass the k-fold splitter explicitly, as we will develop the idea further in order to implement other kinds of cross-validation:

model = build_model(_alpha=1.0, _l1_ratio=0.3)
kfcv = KFold(n_splits=5)
scores = cross_val_score(model, X_train, y_train, cv=kfcv, scoring=r2)
print("Loss: {0:.3f} (+/- {1:.3f})".format(scores.mean(), scores.std()))

This outputs:

Loss: -103.076 (+/- 205.979)

The same applies to the time series splitter, as follows:

model = build_model(_alpha=1.0, _l1_ratio=0.3)
tscv = TimeSeriesSplit(n_splits=5)
scores = cross_val_score(model, X_train, y_train, cv=tscv, scoring=r2)
print("Loss: {0:.3f} (+/- {1:.3f})".format(scores.mean(), scores.std()))

This outputs:

Loss: -9.799 (+/- 19.292)

Sci-kit learn gives us the luxury of defining any new types of splitters, as long as we abide by its splitter API and inherit from the base splitter:

class BlockingTimeSeriesSplit():
    def __init__(self, n_splits):
        self.n_splits = n_splits

    def get_n_splits(self, X, y, groups):
        return self.n_splits

    def split(self, X, y=None, groups=None):
        n_samples = len(X)
        k_fold_size = n_samples // self.n_splits
        indices = np.arange(n_samples)

        margin = 0
        for i in range(self.n_splits):
            start = i * k_fold_size
            stop = start + k_fold_size
            mid = int(0.8 * (stop - start)) + start
            yield indices[start: mid], indices[mid + margin: stop]

Then we can use it exactly the same way as before.
model = build_model(_alpha=1.0, _l1_ratio=0.3)
btscv = BlockingTimeSeriesSplit(n_splits=5)
scores = cross_val_score(model, X_train, y_train, cv=btscv, scoring=r2)
print("Loss: {0:.3f} (+/- {1:.3f})".format(scores.mean(), scores.std()))

This outputs:

Loss: -15.527 (+/- 27.488)

Please notice how the loss differs among the different types of splitters. In order to interpret the results correctly, let's put them to the test by using grid search cross-validation to find the optimal values for both the regularization parameter alpha and the l1_ratio parameter that controls how much the L1-norm contributes to the regularization. It follows that the L2-norm contributes 1 - l1_ratio.

params = {
    'estimator__alpha': (0.1, 0.3, 0.5, 0.7, 0.9),
    'estimator__l1_ratio': (0.1, 0.3, 0.5, 0.7, 0.9)
}

for i in range(100):
    model = build_model(_alpha=1.0, _l1_ratio=0.3)
    finder = GridSearchCV(
        estimator=model,
        param_grid=params,
        scoring=r2,
        fit_params=None,
        n_jobs=None,
        iid=False,
        refit=False,
        cv=kfcv,  # change this to the splitter subject to test
        verbose=1,
        pre_dispatch=8,
        error_score=-999,
        return_train_score=True
    )
    finder.fit(X_train, y_train)
    best_params = finder.best_params_

Experimental Results

K-Fold Cross-Validation Optimal Parameters

Grid-search cross-validation was run 100 times in order to objectively measure the consistency of the results obtained using each splitter. This way, we can evaluate the effectiveness and robustness of the cross-validation method on time series forecasting. As for k-fold cross-validation, the parameters suggested were almost uniform. That is, it did not really help us in discriminating the optimal parameters, since all were equally good or bad.

Time Series Split Cross-Validation Optimal Parameters

Blocked Cross-Validation Optimal Parameters

However, in both the cases of time series split cross-validation and blocked cross-validation, we have obtained a clear indication of the optimal values for both parameters. In the case of blocked cross-validation, the results were even more discriminative, as the blue bar indicates the dominance of the l1_ratio optimal value of 0.1.

Ground Truth vs Forecasting

After having obtained the optimal values for our model parameters, we can train the model and evaluate it on the testing set. The results, as depicted in the plot above, indicate smooth capture of the trend and a minimum error rate.

# optimal model
model = build_model(_alpha=0.1, _l1_ratio=0.1)

# train model
model.fit(X_train, y_train)

# test score
y_predicted = model.predict(X_test)
score = r2_score(y_test, y_predicted, multioutput='uniform_average')
print("Test Loss: {0:.3f}".format(score))

The output is:

Test Loss: 0.925

Ideas for the Curious

In this tutorial, we have demonstrated the power of using the right cross-validation strategy for time series forecasting. The beauty of machine learning is endless. Here are a few ideas to try out and experiment with on your own:

- Try using a different, more volatile data set
- Try using different lag and target lengths instead of 64 and 8 days each
- Try different regression models
- Try different loss functions
- Try RNN models using Keras
- Try increasing or decreasing the blocked splits margins
- Try a different value for k in cross-validation

References

Jeff Racine, Consistent cross-validatory model-selection for dependent data: hv-block cross-validation, Journal of Econometrics, Volume 99, Issue 1, 2000, Pages 39-61, ISSN 0304-4076.

Dabbs, Beau & Junker, Brian. (2016). Comparison of Cross-Validation Methods for Stochastic Block Models.
Marcos Lopez de Prado, 2018, Advances in Financial Machine Learning (1st ed.), Wiley Publishing.

Doctor, Grado DE et al. "New approaches in time series forecasting: methods, software, and evaluation procedures." (2013).

Learn More

Seize the chance to learn more about time series forecasting techniques, machine learning, trading strategies, and algorithmic trading in my step-by-step online video course: Hands-on Machine Learning for Algorithmic Trading Bots with Python on PacktPub.

Author Bio

Mustafa Qamar-ud-Din is a machine learning engineer with over 10 years of experience in the software development industry, engaged with startups on solving problems in various domains, such as e-commerce applications, recommender systems, biometric identity control, and event management.

Time series modeling: What is it, Why it matters and How it's used
Implementing a simple Time Series Data Analysis in R
Training RNNs for Time Series Forecasting


What is a multi layered software architecture?

Packt Editorial Staff
17 May 2018
7 min read
Multi layered software architecture is one of the most popular architectural patterns today. It helps manage the increasing complexity of modern applications. It also makes it easier to work in a more agile manner. That's important when you consider the dominance of DevOps and other similar methodologies today. Sometimes called tiered architecture, or n-tier architecture, a multi layered software architecture consists of various layers, each of which corresponds to a different service or integration. Because each layer is separate, making changes to each layer is easier than having to tackle the entire architecture. Let's take a look at how a multi layered software architecture works, and what its advantages and disadvantages are. This has been taken from the book Architectural Patterns. Find it here.

What does a layered software architecture consist of?

Before we get into multi layered architecture, let's start with the simplest form of layered architecture: three tiered architecture. This is a good place to start because all layered software architectures contain these three elements. These are the foundations:

Presentation layer: This is the first and topmost layer present in the application. This tier provides presentation services, that is, the presentation of content to the end user through a GUI. This tier can be accessed through any type of client device, like a desktop, laptop, tablet, mobile, thin client, and so on. For the content to be displayed to the user, the relevant web pages should be fetched by the web browser or another presentation component running on the client device. To present the content, it is essential for this tier to interact with the tiers beneath it.

Application layer: This is the middle tier of this architecture. This is the tier in which the business logic of the application runs. Business logic is the set of rules that are required for running the application as per the guidelines laid down by the organization. The components of this tier typically run on one or more application servers.

Data layer: This is the lowest tier of this architecture and is mainly concerned with the storage and retrieval of application data. The application data is typically stored in a database server, file server, or any other device or media that supports data access logic and provides the necessary steps to ensure that only the data is exposed, without providing any access to the data storage and retrieval mechanisms. This is done by the data tier providing an API to the application tier. The provision of this API ensures complete transparency for the data operations done in this tier, without affecting the application tier. For example, updates or upgrades to the systems in this tier do not affect the application tier of this architecture.

The diagram below shows how a simple layered architecture with 3 tiers works:

These three layers are essential. But other layers can be built on top of them. That's when we get into multi layered architecture. It's sometimes called n-tiered architecture because the number of tiers or layers (n) could be anything! It depends on what you need and how much complexity you're able to handle.

Multi layered software architecture

A multi layered software architecture still has the presentation layer and data layer. It simply splits up and expands the application layer. These additional aspects within the application layer are essentially different services.
This means your software should now be more scalable and have extra dimensions of functionality. Of course, the distribution of application code and functions among the various tiers will vary from one architectural design to another, but the concept remains the same. The diagram below illustrates what a multi layered software architecture looks like. As you can see, it's a little more complex than a three-tiered architecture, but it does increase scalability quite significantly:

What are the benefits of a layered software architecture?

A layered software architecture has a number of benefits - that's why it has become such a popular architectural pattern in recent years. Most importantly, tiered segregation allows you to manage and maintain each layer accordingly. In theory, it should greatly simplify the way you manage your software infrastructure. The multi layered approach is particularly good for developing web-scale, production-grade, and cloud-hosted applications very quickly and relatively risk-free. It also makes it easier to update any legacy systems - when your architecture is broken up into multiple layers, the changes that need to be made should be simpler and less extensive than they might otherwise have to be.

When should you use a multi layered software architecture?

Clearly, the argument for a multi layered software architecture is pretty clear. However, there are some instances when it is particularly appropriate:

- If you are building a system in which it is possible to split the application logic into smaller components that could be spread across several servers. This could lead to the design of multiple tiers in the application tier.
- If the system under consideration requires faster network communications, high reliability, and great performance, then n-tier has the capability to provide that, as this architectural pattern is designed to reduce the overhead caused by network traffic.

An example of a multi layered software architecture

We can illustrate the working of a multi layered architecture with the example of a shopping cart web application, which is present in all e-commerce sites. The shopping cart web application is used by the e-commerce site user to complete the purchase of items through the e-commerce site. You'd expect the application to have several features that allow the user to:

- Add selected items to the cart
- Change the quantity of items in their cart
- Make payments

The client tier, which is present in the shopping cart application, interacts with the end user through a GUI. The client tier also interacts with the application that runs in the application servers present in multiple tiers. Since the shopping cart is a web application, the client tier contains the web browser. The presentation tier present in the shopping cart application displays information related to services like browsing merchandise, buying items, adding them to the shopping cart, and so on. The presentation tier communicates with the other tiers by sending results to the client tier and all the other tiers present in the network. The presentation tier also makes calls to database stored procedures and web services. All these activities are done with the objective of providing a quick response time to the end user.
The presentation tier plays a vital role by acting as the glue that binds the entire shopping cart application together, allowing the functions present in different tiers to communicate with each other and displaying the outputs to the end user through the web browser. In this multi layered architecture, the business logic required for processing activities like the calculation of shipping costs is pulled from the application tier into the presentation tier. The application tier also acts as the integration layer and allows the applications to communicate seamlessly with both the data tier and the presentation tier. The last tier, the data tier, is used to maintain data. This layer typically contains database servers. It maintains data independently of the application server and the business logic. This approach provides enhanced scalability and performance to the data tier.

Read next

Microservices and Service Oriented Architecture
What is serverless architecture and why should I be interested?