Red Team Tactics: Getting started with Cobalt Strike [Tutorial]

Savia Lobo
12 Jan 2019
15 min read
According to cobaltstrike.com: "Cobalt Strike is a software for Adversary Simulations and Red Team Operations. Adversary Simulations and Red Team Operations are security assessments that replicate the tactics and techniques of an advanced adversary in a network. While penetration tests focus on unpatched vulnerabilities and misconfigurations, these assessments benefit security operations and incident response."

This tutorial is an excerpt taken from the book Hands-On Red Team Tactics, written by Himanshu Sharma and Harpreet Singh. The book demonstrates advanced methods of post-exploitation using Cobalt Strike and introduces you to Command and Control (C2) servers and redirectors. In this article, you will learn what Cobalt Strike is, how to set it up, and how its interface is organized.

Before installing Cobalt Strike, make sure that you have Oracle Java version 1.7 or above installed. You can check whether Java is installed by executing the following command:

    java -version

If you receive a "java command not found" error or another related error, you need to install Java on your system. You can download it from https://www.java.com/en/.

Cobalt Strike comes as a package that consists of client and server files. To start with the setup, we need to run the team server script, which is located in the same directory as the rest of the package files.

What is a team server?

This is the main controller for the payloads that are used in Cobalt Strike. It logs all of the events that occur in Cobalt Strike, and it collects all the credentials that are discovered in the post-exploitation phase or used by the attacker on the target systems to log in. It is a simple bash script that calls the Metasploit RPC service (msfrpcd) and starts the server with cobaltstrike.jar; the script can be customized according to your needs. Cobalt Strike works on a client-server model in which the red-teamer connects to the team server via the Cobalt Strike client. All the connections (bind/reverse) to/from the victims are managed by the team server.

The system requirements for running the team server are as follows:

System requirements:
- 2 GHz+ processor
- 2 GB RAM
- 500 MB+ available disk space

Amazon EC2:
- At least a high-CPU medium (c1.medium, 1.7 GB) instance

Supported operating systems:
- Kali Linux 1.0, 2.0 – i386 and AMD64
- Ubuntu Linux 12.04, 14.04 – x86 and x86_64

The Cobalt Strike client supports:
- Windows 7 and above
- macOS X 10.10 and above
- Kali Linux 1.0, 2.0 – i386 and AMD64
- Ubuntu Linux 12.04, 14.04 – x86 and x86_64

The team server needs at least two mandatory arguments in order to run. The first is host, an IP address that is reachable from the internet; if you are behind a home router, you can port forward the listener's port on the router. The second mandatory argument is password, which will be used by the team server for authentication. The third and fourth arguments specify a Malleable C2 communication profile and a kill date for the payloads (both are optional). A Malleable C2 profile is a simple program that specifies how to transform data and store it in a transaction, and it is one of Cobalt Strike's most powerful features.
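Putting those arguments together, the general form of the start-up commands looks like the sketch below (the profile path and kill-date value are illustrative placeholders; consult the Cobalt Strike documentation for the exact formats supported by your version):

    # Confirm that Java 1.7+ is available
    java -version

    # host and password are mandatory; the Malleable C2 profile and kill date are optional
    sudo ./teamserver <reachable_ip> <team_password> [/path/to/malleable.profile] [kill_date]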
The team server must run with root privileges so that it can start listeners on system ports (port numbers 0-1023); otherwise, you will see a Permission denied error on the team server console when attempting to start a listener. Now that the concept of the team server has been explained, we can move on to the next topic: setting up a team server and accessing it through the Cobalt Strike client.

Cobalt Strike setup

The team server can be run using the following command:

    sudo ./teamserver 192.168.10.122 harry@123

Here, I am using the IP 192.168.10.122 as my team server and harry@123 as my password for the team server. If the command runs without errors and prints the SHA256 hash of the server's SSL certificate, your team server is running successfully. Of course, the SHA256 hash for the SSL certificate used by the team server will be different each time it runs on your system, so don't worry if the hash changes each time you start the server.

Upon successfully starting the server, we can now get on with the client. To run the client, use the following command:

    java -jar cobaltstrike.jar

This command opens the connect dialog, which is used to connect to the Cobalt Strike team server. At this point, you need to provide the team server IP, the Port number (50050 by default), the User (which can be any name of your choice), and the Password for the team server. The client will connect to the team server when you press the Connect button.

Upon successful authorization, you will see a team server fingerprint verification window. This window shows the SHA256 hash of the SSL certificate that was generated by the team server at runtime, and it should match the hash printed on the team server console. This verification only happens once, during the initial stages of connection; if you see this window again, either your team server has been restarted or you are connecting to a different server. This is a precautionary measure for preventing Man-in-the-Middle (MITM) attacks. Once the connection is established with the team server, the Cobalt Strike client will open.

Let's look further at the Cobalt Strike interface so that you can use it to its full potential in a red-team engagement.

Cobalt Strike interface

The user interface for Cobalt Strike is divided into two horizontal sections: the visualization tab and the display tab. The top of the interface shows the visualization tab, which visually displays all the sessions and targets to give a better understanding of the network of the compromised host. The bottom of the interface shows the display tab, which is used to display the Cobalt Strike features and sessions for interaction.

Toolbar

Common Cobalt Strike features are accessible at the click of a button.
The toolbar offers you all the common functions to speed up your Cobalt Strike usage: Each feature in the toolbar is as follows: Connecting to another team server In order to connect to another team server, you can click on the + sign, which will open up the connect window: All of the previous connections will be stored as a profile and can be called for connection again in the connect window: Disconnecting from the team server By clicking on the minus (–) sign, you will be disconnected from the current instance of the team server: You will also see a box just above the server switchbar that says Disconnected from team server. Once you disconnect from the instance, you can close it and continue the operations on the other instance. However, be sure to bear in mind that once you close the tab after disconnection, you will lose all display tabs that were open on that particular instance. What's wrong with that? This may cause some issues. This is because in a red-team operation you do not always have the specific script that will execute certain commands and save the information in the database. In this case, it would be better to execute the command on a shell and then save the output on Notepad or Sublime. However, not many people follow this practice, and hence they lose a lot of valuable information. You can now imagine how heart-breaking it can be to close the instance in case of disconnection and find that all of your shell output (which was not even copied to Notepad) is gone! Configure listeners For a team server to function properly, you need to configure a listener. But before we can do this, we need to know what a listener actually is. Just like the handler used in Metasploit (that is, exploit/multi/handler), the Cobalt Strike team server also needs a handler for handling the bind/reverse connections to and from the target/victim's system/server. You can configure a listener by clicking on the headphones-like icon: After clicking the headphones icon, you'll open the Listeners tab in the bottom section. Click on the Add button to add a new listener: You can choose the type of payload you want to listen for with the Host IP address and the port to listen on for the team server or the redirector: In this case, we have used a beacon payload, which will be communicating over SSL. Beacon payloads are a special kind of payload in Cobalt Strike that may look like a generic meterpreter but actually have much more functionality than that. Beacons will be discussed in more detail in further chapters. As a beacon uses HTTP/S as the communication channel to check for the tasking allotted to it, you'll be asked to give the IP address for the team server and domain name in case any redirector is configured (Redirectors will be discussed in more details in further chapters): Once you're done with the previous step, you have now successfully configured your listener. Your listener is now ready for the incoming connection: Session graphs To see the sessions in a graph view, you can click the button shown in the following screenshot: Session graphs will show a graphical representation of the systems that have been compromised and injected with the payloads. In the following screenshot, the system displayed on the screen has been compromised. 
PT is the user, PT-PC is the computer name (hostname), and the numbers just after the @ are the PIDs of the processes that have the payload injected into them: When you escalate the privileges from a normal user to NT AUTHORITY\SYSTEM (vertical privilege escalation), the session graph will show the system in red and surrounded by lightning bolts. There is also another thing to notice here: the * (asterisk) just after the username. This means that the system with PID 1784 is escalated to NT AUTHORITY\SYSTEM: Session table To see the open sessions in a tabular view, click on the button shown in the following screenshot: All the sessions that are opened in Cobalt Strike will be shown along with the sessions' details. For example, this may include external IP, internal IP, user, computer name, PID into which the session is injected, or last. Last is an element of Cobalt Strike that is similar to WhatsApp's Last Seen feature, showing the last time that the compromised system contacted the team server (in seconds). This is generally used to check when the session was last active: Right-clicking on one of the sessions gives the user multiple options to interact with, as demonstrated in the following screenshot: These options will be discussed later in the book. Targets list To view the targets, click on the button shown in the following screenshot: Targets will only show the IP address and the computer name, as follows: For further options, you can right-click on the target: From here, you can interact with the sessions opened on the target system. As you can see in the preceding screenshot, PT@2908 is the session opened on the given IP and the beacon payload resides in the PID 2908. Consequently, we can interact with this session directly from here: Credentials Credentials such as web login passwords, password hashes extracted from the SAM file, plain-text passwords extracted using mimikatz, etc. are retrieved from the compromised system and are saved in the database. They can be displayed by clicking on the icon shown in the following screenshot: When you perform a hashdump in Metasploit (a post-exploitation module that dumps all NTLM password hashes from the SAM database), the credentials are saved in the database. With this, when you dump hashes in Cobalt Strike or when you use valid credentials to log in, the credentials are saved and can be viewed from here: Downloaded files To view all the exfiltrated data from the target system, you can click on the button shown in the following screenshot: This will show the files (exfiltration) that were downloaded from the target system: Keystrokes This option is generally used when you have enabled a keylogger in the beacon. The keylogger will then log the keystrokes and send it to the beacon. To use this option, click the button shown in the following screenshot: When a user logs into the system, the keylogger will log all the keystrokes of that user (explorer.exe is a good candidate for keylogging). So, before you enable the keylogger from the beacon, migrate or inject a new beacon into the explorer.exe process and then start the keylogger. Once you do this, you can see that there's a new entry in the Keystrokes tab: The left side of the tab will show the information related to the beacon. This may include the user, the computer name, the PID in which the keylogger is injected, and the timestamp when the keylogger sends the saved keystrokes to the beacon. In contrast, the right side of the tab will show you the keystrokes that were logged. 
Screenshots To view the screenshots from the target system, click on the button shown in the following screenshot: This will open up the tab for screenshots. Here, you will get to know what's happening on the system's screen at that moment itself. This is quite helpful when a server administrator is logged in to the system and works on Active Directory (AD) and Domain Controller (DC) settings. When monitoring the screen, we can find crucial information that can lead to DC compromise: To know about Payload generation in stageless Windows executable, Java signed applet, and MS Office macros, head over to the book for a complete overview. Scripted web delivery This technique is used to deliver the payload via the web. To continue, click on the button shown in the following screenshot: A scripted web delivery will deliver the payload to the target system when the generated command/script is executed on the system. A new window will open where you can select the type of script/command that will be used for payload delivery. Here, you also have the option to add the listener accordingly: File hosting Files that you want to host on a web server can also be hosted through the Cobalt Strike team server. To host a file through the team server, click on the button shown in the following screenshot: This will bring up the window where you can set the URI, the file you want to host, the web server's IP address and port, and the MIME type. Once done, you can download the same file from the Cobalt Strike team server's web server. You can also provide the IP and port information of your favorite web redirector. This method is generally used for payload delivery: Managing the web server The web server running on the team server, which is generally used for file hosting and beacons, can be managed as well. To manage the web server, click on the button shown in the following screenshot: This will open the Sites tab where you can find all web services, the beacons, and the jobs assigned to those running beacons. You can manage the jobs here: Server switchbar The Cobalt Strike client can connect to multiple team servers at the same time and you can manage all the existing connections through the server switchbar. The switchbar allows you to switch between the server instances: You can also rename the instances according to the role of the server. To do this, simply right-click on the Instance tab and you'll get two options: Rename and Disconnect: You need to click on the Rename button to rename the instance of your choice. Once you click this button, you'll be prompted for the new name that you want to give to your instance: For now, we have changed this to EspionageServer: Renaming the switchbar helps a lot when it comes to managing multiple sessions from multiple team servers at the same time. To know more about how to customize a team server head over to the book. To summarize, we got to know what a team server is, how to setup Cobalt Strike and about the Cobalt Strike Interface. If you've enjoyed reading this, head over to the book, Hands-On Red Team Tactics to know about advanced penetration testing tools, techniques to get reverse shells over encrypted channels, and processes for post-exploitation. 
Further reading:
- “All of my engineering teams have a machine learning feature on their roadmap” – Will Ballard talks artificial intelligence in 2019 [Interview]
- IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
- Facebook releases DeepFocus, an AI-powered rendering system to make virtual reality more real

Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?

Bhagyashree R
01 Oct 2018
5 min read
Last week, Evan You, the creator of Vue.js, gave a summary of what to expect in the coming major release, Vue.js 3.0. To provide better support for TypeScript, the codebase is being rewritten in TypeScript, leaving behind vanilla JS. The new codebase currently targets evergreen browsers such as Google Chrome and assumes baseline native ES2015 support. Let's see what else we will see in this major iteration:

High-level API changes

- Template syntax will not see many changes, except for some tweaks to the scoped slots syntax.
- Vue.js 3.0 will come with native support for class-based components. This will provide users with an API that is pleasant to use in native ES2015 without the need for any transpilation or stage-x features.
- The Vue.js 3.x codebase will be written in TypeScript, providing improved support for TypeScript.
- Support for the 2.x object-based component format will be provided by internally transforming the object to a corresponding class.
- Functional components can now be plain functions; however, async components will need to be explicitly created via a helper function.
- The virtual DOM format used in render functions will see major changes. Upgrading will be easier if you don't heavily rely on handwritten (non-JSX) render functions in your app.
- Mixins will still be supported.

Cleaner and more maintainable source code architecture

To make contributing to Vue.js easier, Vue.js 3.0 is being rewritten from the ground up for a cleaner and more maintainable architecture. To do this, the developers are breaking some internal functionality into individual packages to isolate the scope of complexity. For example, the observer module will be converted to its own package, with its own public API and tests. As mentioned earlier, the codebase is being rewritten in TypeScript. This makes proficiency in TypeScript a primary prerequisite for contributing to the new codebase. However, the type information and IDE support will enable a new contributor to easily make meaningful contributions.

Proxy-based observation mechanism

Vue.js 3.0 will come with a Proxy-based observer implementation that provides reactivity tracking with full language coverage. This aims to eliminate a number of limitations of the current implementation in Vue.js 2, which is based on Object.defineProperty:

- Detection of property addition or deletion
- Detection of Array index mutation or .length mutation
- Support for Map, Set, WeakMap and WeakSet

Additionally, this new observer will have the following features:

- Exposed API for creating observables: This provides a lightweight and simple cross-component state management solution for small to medium scale scenarios.
- Lazy observation by default: In Vue.js 3.x, only the data used to render the initially visible part of an app will need to be observed. This will eliminate the overhead on app startup if your dataset is huge.
- Immutable observables: Immutable versions of a value can be created to prevent mutations even on nested properties, except when the system temporarily unlocks it internally.
- Better debugging capabilities: Two new hooks, renderTracked and renderTriggered, are added. These will help you precisely trace when and why a component re-render is tracked or triggered.
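The snippet below is not Vue's implementation, just a minimal JavaScript sketch of why a Proxy removes the Object.defineProperty limitations listed above: property additions, deletions, array index writes, and .length changes all pass through the same handler traps (the console.log calls stand in for a framework's internal track/trigger bookkeeping):

    // Minimal sketch (not Vue's actual code): a Proxy sees every read, write, and delete,
    // including added properties, array index writes, and .length changes.
    function reactive(target) {
      return new Proxy(target, {
        get(obj, key, receiver) {
          console.log('tracked read:', key);        // stand-in for dependency tracking
          return Reflect.get(obj, key, receiver);
        },
        set(obj, key, value, receiver) {
          const ok = Reflect.set(obj, key, value, receiver);
          console.log('triggered write:', key);     // stand-in for re-render triggering
          return ok;
        },
        deleteProperty(obj, key) {
          const ok = Reflect.deleteProperty(obj, key);
          console.log('triggered delete:', key);
          return ok;
        }
      });
    }

    const state = reactive({ count: 0 });
    state.count++;          // read and write are both observed
    state.newProp = true;   // property addition is observed (not possible with Object.defineProperty)
    delete state.newProp;   // deletion is observed

    const list = reactive([1, 2, 3]);
    list[3] = 4;            // array index mutation is observed
    list.length = 0;        // .length mutation is observed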
Other runtime improvements

Smaller runtime
The new codebase is designed to be tree-shaking friendly. The built-in components and directive runtime helpers will be imported on demand and are tree-shakable. As a result, the constant baseline size for the new runtime is <10 KB gzipped.

Improved performance
On initial benchmarks, the developers are observing up to 100% performance improvement across the board. Vue.js 3.0 will reduce the time spent in JavaScript when your app boots up.

Built-in support for Fragments and Portals
Vue 3.0 will come with built-in support for Fragments and Portals. Fragments are components that return multiple root nodes. Portals are introduced to render a sub-tree in another part of the DOM, instead of inside the component.

Improved slots mechanism
All compiler-generated slots are now functions and are invoked during the child component's render call. This ensures that dependencies in slots are collected as dependencies of the child instead of the parent. This means that when slot content changes, only the child re-renders, and when the parent re-renders, the child does not have to re-render if its slot content did not change. This improvement provides even more precise change detection at the component tree level.

Custom Renderer API
Using this API, you will be able to create custom renderers. With this API, it will be easier for render-to-native projects like Weex and NativeScript Vue to stay up to date with upstream changes. It will also make it much easier to create custom renderers for various other purposes.

Along with these, they have announced a few compiler improvements and IE11 support. They haven't revealed a date yet, but we can expect Vue.js 3.0 to be released in 2019. To know more, check out their official announcement on Medium.

Further reading:
- Vue CLI 3.0 is here as the standard build toolchain behind Vue applications
- Introducing Vue Native for building native mobile apps with Vue.js
- Testing Single Page Applications (SPAs) using Vue.js developer tools

Google Forms for Multiple Choice and Fill-in-the-blank Assignments

Packt
28 Sep 2016
14 min read
In this article by Michael Zhang, the author of the book Teaching with Google Classroom, we will see how to create multiple choice and fill-in-the-blank assignments using Google Forms. (For more resources related to this topic, see here.) The third-party app, Flubaroo will help grade multiple choice, fill-in-the-blank, and numeric questions. Before you can use Flubaroo, you will need to create the assignment and deploy it on Google Classroom. The Form app of Google within the Google Apps for Education (GAFE) allows you to create online surveys, which you can use as assignments. Google Forms then outputs the values of the form into a Google Sheet, where the Google Sheet add-on, Flubaroo grades the assignment. After using Google Forms and Flubaroo for assignments, you may decide to also use it for exams. However, while Google Forms provides a means for creating the assessment and Google Classroom allows you to easily distribute it to your students, there is no method to maintain security of the assessment. Therefore, if you choose to use this tool for summative assessment, you will need to determine an appropriate level of security. (Often, there is nothing that prevents students from opening a new tab and searching for an answer or from messaging classmates.) For example, in my classroom, I adjusted the desks so that there was room at the back of the classroom to pace during a summative assessment. Additionally, some school labs include a teacher's desktop that has software to monitor student desktops. Whatever method you choose, take precautions to ensure the authenticity of student results when assessing students online. Google Forms is a vast Google App that requires its own book to fully explore its functionality. Therefore, the various features you will explore in this article will focus on the scope of creating and assessing multiple choice and fill-in-the-blank assignments. However, once you are familiar with Google Forms, you will find additional applications. For example, in my school, I work with the administration to create forms to collect survey data from stakeholders such as staff, students, and parents. Recently, for our school's annual Open House, I created a form to record the number of student volunteers so that enough food for the volunteers would be ordered. Also, during our school's major fundraiser, I developed a Google Form for students to record donations so that reports could be generated from the information more quickly than ever before. The possibilities of using Google Forms within a school environment are endless! In this article, you will explore the following topics: Creating an assignment with Google Forms Installing the Flubaroo Google Sheets add-on Assessing an assignment with Flubaroo Creating a Google Form Since Google Forms is not as well known as apps such as Gmail or Google Calendar, it may not be immediately visible in the App Launcher. To create a Google Form, follow these instructions: In the App Launcher, click on the More section at the bottom: Click on the Google Forms icon: If there is still no Google Forms app icon, open a new tab and type forms.google.com into the address bar. Click on the Blank template to create a new Google Form: Google Forms has a recent update to Google's Material Design interface. This article will use screenshots from the new Google Forms look. 
Therefore, if you see a banner with Try the new Google Forms, click on the banner to launch the new Google Forms App: To name the Google Form, click on Untitled form in the top-left corner and type in the name. This will also change the name of the form. If necessary, you can click on the form title to change the title afterwards: Optionally, you can add a description to the Google Form directly below the form title: Often, I use the description to provide further instructions or information such as time limit, whether dictionaries or other reference books are permissible or even website addresses to where they can find information related to the assignment. Adding questions to a Google form By default, each new Google Form will already have a multiple choice card inserted into the form. In order to access the options, click on anywhere along the white area beside Untitled Question: The question will expand to form a question card where you can make changes to the question: Type the question stem in the Untitled Question line. Then, click on Option 1 to create a field to change it to a selection: To add additional selectors, click on the Add option text below the current selector or simply press the Enter key on the keyboard to begin the next selector. Because of the large number of options in a question card, the following screenshot provides a brief description of these options: A: Question title B: Question options C: Move the option indicator. Hovering your mouse over an option will show this indicator that you can click and drag to reorder your options. D: Move the question indicator. Clicking and dragging this indicator will allow you to reorder your questions within the assignment. E: Question type drop-down menu. There are several types of questions you can choose from. However, not all will work with the Flubaroo grading add-on. The following screenshot displays all question types available: F: Remove option icon. G: Duplicate question button. Google Forms will make a copy of the current question. H: Delete question button. I: Required question switch. By enabling this option, students must answer this question in order to complete the assignment. J: More options menu. Depending on the type of question, this section will provide options to enable a hint field below the question title field, create nonlinear multiple choice assignments, and validate data entered into a specific field. Flubaroo grades the assignment from the Google Sheet that Google Forms creates. It matches the responses of the students with an answer key. While there is tolerance for case sensitivity and a range of number values, it cannot effectively grade answers in the sentence or paragraph form. Therefore, use only short answers for the fill-in-the-blank or numerical response type questions and avoid using paragraph questions altogether for Flubaroo graded assignments. Once you have completed editing your question, you can use the side menu to add additional questions to your assignment. You can also add section headings, images, YouTube videos, and additional sections to your assignment. The following screenshot provides a brief legend for the icons:   To create a fill-in-the-blank question, use the short answer question type. When writing the question stem, use underscores to indicate where the blank is in the question. You may need to adjust the wording of your fill-in-the-blank questions when using Google Forms. 
Following is an example of a fill-in-the-blank question: Identify your students Be sure to include fields for your students name and e-mail address. The e-mail address is required so that Flubaroo can e-mail your student their responses when complete. Google Forms within GAFE also has an Automatically collect the respondent's username option in the Google Form's settings, found in the gear icon. If you use the automatic username collection, you do not need to include the name and e-mail fields. Changing the theme of a Google form Once you have all the questions in your Google Form, you can change the look and feel of the Google Form. To change the theme of your assignment, use the following steps: Click on the paint pallet icon in the top-right corner of the Google Form: For colors, select the desired color from the options available. If you want to use an image theme, click on the image icon at the bottom right of the menu: Choose a theme image. You can narrow the type of theme visible by clicking on the appropriate category in the left sidebar: Another option is to upload your own image as the theme. Click on the Upload photos option in the sidebar or select one image from your Google Photos using the Your Albums option. The application for Google Forms within the classroom is vast. With the preceding features, you can add images and videos to your Google Form. Furthermore, in conjunction with the Google Classroom assignments, you can add both a Google Doc and a Google Form to the same assignment. An example of an application is to create an assignment in Google Classroom where students must first watch the attached YouTube video and then answer the questions in the Google Form. Then Flubaroo will grade the assignment and you can e-mail the students their results. Assigning the Google Form in Google classroom Before you assign your Google Form to your students, preview the form and create a key for the assignment by filling out the form first. By doing this first, you will catch any errors before sending the assignment to your students, and it will be easier to find when you have to grade the assignment later. Click on the eye shaped preview icon in the top-right corner of the Google form to go to the live form: Fill out the form with all the correct answers. To find this entry later, I usually enter KEY in the name field and my own e-mail address for the e-mail field. Now the Google Form is ready to be assigned in Google Classroom. In Google Classroom, once students have submitted a Google Form, Google Classroom will automatically mark the assignment as turned in. Therefore, if you are adding multiple files to an assignment, add the Google Form last and avoid adding multiple Google Forms to a single assignment. To add a Google Form to an assignment, follow these steps: In the Google Classroom assignment, click on the Google Drive icon: Select the Google Form and click on the Add button: Add any additional information and assign the assignment. Installing Flubaroo Flubaroo, like Goobric and Doctopus, is a third-party app that provides additional features that help save time grading assignments. Flubaroo requires a one-time installation into Google Sheets before it can grade Google Form responses. While we can install the add-on in any Google Sheet, the following steps will use the Google Sheet created by Google Forms: In the Google Form, click on the RESPONSES tab at the top of the form: Click on the Google Sheets icon: A pop-up will appear. 
The default selection is to create a new Google Sheet. Click on the CREATE button: A new tab will appear with a Google Sheet with the Form's responses. Click on the Add-ons menu and select Get add-ons…: Flubaroo is a popular add-on and may be visible in the first few apps to click on. If not, search for the app with the search field and then click on it in the search results. Click on the FREE button: The permissions pop-up will appear. Scroll to the bottom and click on the Allow button to activate Flubaroo: A pop-up and sidebar will appear in Google Sheets to provide announcements and additional instructions to get started: Assessing using Flubaroo When your students have submitted their Google Form assignment, you can grade them with Flubaroo. There are two different settings for grading with it—manual and automatic. Manual grading will only grade responses when you initiate the grading; whereas, automatic grading will grade responses as they are submitted. Manual grading To assess a Google Form assignment with Flubaroo, follow these steps: If you have been following along from the beginning of the article, select Grade Assignment in the Flubaroo submenu of the Add-ons menu: If you have installed Flubaroo in a Google Sheet that is not the form responses, you will need to first select Enable Flubaroo in this sheet in the Flubaroo submenu before you will be able to grade the assignment: A pop-up will guide you through the various settings of Flubaroo. The first page is to confirm the columns in the Google Sheet. Flubaroo will guess whether the information in a column identifies the student or is graded normally. Under the Grading Options drop-down menu, you can also select Skip Grading or Grade by Hand. If the question is undergoing normal grading, you can choose how many points each question is worth. Click on the Continue button when all changes are complete: In my experience, Flubaroo accurately guesses which fields identify the student. Therefore, I usually do not need to make changes to this screen unless I am skipping questions or grading certain ones by hand. The next page shows all the submissions to the form. Click on the radio button beside the submission that is the key and then click on the Continue button: Flubaroo will show a spinning circle to indicate that it is grading the assignment. It will finish when you see the following pop-up: When you close the pop-up, you will see a new sheet created in the Google Sheet summarizing the results. You will see the class average, the grades of individual students as well as the individual questions each student answered correctly: Once Flubaroo grades the assignment, you can e-mail students the results. In the Add-ons menu, select Share Grades under the Flubaroo submenu: A new pop-up will appear. It will have options to select the appropriate column for the e-mail of each submission, the method to share grades with the students, whether to list the questions so that students know which questions they got right and which they got wrong, whether to include an answer key, and a message to the students. The methods to share grades include e-mail, a file in Google Drive, or both. Once you have chosen your selections, click on the Continue button: A pop-up will confirm that the grades have successfully been e-mailed. Google Apps has a daily quota of 2000 sent e-mails (including those sent in Gmail or any other Google App). While normally not an issue. 
If you are using Flubaroo on a large scale, such as a district-wide Google Form, this limit may prevent you from e-mailing results to students. In this case, use the Google Drive option instead. If needed, you can regrade submissions. By selecting this option in the Flubaroo submenu, you will be able to change settings, such as using a different key, before Flubaroo will regrade all the submissions. Automatic grading Automatic grading provides students with immediate feedback once they submit their assignments. You can enable automatic grading after first setting up manual grading so that any late assignments get graded. Or you can enable automatic grading before assigning the assignment. To enable automatic grading on a Google Sheet that has already been manually graded, select Enable Autograde from the Advanced submenu of Flubaroo, as shown in the following screenshot: A pop-up will appear allowing you to update the grading or e-mailing settings that were set during the manual grading. If you select no, then you will be taken through all the pop-up pages from the Manual grading section so that you can make necessary changes. If you have not graded the assignment manually, when you select Enable Autograde, you will be prompted by a pop-up to set up grading and e-mailing settings, as shown in the following screenshot. Clicking on the Ok button will take you through the setting pages shown in the preceding Manual grading section: Summary In this article, you learned how to create a Google Form, assign it in Google Classroom, and grade it with the Google Sheet's Flubaroo add-on. Using all these apps to enhance Google Classroom shows how the apps in the GAFE suite interact with each other to provide a powerful tool for you. Resources for Article: Further resources on this subject: Mapping Requirements for a Modular Web Shop App [article] Using Spring JMX within Java Applications [article] Fine Tune Your Web Application by Profiling and Automation [article]

Configuring and Securing Python LDAP Applications: Part 1

Packt
22 Oct 2009
16 min read
This article mini-series by Matt Butcher will look at the Python application programmer's interface (API) for the LDAP libraries; using this API, we will connect to our OpenLDAP server and manipulate the directory information tree. More specifically, we will cover the following in this article series:

- Installing and configuring the Python-LDAP library
- Binding to an LDAP directory
- Comparing attributes between the client and server
- Performing searches on the directory
- Modifying the directory information tree with add, delete, and modify operations
- Modifying directory passwords
- Working with LDAP schemas

This first part will deal with installation and configuration of the Python-LDAP library. We will then see how the binding operation is performed.

Installing Python-LDAP

There are a couple of LDAP libraries available for Python, but the most popular is the Python-LDAP module, which (as with the PHP API) uses the OpenLDAP C library as a base for providing network access to an LDAP server. Like OpenLDAP, the Python-LDAP API is Open Source. It works on Linux, Windows, Mac OS X, BSD, and probably other UNIX operating systems as well (platforms that have both Python and OpenLDAP available). The source code is available at the official Python-LDAP website: http://python-ldap.sourceforge.net. Pre-compiled binaries for many platforms are available there, but we will install the version in the Ubuntu repository.

Before installing Python-LDAP, you will need to have the Python scripting language installed. Typically, this is installed by default on Ubuntu (and on most flavors of Linux). Installing Python-LDAP requires only one command:

    $ sudo apt-get install python-ldap

This will choose the module or modules that match the installed Python version. That is, if you are running Python 2.4 (the stable version at the time of writing), this will install the python2.4-ldap package. The library, which consists of several Python packages, will be installed into /usr/lib/python2.4/site-packages/. In Ubuntu, there is no need to run further configuration in order to make use of the Python-LDAP library; we are ready to dive into the API. If you install by hand, either from source or from the binary packages, you may need to add the Python-LDAP library to your Python path. See the Python documentation for details.

The Python-LDAP API is well documented. The documentation is available online at the official Python-LDAP website: http://python-ldap.sourceforge.net/docs.shtml. You may find it more convenient to download a copy of the documentation and use it locally. In previous versions of Ubuntu, the Python-LDAP documentation was available in the package python-ldap-doc, which could be installed with apt-get. Also, many of the Python-LDAP functions and objects have documentation strings that can be accessed from the Python interpreter like this:

    >>> print ldap.initialize.__doc__
    Return LDAPObject instance by opening LDAP connection to
    LDAP host specified by LDAP URL

    Parameters:
    uri
        LDAP URL containing at least connection scheme and hostport,
        e.g. ldap://localhost:389
    trace_level
        If non-zero a trace output of LDAP calls is generated.
    trace_file
        File object where to write the trace output to.
        Default is to use stdout.

The documentation string usually contains a brief description of the function or object, and is a useful quick reference.

A Quick Overview of the Python LDAP API

Now that the package is installed, let's take a quick look at what was installed.
The Python-LDAP package comes with nine different modules:

- ldap: This is the main LDAP module. It contains the functions necessary for performing LDAP operations, such as binding, searching, adding, and modifying.
- ldap.async: Python can do synchronous and asynchronous transactions. This module provides utilities that are useful when performing asynchronous operations.
- ldap.cidict: This contains the cidict class, which is a case-insensitive dictionary. Although LDAP is case-insensitive when it comes to attribute names, it is often necessary to perform case-insensitive operations on dictionary keys.
- ldap.modlist: Utility functions for creating modification records (for performing the LDAP modify operation) are in this package.
- ldap.filter: This module provides a couple of utility functions for creating LDAP search filters.
- ldap.sasl: Python-LDAP's SASL support is partially contained in this package. It is not covered in the online documentation, but there are plenty of notes in the doc strings in this module.
- ldap.schema: This module contains classes that describe the subschema subentry records. It can be used to access schema information.
- ldapurl: This module provides a class for generating and parsing LDAP URLs.
- ldif: This module is used to parse or write LDIF-formatted LDAP records.

Most of the commonly used LDAP features are in the ldap module, and we will focus mainly on using that. Since many of the submodules have only a couple of functions, we will use them in passing but treat them as separate objects of discussion.

A Note on the Python Examples

The Python interpreter (python) can be run interactively. Running Python in interactive mode can be very useful for discovery and debugging. Further, since it prints useful information directly to the console, it can be useful for demonstration purposes. In many of the examples below, the code is shown as it would be entered in the interactive shell. Here is an example:

    >>> h = "Hello World"
    >>> h
    'Hello World'
    >>> print h
    Hello World

Lines that begin with >>> and ... are interpreter prompts (similar to $ in a shell). Examples with the >>> are run in the interpreter interactively. Some code, however, will be typed into a file (as usual). This code will not have lines beginning with the interpreter prompt. It tends to look more like this:

    h = "Hello World"
    print h

Most of the time, features are introduced using the interpreter, but lengthier examples are done in the form of a Python script. Where it might be confusing, I will explicitly say in the text which of the two methods I am using.

Connecting and Binding to the Directory

Now that we have the library installed, we are ready to use the API. The Python-LDAP API connects and binds in two stages. Initializing the LDAP system is done with the ldap.initialize() function. The initialize() method returns an LDAPObject object, which contains methods for performing LDAP operations and retrieving information about the LDAP connection and transactions. A basic initialization is done like this:

    >>> import ldap
    >>> con = ldap.initialize('ldap://localhost')

The first line of this example imports the ldap module, which contains the initialize() method as well as the LDAPObject that we will make frequent use of. The second line initializes the LDAP code and returns an LDAPObject that we will use to connect to the server. The initialize() function takes a simple LDAP URL (protocol://host:port) as a parameter. Sometimes, you may prefer to pass in simply host and port information.
This can be done with the connect(host, port) function, which also returns an LDAPObject object. In addition, if you need to check or set any LDAP options, you should use the get_option() and set_option() functions before binding. For instance, we can set the connection to require a TLS certificate by setting the OPT_X_TLS_DEMAND option:

    >>> con.get_option(ldap.OPT_X_TLS_DEMAND)
    0
    >>> con.set_option(ldap.OPT_X_TLS_DEMAND, True)
    >>> con.get_option(ldap.OPT_X_TLS_DEMAND)
    1

A Safe Connection

In most production environments, security is a major concern. As we have seen in previous chapters, one major component of security in network-based LDAP services is the use of SSL/TLS-based connections. There are two ways to get transport-layer security with the Python-LDAP module. The first is to connect to the LDAPS (LDAP over SSL) port. This is done by passing the correct parameter to the initialize() function: instead of using the ldap:// protocol, which will make an unverified, unencrypted connection to port 389, use the ldaps:// protocol, which will make an SSL connection to port 636 (you can specify an alternate port by appending a colon (:) and the port number to the end of the URL). Or, instead of using LDAPS, you can perform a Start TLS operation before binding to the server:

    >>> import ldap
    >>> con = ldap.initialize('ldap://localhost')
    >>> con.start_tls_s()

Note that while the call to ldap.initialize() does not actually open a connection, the call to ldap.start_tls_s() does create a connection.

Exceptions

Connecting to an LDAP server may result in the raising of an exception, so in production code, it is best to wrap the connection attempt inside of a try/except block. Here is a fragment of a script:

    #!/usr/bin/env python
    import ldap, sys

    server = 'ldap://localhost'
    l = ldap.initialize(server)

    try:
        l.start_tls_s()
    except ldap.LDAPError, e:
        print e.message['info']
        if type(e.message) == dict and e.message.has_key('desc'):
            print e.message['desc']
        else:
            print e
        sys.exit()

In the case above, if the start_tls_s() method results in an error, it will be caught. The except clause checks whether the returned message is a dict (which it should always be), and also checks whether it has the description ('desc') field. If so, it prints the description. Otherwise, it prints the entire message. There are a few dozen exceptions that the Python-LDAP library might raise, but all of them are subclasses of the LDAPError class, and can be caught by the line:

    except ldap.LDAPError, e:

Within an LDAPError object there is a dictionary, called message, which contains the 'info' and 'desc' fields. The 'info' field contains the information returned from the server, and the 'desc' field contains a description of the error. In general, it is best to use try/except blocks around LDAP operations in order to catch any errors that might occur during processing.

Binding

Once we have an LDAPObject instance, we can bind to the LDAP directory. The Python-LDAP API supports both simple and SASL binding methods, and there are five different bind methods:
- bind(): Takes three required parameters: a DN, a password (or credential, for SASL), and a string indicating what type of bind method to use. Currently, only ldap.AUTH_SIMPLE is supported. This is asynchronous. Example: con.bind(dn, pw, ldap.AUTH_SIMPLE)
- bind_s(): The same as above, but synchronous; it returns information about the status of the bind.
- simple_bind(): This performs a simple bind. It has two optional parameters: DN and password. If no parameter is specified, this will bind as anonymous. This is asynchronous.
- simple_bind_s(): This is the synchronous version of the above.
- sasl_interactive_bind_s(): This performs a SASL bind, and it takes two parameters: a SASL identifier and a SASL authentication string.

First, note that for many Python-LDAP functions, including almost all of the LDAP operations, there are both synchronous and asynchronous versions. Synchronous versions, which will block until the server returns a result, have method names that end with _s. The other operations – those that do not end with _s – are asynchronous. An asynchronous call will begin an operation and then return control to the program; the operation continues in the background, and it is the responsibility of the program to periodically check whether it has been completed.

Since they wait to return any results until the operation has been completed, synchronous methods will often have different return values than their asynchronous counterparts. Synchronous methods may return the results obtained from the server, or they may have void returns. Asynchronous methods, on the other hand, will always return a message identifier. This identifier can be used to access the results of the operation. Here's an example of the different results for the two different forms of simple bind. First, the synchronous bind:

    >>> dn = "uid=matt,ou=users,dc=example,dc=com"
    >>> pw = "secret"
    >>> con.simple_bind_s( dn, pw )
    (97, [])
    >>>

Notice that this method returns a tuple. Now, look at the asynchronous version:

    >>> con.simple_bind( dn, pw )
    8
    >>> con.result(8)
    (97, [])

In this case, the simple_bind() method returned 8 – the message identification number for the result. We can use the result() method to fetch the resulting information. The result() method returns a two-item tuple, where the first item is the status code (97 means success), and the second is a list of messages from the server. In this case, the list is empty.

Notes on getting results: There are two noteworthy caveats about fetching results. First, a particular result can only be fetched once; you cannot call result() with the same message ID multiple times. Second, you can execute multiple asynchronous operations without checking the results. The consequence of doing this is that all of the results will be stored until they are fetched. This consumes memory, and can lead to confusing results if result() or result( ldap.RES_ANY ) is called.

Later in this chapter, we will see more sophisticated uses of synchronous and asynchronous methods, but for now we will continue looking at methods of binding. The bind() and bind_s() methods work the same way as the simple bind methods, but they require a third parameter specifying which sort of authentication mechanism to use. Unfortunately, at the time of this writing, only the AUTH_SIMPLE form of binding (plain old simple bind) is supported by this mechanism:

    >>> con.bind_s( dn, pw, ldap.AUTH_SIMPLE )
    (97, [])

This performs a simple bind to the server.

Exceptions

A bind can fail for a number of reasons, the most common being that the connection failed (the CONNECT_ERROR exception) or authentication failed (INVALID_CREDENTIALS). In production code, it is a good idea to check for these exceptions using try/except blocks.
By checking for them separately, you can distinguish between, say, authentication failures and other, more serious failures:

    l = ldap.initialize(server)
    try:
        #l.start_tls_s()
        l.bind_s(user_dn, user_pw)
    except ldap.INVALID_CREDENTIALS:
        print "Your username or password is incorrect."
        sys.exit()
    except ldap.LDAPError, e:
        if type(e.message) == dict and e.message.has_key('desc'):
            print e.message['desc']
        else:
            print e
        sys.exit()

In this case, if the failure is due to the user entering the wrong DN or password, a message to that effect is printed. Otherwise, the error description provided by the LDAP library is printed.

SASL Interactive Binds

SASL is a robust authentication mechanism, but the flexibility and adaptability of SASL come at the cost of additional complexity. This additional complexity is evident in the Python-LDAP module. SASL binding is implemented differently than the other bind methods. First, there is no asynchronous version of the SASL bind method (not all thread safety issues have been worked out in this module yet). Since the SASL code is not as stable as the rest of the API, you may want to stick to simple binding (with SSL/TLS protection) rather than rely upon SASL support.

There is only one SASL binding method, sasl_interactive_bind_s(). This method takes two arguments. The first is a DN string; it is almost always left blank, since with SASL we usually authenticate with some other identifier. The second argument is an sasl object (or a subclass of an sasl object). The sasl object contains a dictionary of information that the SASL subsystem uses to perform authentication. Each different SASL mechanism is implemented as a class that is a subclass of the sasl object. There are a handful of different subclasses that come with the Python-LDAP module, though you can create your own if you need support for a different mechanism:

- cram_md5: This class implements the CRAM-MD5 SASL mechanism. A new cram_md5 object can be created with a constructor that takes the authentication ID, a password, and an optional authorization ID.
- digest_md5: This implements the DIGEST-MD5 SASL mechanism. Like cram_md5, this object can be constructed with an authentication ID, a password, and an optional authorization ID.
- gssapi: This implements the GSSAPI mechanism, and its constructor takes only the optional authorization ID. It is used to perform Kerberos V authentication.
- external: This implements the EXTERNAL SASL mechanism, which uses an underlying transport security mechanism (like SSL/TLS). Its constructor only takes the optional authorization ID.

Our LDAP server is configured to allow DIGEST-MD5 SASL connections, so we will walk through an example of performing this sort of SASL authentication:

    >>> import ldap
    >>> import ldap.sasl
    >>> user_name = "matt"
    >>> pw = "secret"
    >>>
    >>> con = ldap.initialize("ldap://localhost")
    >>> auth_tokens = ldap.sasl.digest_md5( user_name, pw )
    >>>
    >>> con.sasl_interactive_bind_s( "", auth_tokens )
    0

To begin with, we import the ldap and ldap.sasl packages, and we store the user name and password information in a couple of variables. After initializing a connection, we need to create a new sasl object, one that will contain the information necessary to perform DIGEST-MD5 authentication. We do this by constructing a new digest_md5 object:

    >>> auth_tokens = ldap.sasl.digest_md5( user_name, pw )

Now, auth_tokens points to our new SASL object. Next, we need to bind.
This is done with the sasl_interactive_bind_s() method of the LDAPObject:

    >>> con.sasl_interactive_bind_s( "", auth_tokens )

If a SASL interactive bind is successful, this method will return an integer. Otherwise, an INVALID_CREDENTIALS exception will be raised:

    >>> auth_tokens = ldap.sasl.digest_md5( "foo", pw )
    >>> try:
    ...     con.sasl_interactive_bind_s( "", auth_tokens )
    ... except ldap.INVALID_CREDENTIALS, e:
    ...     print e
    ...
    {'info': 'SASL(-13): user not found: no secret in database', 'desc': 'Invalid credentials'}

In this case, the user foo was not found in the SASL DB, and the SASL subsystem returned an error.
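To tie the pieces from this part together, here is a minimal sketch (illustrative only, not from the book) that initializes a connection, optionally starts TLS, performs a simple bind, and handles the exceptions discussed above. The server URL, DN, and password are placeholders; adjust them for your own directory:

    #!/usr/bin/env python
    # Minimal connect-and-bind sketch; server, DN, and password are placeholders.
    import ldap, sys

    server = 'ldap://localhost'
    user_dn = 'uid=matt,ou=users,dc=example,dc=com'
    user_pw = 'secret'

    con = ldap.initialize(server)
    try:
        con.start_tls_s()                    # comment out if your server does not offer TLS
        con.simple_bind_s(user_dn, user_pw)
        print "Bound to %s as %s" % (server, user_dn)
    except ldap.INVALID_CREDENTIALS:
        print "Your username or password is incorrect."
        sys.exit(1)
    except ldap.LDAPError, e:
        if type(e.message) == dict and e.message.has_key('desc'):
            print e.message['desc']
        else:
            print e
        sys.exit(1)

    con.unbind_s()                           # release the connection when finished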

Creating a simple GameManager using Unity3D

Ellison Leao
09 Jan 2015
5 min read
Using the so called "Game Managers" in games is just as common as eating when making games. Probably every game made has their natural flow: Start -> Play -> Pause -> Die -> Game Over , etc. To handle these different game states, we need a proper manager who can provide a mechanism to know when to change to state "A" to state "B" during gameplay. In this post we will show you how to create a simple game manager for Unity3D games. We will assume that you have some previous knowledge in Unity, but if you haven't get the chance to know it, please go to the Official Learn Unity page and get started. We are going to create the scripts using the C# language. 1 - The Singleton Pattern For the implementation, we will use the Singleton pattern. Why? Some reasons: One instance for all the game implementation, with no possible duplications. The instance is never destroyed on scene changes. It stores the current game state to be accessible anytime. We will not explain the design of the Singleton pattern because it's not the purpose of this post. If you wish to know more about it, you can go here. 2 - The GameManager code Create a new project on Unity and add a first csharp script called SimpleGameManager.cs and add the following code: using UnityEngine; using System.Collections; // Game States // for now we are only using these two public enum GameState { INTRO, MAIN_MENU } public delegate void OnStateChangeHandler(); public class SimpleGameManager { protected SimpleGameManager() {} private static SimpleGameManager instance = null; public event OnStateChangeHandler OnStateChange; public GameState gameState { get; private set; } public static SimpleGameManager Instance{ get { if (SimpleGameManager.instance == null){ DontDestroyOnLoad(SimpleGameManager.instance); SimpleGameManager.instance = new SimpleGameManager(); } return SimpleGameManager.instance; } } public void SetGameState(GameState state){ this.gameState = state; OnStateChange(); } public void OnApplicationQuit(){ SimpleGameManager.instance = null; } } Explaining the code in parts, we have: First we are making some enums for easily check the Game State, so for this example we will have: public enum GameState { INTRO, MAIN_MENU } Then we will have an event delegate method that we will use as a callback when a game state changes. This is ideal for changing scenes. public delegate void OnStateChangeHandler(); Moving forward we will have the gameState attribute, that is a getter for the current Game State. public GameState gameState {get; private set;} Then we will have our class. Taking a look at the singleton implementation we can see that we will use the Instance static variable to get our Game Manager current instance or create a new one if it doesn't exists. It's also interesting to see that we call the DontDestroyOnLoad method in the Game Manager instanciation. On doing that, Unity makes sure that our instance is never destroyed between scenes. The method used to change the Game State is SetGameState, which we only need to pass the GameState enum variable as the parameter. public void SetGameState(GameState state){ this.gameState = state; OnStateChange(); } It automatically sets the new gameState for the instance and call the callback OnStateChangemethod. 3 - Creating Sample Scenes For testing our new Game Manager, we will create 2 Unity scenes: Intro and Menu. The Intro scene will just show some debug messages, simulating an Intro game scene, and after 3 seconds it will change to the Menu Scene were we have the Game Menu code. 
Create a new scene called Intro and create a csharp script called Intro.cs. Put the following code into the script: using UnityEngine; using System.Collections; public class Intro : MonoBehaviour { SimpleGameManager GM; void Awake () { GM = SimpleGameManager.Instance; GM.OnStateChange += HandleOnStateChange; Debug.Log("Current game state when Awakes: " + GM.gameState); } void Start () { Debug.Log("Current game state when Starts: " + GM.gameState); } public void HandleOnStateChange () { GM.SetGameState(GameState.MAIN_MENU); Debug.Log("Handling state change to: " + GM.gameState); Invoke("LoadLevel", 3f); } public void LoadLevel(){ Application.LoadLevel("Menu"); } } You can see here that we just need to call the Game Manager instance inside the Awake method. The same initialization will happen on the others scripts, to get the current Game Manager state. After getting the Game Manager instance we set the OnStateChange event, which is load the Menu scene after 3 seconds. You can notice that the first line of the event sets the new Game State by calling the SetGameState method. If you run this scene however, you will get an error because we don't have the Menu.cs Scene yet. So let's create it! Create a new scene called Menu and add a csharp script called Menu.cs into this Scene. Add the following code to Menu.cs: using UnityEngine; using System.Collections; public class Menu : MonoBehaviour { SimpleGameManager GM; void Awake () { GM = SimpleGameManager.Instance; GM.OnStateChange += HandleOnStateChange; } public void HandleOnStateChange () { Debug.Log("OnStateChange!"); } public void OnGUI(){ //menu layout GUI.BeginGroup (new Rect (Screen.width / 2 - 50, Screen.height / 2 - 50, 100, 800)); GUI.Box (new Rect (0, 0, 100, 200), "Menu"); if (GUI.Button (new Rect (10, 40, 80, 30), "Start")){ StartGame(); } if (GUI.Button (new Rect (10, 160, 80, 30), "Quit")){ Quit(); } GUI.EndGroup(); } public void StartGame(){ //start game scene GM.SetGameState(GameState.GAME); Debug.Log(GM.gameState); } public void Quit(){ Debug.Log("Quit!"); Application.Quit(); } } We added simple Unity GUI elements for this scene just for example. Run the Intro Scene and check the Debug logs, You should see the messages when the Game State is changing from the old state to the new state and keeping the instance between scenes. And there you have it! You can add more GameStates for multiple screens like Credits, High Score, Levels, etc. The code for this examples is on github, feel free to fork and use it in your games! https://github.com/bttfgames/SimpleGameManager About this Author  Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects and contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.

Why Golang is the fastest growing language on GitHub

Sugandha Lahoti
09 Aug 2018
4 min read
Google’s Go language or alternatively Golang is currently one of the fastest growing programming languages in the software industry. Its speed, simplicity, and reliability make it the perfect choice for all kinds of developers. Now, its popularity has further gained momentum. According to a report, Go is the fastest growing language on GitHub in Q2 of 2018. Go has grown almost 7% overall with a 1.5% change from the previous Quarter. Source: Madnight.github.io What makes Golang so popular? A person was quoted on Reddit saying, “What I would have done in Python, Ruby, C, C# or C++, I'm now doing in Go.” Such is the impact of Go. Let’s see what makes Golang so popular. Go is cross-platform, so you can target an operating system of your choice when compiling a piece of code. Go offers a native concurrency model that is unlike most mainstream programming languages. Go relies on a concurrency model called CSP ( Communicating Sequential Processes). Instead of locking variables to share memory, Golang allows you to communicate the value stored in your variable from one thread to another. Go has a fairly mature package of its own. Once you install Go, you can build production level software that can cover a wide range of use cases from Restful web APIs to encryption software, before needing to consider any third party packages. Go code typically compiles to a single native binary, which basically makes deploying an application written in Go as easy as copying the application file to the destination server. Go is also being rapidly being adopted as the go-to cloud native language and by leading projects like Docker and Ethereum. It’s concurrency feature and easy deployment make it a popular choice for cloud development. Can Golang replace Python? Reddit is abuzz with people sharing their thoughts about whether Golang would replace Python. A user commented that “Writing a utility script is quicker in Go than in Python or JS. Not quicker as in performance, but in terms of raw development speed.” Another Reddit user pointed out three reasons not to use Python in a Reddit discussion, Why are people ditching python for go?: Dynamic compilation of Python can result in errors that exist in code, but they are in fact not detected. CPython really is very slow; very specifically, procedures that are invoked multiple times are not optimized to run more quickly in future runs (like pypy); they always run at the same slow speed. Python has a terrible distribution story; it's really hard to ship all your Python dependencies onto a new system. Go addresses those points pretty sharply. It has a good distribution story with static binaries. It has a repeatable build process, and it's pretty fast. In the same discussion, however, a user nicely sums it up saying, “There is nothing wrong with python except maybe that it is not statically typed and can be a bit slow, which also depends on the use case. Go is the new kid on the block, and while Go is nice, it doesn't have nearly as many libraries as python does. When it comes to stable, mature third-party packages, it can't beat python at the moment.” If you’re still thinking about whether or not to begin coding with Go, here’s a quirky rendition of the popular song Let it Go from Disney’s Frozen to inspire you. Write in Go! Write in Go! Go Cloud is Google’s bid to establish Golang as the go-to language of cloud Writing test functions in Golang [Tutorial] How Concurrency and Parallelism works in Golang [Tutorial]

How to develop a Simple To-Do List App [Tutorial]

Sugandha Lahoti
26 Sep 2018
11 min read
In this article, we will build a simple to-do list app that allows a user to add and display tasks. In this process, we will learn the following: How to build a user interface in Android Studio Working with ListViews How to work with Dialogs This article is taken from the book Learning Kotlin by building Android Applications by Eunice Adutwumwaa Obugyei and Natarajan Raman. This book will teach you programming in Kotlin including data types, flow control, lambdas, object-oriented, and functional programming while building  Android Apps. Creating the project Let's start by creating a new project in Android Studio, with the name TodoList. Select Add No Activity on the Add an Activity to Mobile screen: When the project creation is complete, create a Kotlin Activity by selecting File | New | Kotlin Activity, as shown in the following screenshot: This will start a New Android Activity wizard. On the Add an Activity to Mobile screen, select Basic Activity, as shown in the following screenshot: Now, check Launcher Activity on the Customize the Activity screen, and click the Finish button: Building your UI In Android, the code for your user interface is written in XML. You can build your UI by doing either of the following: Using the Android Studio Layout Editor Writing the XML code by hand Let's go ahead and start designing our TodoList app. Using the Android Studio layout editor Android Studio provides a layout editor, which gives you the ability to build your layouts by dragging widgets into the visual editor. This will auto-generate the XML code for your UI. Open the content_main.xml file. Make sure the Design tab at the bottom of the screen is selected, as shown in the following screenshot: To add a component to your layout, you just drag the item from the Palette on the left side of the screen. To find a component, either scroll through the items on the Palette, or click on the Palette search icon and search for the item you need. If the Palette is not showing on your screen, select View | Tool Windows | Palette to display it. Go ahead and add a ListView to your view. When a view is selected, its attributes are displayed in the XML Attributes editor on the right side of the screen. The Attributes editor allows you to view and edit the attributes of the selected component. Go ahead and make the following changes: Set the ID as list_view Change both the layout_width and layout_height attributes to match_parent If the Attributes editor is not showing; select View | Tool Windows | Attributes to display it. Now, select Text at the bottom of the editor window to view the generated XML code. You'll notice that the XML code now has a ListView placed within the ConstraintLayout:  A layout always has a root element. In the preceding code, ConstraintLayout is the root element. Instead of using the layout editor, you could have written the previous code yourself. The choice between using the layout editor or writing the XML code is up to you. You can use the option that you're most comfortable with. We'll continue to make additions to the UI as we go along. Now, build and run your code. as shown in the following screenshot: As you can see, the app currently doesn't have much to it. Let's go ahead and add a little more flesh to it. Since we'll use the FloatingActionButton as the button the user uses to add a new item to their to-do list, we need to change its icon to one that makes its purpose quite clear. 
Open the activity_main.xml file: One of the attributes of the android.support.design.widget.FloatingActionButton is app:srcCompat. This is used to specify the icon for the FloatingActionButton. Change its value from @android:drawable/ic_dialog_email to @android:drawable/ic_input_add. Build and run again. The FloatingActionButton at the bottom now looks like an Add icon, as shown in the following screenshot: Adding functionality to the user interface At the moment, when the user clicks on the Add button, a ticker shows at the bottom of the screen. This is because of the piece of code in the onCreate() method that defines and sets an OnClickListener to the FloatingActionButton: fab.setOnClickListener { view -> Snackbar.make(view, "Replace with your own action", Snackbar.LENGTH_LONG) .setAction("Action", null).show() } This is not ideal for our to-do list app. Let's go ahead and create a new method in the MainActivity class that will handle the click event: fun showNewTaskUI() { } The method currently does nothing. We'll add code to show the appropriate UI soon. Now, replace the code within the setOnClickListener() call with a call to the new method: fab.setOnClickListener { showNewTaskUI() } Adding a new task For adding a new task, we'll show the user an AlertDialog with an editable field. Let's start by building the UI for the dialog. Right-click the res/layout directory and select New | Layout resource file, as shown in the following screenshot: On the New Resource File window, change the Root element to LinearLayout and set the File name as dialog_new_task. Click OK to create the layout,  as shown in the following screenshot: Open the dialog_new_task layout and add an EditText view to the LinearLayout. The XML code in the layout should now look like this: The inputType attribute is used to specify what kind of data the field can take. By specifying this attribute, the user is shown an appropriate keyboard. For example, if the inputType is set to number, the numbers keyboard is displayed: Now, let's go ahead and add a few string resources we'll need for the next section. Open the res/values/strings.xml file and add the following lines of code to the resources tag: <string name="add_new_task_dialog_title">Add New Task</string> <string name="save">Save</string> The add_new_task_dialog_title string will be used as the title of our dialog The save string will be used as the text of a button on the dialog The best way to use an AlertDialog is by encapsulating it in a DialogFragment. The DialogFragment takes away the burden of handling the dialog's life cycle events. It also makes it easy for you to reuse the dialog in other activities. Create a new Kotlin class with the name NewTaskDialogFragment, and replace the class definition with the following lines of code: class NewTaskDialogFragment: DialogFragment() { // 1 // 2 interface NewTaskDialogListener { fun onDialogPositiveClick(dialog: DialogFragment, task: String) fun onDialogNegativeClick(dialog: DialogFragment) } var newTaskDialogListener: NewTaskDialogListener? 
= null // 3 // 4 companion object { fun newInstance(title: Int): NewTaskDialogFragment { val newTaskDialogFragment = NewTaskDialogFragment() val args = Bundle() args.putInt("dialog_title", title) newTaskDialogFragment.arguments = args return newTaskDialogFragment } } override fun onCreateDialog(savedInstanceState: Bundle?): Dialog { // 5 val title = arguments.getInt("dialog_title") val builder = AlertDialog.Builder(activity) builder.setTitle(title) val dialogView = activity.layoutInflater.inflate(R.layout.dialog_new_task, null) val task = dialogView.findViewById<EditText>(R.id.task) builder.setView(dialogView) .setPositiveButton(R.string.save, { dialog, id -> newTaskDialogListener?.onDialogPositiveClick(this, task.text.toString); }) .setNegativeButton(android.R.string.cancel, { dialog, id -> newTaskDialogListener?.onDialogNegativeClick(this) }) return builder.create() } override fun onAttach(activity: Activity) { // 6 super.onAttach(activity) try { newTaskDialogListener = activity as NewTaskDialogListener } catch (e: ClassCastException) { throw ClassCastException(activity.toString() + " must implement NewTaskDialogListener") } } } Let's take a closer look at what this class does: The class extends the DialogFragment class. It declares an interface with the name NewTaskDialogListener, which declares two methods: onDialogPositiveClick(dialog: DialogFragment, task: String) onDialogNegativeClick(dialog: DialogFragment) It declares a variable of type NewTaskDialogListener. It defines a method, newInstance(), in a companion object. By doing this, the method can be accessed without having to create an instance of the NewTaskDialogFragment class. The newInstance() method does the following: It takes an Int parameter named title It creates an instance of the NewTaskDialogFragment and passes the title as part of its arguments It returns the new instance of the NewTaskDialogFragment It overrides the onCreateDialog() method. This method does the following: It attempts to retrieve the title argument passed It instantiates an AlertDialog builder and assigns the retrieved title as the dialog's title It uses the LayoutInflater of the DialogFragment instance's parent activity to inflate the layout we created Then, it sets the inflated view as the dialog's view Sets two buttons to the dialog: Save and Cancel When the Save button is clicked, the text in the EditText will be retrieved and passed to the newTaskDialogListener variable via the onDialogPositiveClick() method In the onAttach() method, we attempt to assign the Activity object passed to the newTaskDialogListener variable created earlier. For this to work, the Activity object should implement the NewTaskDialogListener interface. Now, open the MainActivity class. Change the class declaration to include implementation of the NewTaskDialogListener. 
Your class declaration should now look like this: class MainActivity : AppCompatActivity(), NewTaskDialogFragment.NewTaskDialogListener { And, add implementations of the methods declared in the NewTaskDialogListener by adding the following methods to the MainActivity class: override fun onDialogPositiveClick(dialog: DialogFragment, task:String) { } override fun onDialogNegativeClick(dialog: DialogFragment) { } In the showNewTaskUI() method, add the following lines of code: val newFragment = NewTaskDialogFragment.newInstance(R.string.add_new_task_dialog_title) newFragment.show(fragmentManager, "newtask") In the preceding lines of code, the newInstance() method in NewTaskDialogFragment is called to generate an instance of the NewTaskDialogFragment class. The show() method of the DialogFragment is then called to display the dialog. Build and run. Now, when you click the Add button, you should see a dialog on your screen,  as shown in the following screenshot: As you may have noticed, nothing happens when you click the SAVE button. In the  onDialogPositiveClick() method, add the line of code shown here: Snackbar.make(fab, "Task Added Successfully", Snackbar.LENGTH_LONG).setAction("Action", null).show() As we may remember, this line of code displays a ticker at the bottom of the screen. Now, when you click the SAVE button on the New Task dialog, a ticker shows at the bottom of the screen. We're currently not storing the task the user enters. Let's create a collection variable to store any task the user adds. In the MainActivity class, add a new variable of type ArrayList<String>, and instantiate it with an empty ArrayList: private var todoListItems = ArrayList<String>() In the onDialogPositiveClick() method, place the following lines of code at the beginning of the method definition: todoListItems.add(task) listAdapter?.notifyDataSetChanged() This will add the task variable passed to the todoListItems data, and call notifyDataSetChanged() on the listAdapter to update the ListView. Saving the data is great, but our ListView is still empty. Let's go ahead and rectify that. Displaying data in the ListView To make changes to a UI element in the XML layout, you need to use the findViewById() method to retrieve the instance of the element in the corresponding Activity of your layout. This is usually done in the onCreate() method of the Activity. Open MainActivity.kt, and declare a new ListView instance variable at the top of the class: private var listView: ListView? = null Next, instantiate the ListView variable with its corresponding element in the layout. Do this by adding the following line of code at the end of the onCreate() method: listView = findViewById(R.id.list_view) To display data in a ListView, you need to create an Adapter, and give it the data to display and information on how to display that data. Depending on how you want the data displayed in your ListView, you can either use one of the existing Android Adapters, or create your own. For now, we'll use one of the simplest Android Adapters, ArrayAdapter. The ArrayAdapter takes an array or list of items, a layout ID, and displays your data based on the layout passed to it. In the MainActivity class, add a new variable, of type ArrayAdapter: private var listAdapter: ArrayAdapter<String>? 
= null Add the method shown here to the class: private fun populateListView() { listAdapter = ArrayAdapter(this, android.R.layout.simple_list_item_1, todoListItems) listView?.adapter = listAdapter } In the preceding lines of code, we create a simple ArrayAdapter and assign it to the listView as its Adapter. Now, add a call to the previous method in the onCreate() method: populateListView() Build and run. Now, when you click the Add button, you'll see your entry show up on the ListView, as shown in the following screenshot: In this article, we built a simple TodoList app that allows a user to add new tasks, and edit or delete an already added task. In the process, we learned to use ListViews and Dialogs. Next, to learn about the different datastore options available and how to use them to make our app more usable, read our book, Learning Kotlin by building Android Applications. 6 common challenges faced by Android App developers. Google plans to let the AMP Project have an open governance model, soon! Entry level phones to taste the Go edition of the Android 9.0 Pie version

Installing PHP-Nuke

Packt
22 Feb 2010
10 min read
The steps to install and configure PHP-Nuke are simple: Download and extract the PHP-Nuke files. Download and apply ChatServ's patches. Create the database for PHP-Nuke. Create a database user, and fill the database with data. Make some simple changes to the PHP-Nuke configuration file. Copy the PHP-Nuke files to the document root of the web server. Test it out! Let's get started. Downloading PHP-Nuke The latest version of PHP-Nuke can be downloaded at phpnuke.org downloads page: http://www.phpnuke.org/modules.php?name=Downloads&d_op=viewdownload&cid=1 You can also obtain older versions of PHP-Nuke, including version 1.0, from SourceForge: http://sourceforge.net/project/showfiles.php?group_id=7511&package_id=7622 SourceForge is the world's largest home of open-source projects. Many projects use SourceForge's facilities to host and maintain their projects. You can find almost anything you want on SourceForge—whether it is in a usable state or has been updated recently is another matter. Extracting PHP-Nuke Once you have downloaded PHP-Nuke, you should extract the contents of the PHP-Nuke ZIP archive to the root of your c: drive. You will have to create a folder called PHP-Nuke-7.8 in the root of your c: drive. (If you extract the files elsewhere, create the folder PHP-Nuke-7.8 and copy the contents of the main unzipped folder to this new folder). If you don't have a tool for extracting the files, you can download an evaluation edition (or buy a full edition) of WinZip from www.winzip.com. There are also free, powerful, extracting tools such as ZipGenius (http://www.zipgenius.it/index_eng.htm) and 7-Zip (http://sourceforge.net/projects/sevenzip/) among others. In the PHP-Nuke-7.8 folder, you will find three subfolders called html, sql, and upgrades. The upgrades folder contains scripts that handle upgrading the database between different versions, the sql folder contains the definition of the PHP-Nuke database that we will be working with, and the html folder contains the guts of your PHP-Nuke installation. The html folder contains all the PHP scripts, HTML files, images, CSS stylesheets, and so on that drive PHP-Nuke. Within the html folder are further subfolders; some of these include: modules: Contains the modules that make up your PHP-Nuke site. Modules are the essence of PHP-Nuke's operation; we look at them from article Your First Page onwards. blocks: Contains PHP-Nuke's blocks. Blocks are 'mini-functionality' units and usually provide snippet views of modules. We will look at blocks in article Managing the Site. language: Contains PHP-Nuke language files. These allow the language of PHP-Nuke's interface to be changed. images: Contains images used in the display of the PHP-Nuke site. themes: Contains the themes for PHP-Nuke. The use of themes allows you to completely change the look of a PHP-Nuke site with a click of a button. includes, db: Contain code to support the running of PHP-Nuke. The db folder, for example, contains database access code. admin: Contains code to power the administration area of your site. Downloading the Patches No software is without its flaws, and PHP-Nuke is no exception. After a release, the large user community invariably finds problems and potential security holes. Furthermore, PHP-Nuke also contains features such as its forum, which is in fact the phpBB application specially modified to work with PHP-Nuke. 
phpBB itself is updated on a regular basis to correct critical security vulnerabilities or to fix other problems, and consequently the corresponding part of PHP-Nuke also needs to be updated. Rather than releasing a new version of PHP-Nuke for these situations, patches for its various parts are released. ChatServ's patches from www.nukeresources.com are mostly concerned with variable validation, in other words, making sure that the variables used in the application are of the right type for storing in the database. This has been an area of weakness with many earlier versions of PHP-Nuke. The patches are often incorporated into subsequent versions of PHP-Nuke so that each new version becomes more robust. Note that you don't have to apply the patches, and PHP-Nuke will still work without them. However, by applying them you will have taken a good step towards improving the security of your site. If you navigate to http://www.nukeresources.com, there is a handy menu on the front page to access the patches: To obtain the patch corresponding to your version, click the link and you will be taken to the relevant file (of course, www.nukeresources is a PHP-Nuke powered site!). Click on the Nuke 7.8 link to go to the Downloads page of www.nukeresources.com. On this page, clicking the Download this file Now! link will download the patches for PHP-Nuke 7.8. The name of this file will be of the form 78patched.tar.gz. This is a GZIP compressed file that contains all the patches that we are about to apply. The GZIP file can be extracted with WinZip, or any of the other utilities we discussed earlier. The patches are simply modified versions of the original PHP-Nuke files. The original files have been modified to address various security issues that may have been identified since the initial release, or maybe since the last version of the patch. Applying the Patches To apply the patches, first we need to extract the 78patched.tar.gz file. We will extract the files into a folder called patches that we will create in the PHP-Nuke-7.8 folder. After extracting the files, copy the contents of the patches folder to your html folder. Do not copy the patches folder, copy its contents. The patches folder contains files that replace the files in the default installation, and you get a Confirm File Replace window. Select Yes for all the files, and when the copying is complete, your installation is ready to go. We have performed this patching immediately after installing PHP-Nuke, but we could have done this at any time. Doing it at this point is more sensible as it means that we are working on the most secure version of PHP-Nuke. Also, the patch process we have described here overwrites existing PHP-Nuke installation files. If you have modified these files, then the changes will be lost on applying the patch. Thus applying the patches later without disturbing any of your changes becomes more demanding. There is one further thing to watch for after applying the patches. You may find that the patched files have had their permissions set to read-only, and that you are unable to modify the files. To modify the files (and we do have to modify at least the config.php file in this article) you will need to remove this setting. You can do this on Windows by right-clicking on the file, selecting Properties from the menu, unchecking the Read-only setting, and clicking the OK button: We've done almost all the work with the files that we need to; now we turn our attention to creating and populating PHP-Nuke's database. 
Preparing the PHP-Nuke Database We'll be using the phpMyAdmin tool to do our database work. phpMyAdmin is part of the XAMPP installation (detailed in Appendix A), or can be downloaded from www.phpmyadmin.net, if you don't already have it. phpMyAdmin provides a powerful web interface for working with your MySQL databases. First of all, open your browser and navigate to http://localhost/phpmyadmin/, or whatever the location of your phpMyAdmin installation is: Creating the Database We need to create an empty database for PHP-Nuke to hold all the data about our site. To do this, we simply enter a name for our database into the Create new database textbox: We will call our database nuke. Enter this, and click the Create button. The name you give doesn't particularly matter, as long as it is not the name of some already existing database. If you try to use the same name as an already existing database, phpMyAdmin will inform you of this, and no action will be taken. The exact name isn't particularly important at this point because there is another configuration step coming up, which requires us to tell PHP-Nuke the name of the database we've created for it. After clicking Create, the screen will reload and you will be notified of the successful creation of your database: Creating a Database User Before we start populating the database, we will create a database user that can access only the PHP-Nuke database. This user is not a human, but will be used by PHP-Nuke to connect to the database while it performs its data-handling activities. The advantage of creating a database user is that it adds an extra level of security to our installation. PHP-Nuke will be able to work with data only in this database of the MySQL server, and no other. Also, PHP-Nuke will be restricted in the operations it can perform on the tables in the database. We will need to create a username for this boxed-in user to access the nuke database. Let's call our user nuker and go with the password nukepassword. However, in order to add an extra level of security we will introduce some digits into nukepassword, and some other slight twists, to strengthen it, and so use the word No0kPassv0rd as our database user password. To create the database user, click the SQL tab, and enter the following into the Run SQL query/queries on database textbox: GRANT ALL PRIVILEGES ON nuke.* TO nuker@localhost IDENTIFIED BY 'No0kPassv0rd' WITH GRANT OPTION Your screen should look like this: Click the Go button, and the database user will be created: Populating the Database Now we are ready to fill our database with data for PHP-Nuke. This doesn't mean we start typing the data in ourselves; the data comes with the PHP-Nuke installation. This data is found in a file called nuke.sql in the sql folder of the PHP-Nuke installation. This file contains a number of SQL statements that define the tables within the database and also fill them with 'raw' data for the site. However, before we fill the database with the tables from this file, we need to make a modification to this file. By default, the name of each database table in PHP-Nuke begins with nuke_. For example, there is a table with the name nuke_stories that holds information about stories, and a table called nuke_topics that holds information about story topics. These are just two of the tables; there are more than 90 in the standard installation. 
The word nuke_ is a 'table prefix', and is used to ensure that there are no clashes between the names of PHP-Nuke's tables and tables from another application in the same database, since the rest of the table name is descriptive of the data stored in the table, and other applications may have similarly named tables. What this does mean is that unless this table prefix is changed, the table names in your PHP-Nuke database will be known to anyone attempting to hack your site. Many of the typical attacks used to damage PHP-Nuke are based around the fact that the names of the tables in the database powering a PHP-Nuke site are known. By changing the table prefix to something less obvious, you have taken another step to making your site more secure.
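The article does not prescribe a particular way to rename the prefix, but a plain search-and-replace over nuke.sql is the usual approach. The following Python sketch is an illustration only: the filenames and the new prefix are assumptions, a naive replace can in principle touch more than table names, and you should review the output (and keep the new prefix in sync with the corresponding prefix setting in config.php) before importing it:

# Hypothetical helper: rename the default "nuke_" table prefix in nuke.sql.
# The filenames and the replacement prefix are assumptions for illustration only.
old_prefix = "nuke_"
new_prefix = "xk7_"

with open("nuke.sql", "r") as src:
    sql = src.read()

occurrences = sql.count(old_prefix)

with open("nuke_renamed.sql", "w") as dst:
    dst.write(sql.replace(old_prefix, new_prefix))

print("Replaced {} occurrences of '{}'".format(occurrences, old_prefix))

The renamed file can then be imported with phpMyAdmin in place of the stock nuke.sql.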

What is a convolutional neural network (CNN)? [Video]

Richard Gall
25 Sep 2018
5 min read
What is a convolutional neural network, exactly? Well, let's start with the basics: a convolutional neural network (CNN) is a type of neural network that is most often applied to image processing problems. You've probably seen them in action anywhere a computer is identifying objects in an image. You can also use convolutional neural networks in natural language processing projects. The fact that they are useful for these fast-growing areas is one of the main reasons they're so important in deep learning and artificial intelligence today.

What makes a convolutional neural network unique?

Once you understand how a convolutional neural network works and what sets it apart from other neural networks, you can see why they're so effective for processing and classifying images. Let's first take a regular neural network. A regular neural network has an input layer, hidden layers and an output layer. The input layer accepts inputs in different forms, while the hidden layers perform calculations on these inputs. The output layer then delivers the outcome of the calculations and extractions. Each of these layers contains neurons that are connected to neurons in the previous layer, and each connection has its own weight. This means you aren't making any assumptions about the data being fed into the network - great usually, but not if you're working with images or language.

Convolutional neural networks work differently, as they treat data as spatial. Instead of neurons being connected to every neuron in the previous layer, they are only connected to neurons close to them, and the same set of weights (the filter) is reused across positions. This simplification in the connections means the network upholds the spatial aspect of the data set. It means your network doesn't think an eye is all over the image. The word 'convolutional' refers to the filtering process that happens in this type of network. Think of it this way: an image is complex - a convolutional neural network simplifies it so it can be better processed and 'understood.'

What's inside a convolutional neural network?

Like a normal neural network, a convolutional neural network is made up of multiple layers. There are a couple of layers that make it unique - the convolutional layer and the pooling layer. However, like other neural networks, it will also have a ReLU (rectified linear unit) layer and a fully connected layer. The ReLU layer acts as an activation function, ensuring non-linearity as the data moves through each layer in the network - without it, the network could only learn linear relationships in the data. The fully connected layer, meanwhile, allows you to perform classification on your dataset.

The convolutional layer

The convolutional layer is the most important, so let's start there. It works by placing a filter over an array of image pixels - this then creates what's called a convolved feature map. It's a bit like looking at an image through a window which allows you to identify specific features you might not otherwise be able to see.

The pooling layer

Next we have the pooling layer - this downsamples, or reduces the sample size of, a particular feature map. This also makes processing much faster as it reduces the number of parameters the network needs to process. The output of this is a pooled feature map. There are two ways of doing this: max pooling, which takes the maximum input of a particular convolved feature, or average pooling, which simply takes the average.
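As a toy illustration of what max pooling does (an aside, not part of the original explanation), here is 2x2 max pooling applied to a small made-up feature map using NumPy:

import numpy as np

# A made-up 4x4 convolved feature map
feature_map = np.array([
    [1, 3, 2, 0],
    [4, 6, 1, 1],
    [0, 2, 5, 7],
    [1, 1, 3, 2],
])

# 2x2 max pooling with stride 2: keep the largest value in each 2x2 window
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[6 2]
#  [2 7]]

Average pooling would simply replace max with mean over the same windows.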
These steps amount to feature extraction, whereby the network builds up a picture of the image data according to its own mathematical rules. If you want to perform classification, you'll need to move into the fully connected layer. To do this, you'll need to flatten things out - remember, a fully connected layer can only process one-dimensional (flattened) input.

How to train a convolutional neural network

There are a number of ways you can train a convolutional neural network. If you're working with unlabelled data, you can use unsupervised learning methods. One of the most popular ways of doing this is using auto-encoders - this allows you to squeeze data into a low-dimensional space, performing calculations in the first part of the convolutional neural network. Once this is done, you'll then need to reconstruct with additional layers that upsample the data you have. Another option is to use generative adversarial networks, or GANs. With a GAN, you train two networks. The first gives you artificial data samples that should resemble data in the training set, while the second is a 'discriminative network' - it should distinguish between the artificial samples and the 'true' data.

What's the difference between a convolutional neural network and a recurrent neural network?

Although there's a lot of confusion about the difference between a convolutional neural network and a recurrent neural network, it's actually simpler than many people realise. Whereas a convolutional neural network is a feedforward network that filters spatial data, a recurrent neural network, as the name implies, feeds data back into itself. From this perspective, recurrent neural networks are better suited to sequential data. Think of it like this: a convolutional network is able to perceive patterns across space - a recurrent neural network can see them over time.

How to get started with convolutional neural networks

If you want to get started with convolutional neural networks, Python and TensorFlow are great tools to begin with (a minimal Keras sketch follows at the end of this piece). It's worth exploring the MNIST dataset too. This is a database of handwritten digits that you can use to get started with building your first convolutional neural network. To learn more about convolutional neural networks, artificial intelligence, and deep learning, visit Packt's store for eBooks and videos.
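Here is that minimal sketch, using Keras as bundled with TensorFlow. It is an illustration of the layer stack described above rather than code from the original piece, and the layer sizes are arbitrary choices for 28x28 grayscale, MNIST-style images, not a recommendation:

from tensorflow.keras import layers, models

# Convolution -> ReLU -> pooling, repeated, then flatten and classify.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # convolutional layer
    layers.MaxPooling2D((2, 2)),                                            # pooling layer
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                        # flatten so the dense layers can consume the features
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),  # fully connected classification layer
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()

Training on MNIST is then a matter of loading the images with tensorflow.keras.datasets.mnist.load_data(), reshaping them to (28, 28, 1), scaling the pixel values to the 0-1 range, and calling model.fit().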

Build an IoT application with Google Cloud [Tutorial]

Gebin George
27 Jun 2018
19 min read
In this tutorial, we will build a sample internet of things application using Google Cloud IoT. We will start off by implementing the end-to-end solution, where we take the data from the DHT11 sensor and post it to the Google IoT Core state topic. This article is an excerpt from the book, Enterprise Internet of Things Handbook, written by Arvind Ravulavaru. End-to-end communication To get started with Google IoT Core, we need to have a Google account. If you do not have a Google account, you can create one by navigating to this URL: https://accounts.google.com/SignUp?hl=en. Once you have created your account, you can login and navigate to Google Cloud Console: https://console.cloud.google.com. Setting up a project The first thing we are going to do is create a project. If you have already worked with Google Cloud Platform and have at least one project, you will be taken to the first project in the list or you will be taken to the Getting started page. As of the time of writing this book, Google Cloud Platform has a free trial for 12 months with $300 if the offer is still available when you are reading this chapter, I would highly recommend signing up: Once you have signed up, let's get started by creating a new project. From the top menu bar, select the Select a Project dropdown and click on the plus icon to create a new project. You can fill in the details as illustrated in the following screenshot: Click on the Create button. Once the project is created, navigate to the Project and you should land on the Home page. Enabling APIs Following are the steps to be followed for enabling APIs: From the menu on the left-hand side, select APIs & Services | Library as shown in the following screenshot: On the following screen, search for pubsub and select the Pub/Sub API from the results and we should land on a page similar to the following: Click on the ENABLE button and we should now be able to use these APIs in our project. Next, we need to enable the real-time API; search for realtime and we should find something similar to the following: Click on the ENABLE & button. Enabling device registry and devices The following steps should be used for enabling device registry and devices: From the left-hand side menu, select IoT Core and we should land on the IoT Core home page: Instead of the previous screen, if you see a screen to enable APIs, please enable the required APIs from here. Click on the & Create device registry button. On the Create device registry screen, fill the details as shown in the following table: Field Value Registry ID Pi3-DHT11-Nodes Cloud region us-central1 Protocol MQTT HTTP Default telemetry topic device-events Default state topic dht11 After completing all the details, our form should look like the following: We will add the required certificates later on. Click on the Create button and a new device registry will be created. From the Pi3-DHT11-Nodes registry page, click on the Add device button and set the Device ID as Pi3-DHT11-Node or any other suitable name. Leave everything as the defaults and make sure the Device communication is set to Allowed and create a new device. On the device page, we should see a warning as highlighted in the following screenshot: Now, we are going to add a new public key. To generate a public/private key pair, we need to have OpenSSL command line available. You can download and set up OpenSSL from here: https://www.openssl.org/source/. 
Use the following command to generate a certificate pair at the default location on your machine: openssl req -x509 -newkey rsa:2048 -keyout rsa_private.pem -nodes -out rsa_cert.pem -subj "/CN=unused" If everything goes well, you should see an output as shown here: Do not share these certificates anywhere; anyone with these certificates can connect to Google IoT Core as a device and start publishing data. Now, once the certificates are created, we will attach them to the device we have created in IoT Core. Head back to the device page of the Google IoT Core service and under Authentication click on Add public key. On the following screen, fill it in as illustrated: The public key value is the contents of rsa_cert.pem that we generated earlier. Click on the ADD button. Now that the public key has been successfully added, we can connect to the cloud using the private key. Setting up Raspberry Pi 3 with DHT11 node Now that we have our device set up in Google IoT Core, we are going to complete the remaining operation on Raspberry Pi 3 to send data. Pre-requisites The requirements for setting up Raspberry Pi 3 on a DHT11 node are: One Raspberry Pi 3: https://www.amazon.com/Raspberry-Pi-Desktop-Starter-White/dp/B01CI58722 One breadboard: https://www.amazon.com/Solderless-Breadboard-Circuit-Circboard-Prototyping/dp/B01DDI54II/ One DHT11 sensor: https://www.amazon.com/HiLetgo-Temperature-Humidity-Arduino-Raspberry/dp/B01DKC2GQ0 Three male-to-female jumper cables: https://www.amazon.com/RGBZONE-120pcs-Multicolored-Dupont-Breadboard/dp/B01M1IEUAF/ If you are new to the world of Raspberry Pi GPIO's interfacing, take a look at this Raspberry Pi GPIO Tutorial: The Basics Explained on YouTube: https://www.youtube.com/watch?v=6PuK9fh3aL8. The following steps are to be used for the setup process: Connect the DHT11 sensor to Raspberry Pi 3 as shown in the following diagram: Next, power up Raspberry Pi 3 and log in to it. On the desktop, create a new folder named Google-IoT-Device. Open a new Terminal and cd into this folder. Setting up Node.js Refer to the following steps to install Node.js: Open a new Terminal and run the following commands: $ sudo apt update $ sudo apt full-upgrade This will upgrade all the packages that need upgrades. Next, we will install the latest version of Node.js. We will be using the Node 7.x version: $ curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash - $ sudo apt install nodejs This will take a moment to install, and once your installation is done, you should be able to run the following commands to see the version of Node.js and npm: $ node -v $ npm -v Developing the Node.js device app Now, we will set up the app and write the required code: From the Terminal, once you are inside the Google-IoT-Device folder, run the following command: $ npm init -y Next, we will install jsonwebtoken (https://www.npmjs.com/package/jsonwebtoken) and mqtt (https://www.npmjs.com/package/mqtt) from npm. Execute the following command: $ npm install jsonwebtoken mqtt--save Next, we will install rpi-dht-sensor (https://www.npmjs.com/package/rpi-dht-sensor) from npm. 
This module will help in reading the DHT11 temperature and humidity values: $ npm install rpi-dht-sensor --save Your final package.json file should look similar to the following code snippet: { "name": "Google-IoT-Device", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo "Error: no test specified" && exit 1" }, "keywords": [], "author": "", "license": "ISC", "dependencies": { "jsonwebtoken": "^8.1.1", "mqtt": "^2.15.3", "rpi-dht-sensor": "^0.1.1" } } Now that we have the required dependencies installed, let's continue. Create a new file named index.js at the root of the Google-IoT-Device folder. Next, create a folder named certs at the root of the Google-IoT-Device folder and move the two certificates we created using OpenSSL there. Your final folder structure should look something like this: Open index.js in any text editor and update it as shown here: var fs = require('fs'); var jwt = require('jsonwebtoken'); var mqtt = require('mqtt'); var rpiDhtSensor = require('rpi-dht-sensor'); var dht = new rpiDhtSensor.DHT11(2); // `2` => GPIO2 var projectId = 'pi-iot-project'; var cloudRegion = 'us-central1'; var registryId = 'Pi3-DHT11-Nodes'; var deviceId = 'Pi3-DHT11-Node'; var mqttHost = 'mqtt.googleapis.com'; var mqttPort = 8883; var privateKeyFile = '../certs/rsa_private.pem'; var algorithm = 'RS256'; var messageType = 'state'; // or event var mqttClientId = 'projects/' + projectId + '/locations/' + cloudRegion + '/registries/' + registryId + '/devices/' + deviceId; var mqttTopic = '/devices/' + deviceId + '/' + messageType; var connectionArgs = { host: mqttHost, port: mqttPort, clientId: mqttClientId, username: 'unused', password: createJwt(projectId, privateKeyFile, algorithm), protocol: 'mqtts', secureProtocol: 'TLSv1_2_method' }; console.log('connecting...'); var client = mqtt.connect(connectionArgs); // Subscribe to the /devices/{device-id}/config topic to receive config updates. client.subscribe('/devices/' + deviceId + '/config'); client.on('connect', function(success) { if (success) { console.log('Client connected...'); sendData(); } else { console.log('Client not connected...'); } }); client.on('close', function() { console.log('close'); }); client.on('error', function(err) { console.log('error', err); }); client.on('message', function(topic, message, packet) { console.log(topic, 'message received: ', Buffer.from(message, 'base64').toString('ascii')); }); function createJwt(projectId, privateKeyFile, algorithm) { var token = { 'iat': parseInt(Date.now() / 1000), 'exp': parseInt(Date.now() / 1000) + 86400 * 60, // 1 day 'aud': projectId }; var privateKey = fs.readFileSync(privateKeyFile); return jwt.sign(token, privateKey, { algorithm: algorithm }); } function fetchData() { var readout = dht.read(); var temp = readout.temperature.toFixed(2); var humd = readout.humidity.toFixed(2); return { 'temp': temp, 'humd': humd, 'time': new Date().toISOString().slice(0, 19).replace('T', ' ') // https://stackoverflow.com/a/11150727/1015046 }; } function sendData() { var payload = fetchData(); payload = JSON.stringify(payload); console.log(mqttTopic, ': Publishing message:', payload); client.publish(mqttTopic, payload, { qos: 1 }); console.log('Transmitting in 30 seconds'); setTimeout(sendData, 30000); } In the previous code, we first define the projectId, cloudRegion, registryId, and deviceId based on what we have created. Next, we build the connectionArgs object, using which we are going to connect to Google IoT Core using MQTT-SN. 
Do note that the password property is a JSON Web Token (JWT), based on the projectId and privateKeyFile algorithm. The token that is created by this function is valid only for one day. After one day, the cloud will refuse connection to this device if the same token is used. The username value is the Common Name (CN) of the certificate we have created, which is unused. Using mqtt.connect(), we are going to connect to the Google IoT Core. And we are subscribing to the device config topic, which can be used to send device configurations when connected. Once the connection is successful, we callsendData() every 30 seconds to send data to the state topic. Save the previous file and run the following command: $ sudo node index.js And we should see something like this: As you can see from the previous Terminal logs, the device first gets connected then starts transmitting the temperature and humidity along with time. We are sending time as well, so we can save it in the BigQuery table and then build a time series chart quite easily. Now, if we head back to the Device page of Google IoT Core and navigate to the Configuration & state history tab, we should see the data that we are sending to the state topic here: Now that the device is sending data, let's actually read the data from another client. Reading the data from the device For this, you can either use the same Raspberry Pi 3 or another computer. I am going to use MacBook as a client that is interested in the data sent by the Thing. Setting up credentials Before we start reading data from Google IoT Core, we have to set up our computer (for example, MacBook) as a trusted device, so our computer can request data. Let's perform the following steps to set the credentials: To do this, we need to create a new Service account key. From the left-hand-side menu of the Google Cloud Console, select APIs & Services | Credentials. Then click on the Create credentials dropdown and select Service account key as shown in the following screenshot: Now, fill in the details as shown in the following screenshot: We have given access to the entire project for this client and as an Owner. Do not select these settings if this is a production application. Click on Create and you will be asked to download and save the file. Do not share this file; this file is as good as giving someone owner-level permissions to all assets of this project. Once the file is downloaded somewhere safe, create an environment variable with the name GOOGLE_APPLICATION_CREDENTIALS and point it to the path of the downloaded file. You can refer to Getting Started with Authentication at https://cloud.google.com/docs/authentication/getting-started if you are facing any difficulties. Setting up subscriptions The data from the device is being sent to Google IoT Core using the state topic. If you recall, we have named that topic dht11. Now, we are going to create a subscription for this topic: From the menu on the left side, select Pub/Sub | Topics. Now, click on New subscription for the dht11 topic, as shown in the following screenshot: Create a new subscription by setting up the options selected in this screenshot: We are going to use the subscription named dht11-data to get the data from the state topic. Setting up the client Now that we have provided the required credentials as well as subscribed to a Pub/Sub topic, we will set up the Pub/Sub client. Follow these steps: Create a folder named test_client inside the test_client directory. 
Now, run the following command: $ npm init -y Next, install the @google-cloud/pubsub (https://www.npmjs.com/package/@google-cloud/pubsub) module with the help of the following command: $ npm install @google-cloud/pubsub --save Create a file inside the test_client folder named index.js and update it as shown in this code snippet: var PubSub = require('@google-cloud/pubsub'); var projectId = 'pi-iot-project'; var stateSubscriber = 'dht11-data' // Instantiates a client var pubsub = new PubSub({ projectId: projectId, }); var subscription = pubsub.subscription('projects/' + projectId + '/subscriptions/' + stateSubscriber); var messageHandler = function(message) { console.log('Message Begin >>>>>>>>'); console.log('message.connectionId', message.connectionId); console.log('message.attributes', message.attributes); console.log('message.data', Buffer.from(message.data, 'base64').toString('ascii')); console.log('Message End >>>>>>>>>>'); // "Ack" (acknowledge receipt of) the message message.ack(); }; // Listen for new messages subscription.on('message', messageHandler); Update the projectId and stateSubscriber in the previous code. Now, save the file and run the following command: $ node index.js We should see the following output in the console: This way, any client that is interested in the data of this device can use this approach to get the latest data. With this, we conclude the section on posting data to Google IoT Core and fetching the data. In the next section, we are going to work on building a dashboard. Building a dashboard Now that we have seen how a client can read the data from our device on demand, we will move on to building a dashboard, where we display data in real time. For this, we are going to use Google Cloud Functions, Google BigQuery, and Google Data Studio. Google Cloud Functions Cloud Functions are solution for serverless services. Cloud Functions is a lightweight solution for creating standalone and single-purpose functions that respond to cloud events. You can read more about Google Cloud Functions at https://cloud.google.com/functions/. Google BigQuery Google BigQuery is an enterprise data warehouse that solves this problem by enabling super-fast SQL queries using the processing power of Google's infrastructure. You can read more about Google BigQuery at https://cloud.google.com/bigquery/. Google Data Studio Google Data Studio helps to build dashboards and reports using various data connectors, such as BigQuery or Google Analytics. You can read more about Google Data Studio at https://cloud.google.com/data-studio/. As of April 2018, these three services are still in beta. As we have already seen in the Architecture section, once the data is published on the state topic, we are going to create a cloud function that will get triggered by the data event on the Pub/Sub client. And inside our cloud function, we are going to get a copy of the published data and then insert it into the BigQuery dataset. Once the data is inserted, we are going to use Google Data Studio to create a new report by linking the BigQuery dataset to the input. So, let's get started. Setting up BigQuery The first thing we are going to do is set up BigQuery: From the side menu of the Google Cloud Platform Console, our project page, click on the BigQuery URL and we should be taken to the Google BigQuery home page. 
Select Create new dataset, as shown in the following screenshot: Create a new dataset with the values illustrated in the following screenshot: Once the dataset is created, click on the plus sign next to the dataset and create an empty table. We are going to name the table dht11_data and we are going have three fields in it, as shown here: Click on the Create Table button to create the table. Now that we have our table ready, we will write a cloud function to insert the incoming data from Pub/Sub into this table. Setting up Google Cloud Function Now, we are going to set up a cloud function that will be triggered by the incoming data: From the Google Cloud Console's left-hand-side menu, select Cloud Functions under Compute. Once you land on the Google Cloud Functions homepage, you will be asked to enable the cloud functions API. Click on Enable API: Once the API is enabled, we will be on the Create function page. Fill in the form as shown here: The Trigger is set to Cloud Pub/Sub topic and we have selected dht11 as the Topic. Under the Source code section; make sure you are in the index.js tab and update it as shown here: var BigQuery = require('@google-cloud/bigquery'); var projectId = 'pi-iot-project'; var bigquery = new BigQuery({ projectId: projectId, }); var datasetName = 'pi3_dht11_dataset'; var tableName = 'dht11_data'; exports.pubsubToBQ = function(event, callback) { var msg = event.data; var data = JSON.parse(Buffer.from(msg.data, 'base64').toString()); // console.log(data); bigquery .dataset(datasetName) .table(tableName) .insert(data) .then(function() { console.log('Inserted rows'); callback(); // task done }) .catch(function(err) { if (err && err.name === 'PartialFailureError') { if (err.errors && err.errors.length > 0) { console.log('Insert errors:'); err.errors.forEach(function(err) { console.error(err); }); } } else { console.error('ERROR:', err); } callback(); // task done }); }; In the previous code, we were using the BigQuery Node.js module to insert data into our BigQuery table. Update projectId, datasetName, and tableName as applicable in the code. Next, click on the package.json tab and update it as shown: { "name": "cloud_function", "version": "0.0.1", "dependencies": { "@google-cloud/bigquery": "^1.0.0" } } Finally, for the Function to execute field, enter pubsubToBQ. pubsubToBQ is the name of the function that has our logic and this function will be called when the data event occurs. Click on the Create button and our function should be deployed in a minute. Running the device Now that the entire setup is done, we will start pumping data into BigQuery: Head back to Raspberry Pi 3 which was sending the DHT11 temperature and humidity data, and run the application. We should see the data being published to the state topic: Now, if we head back to the Cloud Functions page, we should see the requests coming into the cloud function: You can click on VIEW LOGS to view the logs of each function execution: Now, head over to our table in BigQuery and click on the RUN QUERY button; run the query as shown in the following screenshot: Now, all the data that was generated by the DHT11 sensor is timestamped and stored in BigQuery. You can use the Save to Google Sheets button to save this data to Google Sheets and analyze the data there or plot graphs, as shown here: Or we can go one step ahead and use the Google Data Studio to do the same. 
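If you prefer to verify the stored rows from code rather than from the BigQuery console, a small Node.js script can run the same kind of query. The following is a sketch that is not part of the original tutorial: the project, dataset, and table names are the ones used in this tutorial, and the column names match the temp, humd, and time fields referenced later in the Data Studio section, so adjust them if your schema differs:

var BigQuery = require('@google-cloud/bigquery');

// Assumed identifiers: replace with your own project/dataset/table names
var bigquery = new BigQuery({ projectId: 'pi-iot-project' });

var query = 'SELECT time, temp, humd ' +
            'FROM `pi-iot-project.pi3_dht11_dataset.dht11_data` ' +
            'ORDER BY time DESC LIMIT 10';

// Run a standard SQL query and print the ten most recent readings
bigquery
  .query({ query: query, useLegacySql: false })
  .then(function(results) {
    results[0].forEach(function(row) {
      console.log(row.time, row.temp, row.humd);
    });
  })
  .catch(function(err) {
    console.error('Query failed:', err);
  });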
Google Data Studio reports Now that the data is ready in BigQuery, we are going to set up Google Data Studio and then connect both of them, so we can access the data from BigQuery in Google Data Studio: Navigate to https://datastudio.google.com and log in with your Google account. Once you are on the Home page of Google Data Studio, click on the Blank report template. Make sure you read and agree to the terms and conditions before proceeding. Name the report PI3 DHT11 Sensor Data. Using the Create new data source button, we will create a new data source. Click on Create new data source and we should land on a page where we need to create a new Data Source. From the list of Connectors, select BigQuery; you will be asked to authorize Data Studio to interface with BigQuery, as shown in the following screenshot: Once we authorized, we will be shown our projects and related datasets and tables: Select the dht11_data table and click on Connect. This fetches the metadata of the table as shown here: Set the Aggregation for the temp and humd fields to Max and set the Type for time as Date & Time. Pick Minute (mm) from the sub-list. Click on Add to report and you will be asked to authorize Google Data Studio to read data from the table. Once the data source has been successfully linked, we will create a new time series chart. From the menu, select Insert | Time Series link. Update the data configuration of the chart as shown in the following screenshot: You can play with the styles as per your preference and we should see something similar to the following screenshot: This report can then be shared with any user. With this, we have seen the basic features and implementation process needed to work with Google Cloud IoT Core as well other features of the platform. If you found this post useful, do check out the book,  Enterprise Internet of Things Handbook, to build state of the art IoT applications best-fit for Enterprises. Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT Five Most Surprising Applications of IoT How IoT is going to change tech teams

Creating a simple modular application in Java 11 [Tutorial]

Prasad Ramesh
20 Feb 2019
11 min read
Modular programming enables one to organize code into independent, cohesive modules, which can be combined to achieve the desired functionality. This article is an excerpt from a book written by Nick Samoylov and Mohamed Sanaulla titled Java 11 Cookbook - Second Edition. In this book, you will learn how to implement object-oriented designs using classes and interfaces in Java 11. The complete code for the examples shown in this tutorial can be found on GitHub. You may be wondering what this modularity is all about, and how to create a modular application in Java. In this article, we will try to clear up the confusion around creating modular applications in Java by walking you through a simple example. Our goal is to show you how to create a modular application; hence, we picked a simple example so as to focus on our goal. Our example is a simple advanced calculator, which checks whether a number is prime, calculates the sum of prime numbers, checks whether a number is even, and calculates the sum of even and odd numbers.

Getting ready

We will divide our application into two modules:

The math.util module, which contains the APIs for performing the mathematical calculations
The calculator module, which launches an advanced calculator

How to do it

Let's implement the APIs in the com.packt.math.MathUtil class, starting with the isPrime(Integer number) API:

public static Boolean isPrime(Integer number){
    if ( number == 1 ) {
        return false;
    }
    return IntStream.range(2, number).noneMatch(i -> number % i == 0);
}

Implement the sumOfFirstNPrimes(Integer count) API:

public static Integer sumOfFirstNPrimes(Integer count){
    return IntStream.iterate(1, i -> i+1)
                    .filter(j -> isPrime(j))
                    .limit(count).sum();
}

Let's write a function to check whether the number is even:

public static Boolean isEven(Integer number){
    return number % 2 == 0;
}

The negation of isEven tells us whether the number is odd. We can have functions to find the sum of the first N even numbers and the first N odd numbers, as shown here:

public static Integer sumOfFirstNEvens(Integer count){
    return IntStream.iterate(1, i -> i+1)
                    .filter(j -> isEven(j))
                    .limit(count).sum();
}

public static Integer sumOfFirstNOdds(Integer count){
    return IntStream.iterate(1, i -> i+1)
                    .filter(j -> !isEven(j))
                    .limit(count).sum();
}

We can see in the preceding APIs that the following operations are repeated:

An infinite sequence of numbers starting from 1
Filtering the numbers based on some condition
Limiting the stream of numbers to a given count
Finding the sum of the numbers thus obtained

Based on our observation, we can refactor the preceding APIs and extract these operations into a method, as follows:

static Integer computeFirstNSum(Integer count, IntPredicate filter){
    return IntStream.iterate(1, i -> i+1)
                    .filter(filter)
                    .limit(count).sum();
}

Here, count is the limit of numbers we need to find the sum of, and filter is the condition for picking the numbers for summing. Let's rewrite the APIs based on the refactoring we just did:

public static Integer sumOfFirstNPrimes(Integer count){
    return computeFirstNSum(count, (i -> isPrime(i)));
}

public static Integer sumOfFirstNEvens(Integer count){
    return computeFirstNSum(count, (i -> isEven(i)));
}

public static Integer sumOfFirstNOdds(Integer count){
    return computeFirstNSum(count, (i -> !isEven(i)));
}

So far, we have seen a few APIs around mathematical computations. These APIs are part of our com.packt.math.MathUtil class. 
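Before moving on, it can help to see these APIs in action. The following is a small, hypothetical driver class (it is not part of the book's code, and the class name MathUtilDemo is just a placeholder) that you could drop next to MathUtil to sanity-check the results of the methods shown above:

import com.packt.math.MathUtil;

public class MathUtilDemo {
    public static void main(String[] args) {
        // 13 has no divisors other than 1 and itself
        System.out.println(MathUtil.isPrime(13));          // true
        // 2 + 3 + 5 + 7 + 11 = 28
        System.out.println(MathUtil.sumOfFirstNPrimes(5)); // 28
        // 10 is divisible by 2
        System.out.println(MathUtil.isEven(10));           // true
        // 2 + 4 + 6 = 12
        System.out.println(MathUtil.sumOfFirstNEvens(3));  // 12
        // 1 + 3 + 5 = 9
        System.out.println(MathUtil.sumOfFirstNOdds(3));   // 9
    }
}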
The complete code for this class can be found at Chapter03/2_simple-modular-math-util/math.util/com/packt/math, in the codebase downloaded for this book.

Let's make this small utility class part of a module named math.util. The following are some conventions we use to create a module:

Place all the code related to the module under a directory named math.util and treat this as our module root directory.
In the root folder, insert a file named module-info.java.
Place the packages and the code files under the root directory.

What does module-info.java contain? The following:

The name of the module
The packages it exports, that is, the ones it makes available for other modules to use
The modules it depends on
The services it uses
The service for which it provides an implementation

Our math.util module doesn't depend on any other module (except, of course, the java.base module). However, it makes its API available for other modules (if not, then this module's existence is questionable). Let's go ahead and put this statement into code:

module math.util{
    exports com.packt.math;
}

We are telling the Java compiler and runtime that our math.util module is exporting the code in the com.packt.math package to any module that depends on math.util. The code for this module can be found at Chapter03/2_simple-modular-math-util/math.util.

Now, let's create another module, calculator, that uses the math.util module. This module has a Calculator class whose work is to accept the user's choice of which mathematical operation to execute and then the input required to execute that operation. The user can choose from five available mathematical operations:

Prime number check
Even number check
Sum of N primes
Sum of N evens
Sum of N odds

Let's see this in code:

private static Integer acceptChoice(Scanner reader){
    System.out.println("************Advanced Calculator************");
    System.out.println("1. Prime Number check");
    System.out.println("2. Even Number check");
    System.out.println("3. Sum of N Primes");
    System.out.println("4. Sum of N Evens");
    System.out.println("5. Sum of N Odds");
    System.out.println("6. Exit");
    System.out.println("Enter the number to choose operation");
    return reader.nextInt();
}

Then, for each of the choices, we accept the required input and invoke the corresponding MathUtil API, as follows (each case body is wrapped in braces so that the number and count variables are scoped to that case):

switch(choice){
    case 1: {
        System.out.println("Enter the number");
        Integer number = reader.nextInt();
        if (MathUtil.isPrime(number)){
            System.out.println("The number " + number + " is prime");
        } else {
            System.out.println("The number " + number + " is not prime");
        }
        break;
    }
    case 2: {
        System.out.println("Enter the number");
        Integer number = reader.nextInt();
        if (MathUtil.isEven(number)){
            System.out.println("The number " + number + " is even");
        }
        break;
    }
    case 3: {
        System.out.println("How many primes?");
        Integer count = reader.nextInt();
        System.out.println(String.format("Sum of %d primes is %d",
            count, MathUtil.sumOfFirstNPrimes(count)));
        break;
    }
    case 4: {
        System.out.println("How many evens?");
        Integer count = reader.nextInt();
        System.out.println(String.format("Sum of %d evens is %d",
            count, MathUtil.sumOfFirstNEvens(count)));
        break;
    }
    case 5: {
        System.out.println("How many odds?");
        Integer count = reader.nextInt();
        System.out.println(String.format("Sum of %d odds is %d",
            count, MathUtil.sumOfFirstNOdds(count)));
        break;
    }
}

The complete code for the Calculator class can be found at Chapter03/2_simple-modular-math-util/calculator/com/packt/calculator/Calculator.java. 
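That file contains the complete program. As a rough structural sketch (this is not the book's exact code), the main method that ties acceptChoice and the preceding switch block together could look like the following; performOperation is a hypothetical helper name standing in for the switch block shown above:

import java.util.Scanner;

public class Calculator {
    public static void main(String[] args) {
        Scanner reader = new Scanner(System.in);
        Integer choice = acceptChoice(reader);
        // Keep showing the menu until the user picks option 6 (Exit)
        while (choice != 6) {
            performOperation(choice, reader);
            choice = acceptChoice(reader);
        }
        reader.close();
    }

    // acceptChoice(reader) is the method shown earlier; performOperation is a
    // hypothetical helper wrapping the switch block from the previous step.
}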
Let's create the module definition for our calculator module in the same way we created it for the math.util module: module calculator{ requires math.util; } In the preceding module definition, we mentioned that the calculator module depends on the math.util module by using the required keyword. The code for this module can be found at Chapter03/2_simple-modular-math-util/calculator. Let's compile the code: javac -d mods --module-source-path . $(find . -name "*.java") The preceding command has to be executed from Chapter03/2_simple-modular-math-util. Also, you should have the compiled code from across both the modules, math.util and calculator, in the mods directory. Just a single command and everything including the dependency between the modules is taken care of by the compiler. We didn't require build tools such as ant to manage the compilation of modules. The --module-source-path command is the new command-line option for javac, specifying the location of our module source code. Let's execute the preceding code: java --module-path mods -m calculator/com.packt.calculator.Calculator The --module-path command, similar to --classpath, is the new command-line option  java, specifying the location of the compiled modules. After running the preceding command, you will see the calculator in action: Congratulations! With this, we have a simple modular application up and running. We have provided scripts to test out the code on both Windows and Linux platforms. Please use run.bat for Windows and run.sh for Linux. How it works Now that you have been through the example, we will look at how to generalize it so that we can apply the same pattern in all our modules. We followed a particular convention to create the modules: |application_root_directory |--module1_root |----module-info.java |----com |------packt |--------sample |----------MyClass.java |--module2_root |----module-info.java |----com |------packt |--------test |----------MyAnotherClass.java We place the module-specific code within its folders with a corresponding module-info.java file at the root of the folder. This way, the code is organized well. Let's look into what module-info.java can contain. From the Java language specification (http://cr.openjdk.java.net/~mr/jigsaw/spec/lang-vm.html), a module declaration is of the following form: {Annotation} [open] module ModuleName { {ModuleStatement} } Here's the syntax, explained: {Annotation}: This is any annotation of the form @Annotation(2). open: This keyword is optional. An open module makes all its components accessible at runtime via reflection. However, at compile-time and runtime, only those components that are explicitly exported are accessible. module: This is the keyword used to declare a module. ModuleName: This is the name of the module that is a valid Java identifier with a permissible dot (.) between the identifier names—similar to math.util. {ModuleStatement}: This is a collection of the permissible statements within a module definition. Let's expand this next. A module statement is of the following form: ModuleStatement: requires {RequiresModifier} ModuleName ; exports PackageName [to ModuleName {, ModuleName}] ; opens PackageName [to ModuleName {, ModuleName}] ; uses TypeName ; provides TypeName with TypeName {, TypeName} ; The module statement is decoded here: requires: This is used to declare a dependency on a module. {RequiresModifier} can be transitive, static, or both. 
Transitive means that any module that depends on the given module also implicitly depends on the module that is required by the given module transitively. Static means that the module dependence is mandatory at compile time, but optional at runtime. Some examples are requires math.util, requires transitive math.util, and requires static math.util. exports: This is used to make the given packages accessible to the dependent modules. Optionally, we can force the package's accessibility to specific modules by specifying the module name, such as exports com.package.math to claculator. opens: This is used to open a specific package. We saw earlier that we can open a module by specifying the open keyword with the module declaration. But this can be less restrictive. So, to make it more restrictive, we can open a specific package for reflective access at runtime by using the opens keyword—opens com.packt.math. uses: This is used to declare a dependency on a service interface that is accessible via java.util.ServiceLoader. The service interface can be in the current module or in any module that the current module depends on. provides: This is used to declare a service interface and provide it with at least one implementation. The service interface can be declared in the current module or in any other dependent module. However, the service implementation must be provided in the same module; otherwise, a compile-time error will occur. We will look at the uses and provides clauses in more detail in the Using services to create loose coupling between the consumer and provider modules recipe. The module source of all modules can be compiled at once using the --module-source-path command-line option. This way, all the modules will be compiled and placed in their corresponding directories under the directory provided by the -d option. For example, javac -d mods --module-source-path . $(find . -name "*.java") compiles the code in the current directory into a mods directory. Running the code is equally simple. We specify the path where all our modules are compiled into using the command-line option --module-path. Then, we mention the module name along with the fully qualified main class name using the command-line option -m, for example, java --module-path mods -m calculator/com.packt.calculator.Calculator. In this tutorial, we learned to create a simple modular Java application. To learn more Java 11 recipes, check out the book Java 11 Cookbook - Second Edition. Brian Goetz on Java futures at FOSDEM 2019 7 things Java programmers need to watch for in 2019 Clojure 1.10 released with Prepl, improved error reporting and Java compatibility

A step by step guide to creating Odoo Addon Modules

Sugandha Lahoti
19 Jun 2018
15 min read
Odoo uses a client/server architecture in which clients are web browsers accessing the Odoo server via RPC. Both the server and client extensions are packaged as modules which are optionally loaded in a database. Odoo modules can either add brand new business logic to an Odoo system or alter and extend existing business logic. Everything in Odoo starts and ends with modules. In this article, we will cover the basics of creating Odoo Addon Modules. Recipes that we will cover in this. Creating and installing a new addon module Completing the addon module manifest Organizing the addon module file structure Adding Models Our main goal here is to understand how an addon module is structured and the typical incremental workflow to add components to it. This post is an excerpt from the book Odoo 11 Development Cookbook - Second Edition by Alexandre Fayolle and Holger Brunn. With this book, you can create fast and efficient server-side applications using the latest features of Odoo v11. For this article, you should have Odoo installed. You are also expected to be comfortable in discovering and installing extra addon modules. Creating and installing a new addon module In this recipe, we will create a new module, make it available in our Odoo instance, and install it. Getting ready We will need an Odoo instance ready to use. For explanation purposes, we will assume Odoo is available at ~/odoo-dev/odoo, although any other location of your preference can be used. We will also need a location for our Odoo modules. For the purpose of this recipe, we will use a local-addons directory alongside the odoo directory, at ~/odoo-dev/local-addons. How to do it... The following steps will create and install a new addon module: Change the working directory in which we will work and create the addons directory where our custom module will be placed: $ cd ~/odoo-dev $ mkdir local-addons Choose a technical name for the new module and create a directory with that name for the module. For our example, we will use my_module: $ mkdir local-addons/my_module A module's technical name must be a valid Python identifier; it must begin with a letter, and only contain letters (preferably lowercase), numbers, and underscore characters. Make the Python module importable by adding an __init__.py file: $ touch local-addons/my_module/__init__.py Add a minimal module manifest for Odoo to detect it. Create a __manifest__.py file with this line: {'name': 'My module'} Start your Odoo instance including our module directory in the addons path: $ odoo/odoo-bin --addons-path=odoo/addon/,local-addons/ If the --save option is added to the Odoo command, the addons path will be saved in the configuration file. Next time you start the server, if no addons path option is provided, this will be used. Make the new module available in your Odoo instance; log in to Odoo using admin, enable the Developer Mode in the About box, and in the Apps top menu, select Update Apps List. Now, Odoo should know about our Odoo module. Select the Apps menu at the top and, in the search bar in the top-right, delete the default Apps filter and search for my_module. Click on its Install button, and the installation will be concluded. How it works... An Odoo module is a directory containing code files and other assets. The directory name used is the module's technical name. The name key in the module manifest is its title. The __manifest__.py file is the module manifest. 
It contains a Python dictionary with information about the module, the modules it depends on, and the data files that it will load. In the example, a minimal manifest file was used, but in real modules, we will want to add a few other important keys. These are discussed in the Completing the module manifest recipe, which we will see next. The module directory must be Python-importable, so it also needs to have an __init__.py file, even if it's empty. To load a module, the Odoo server will import it. This will cause the code in the __init__.py file to be executed, so it works as an entry point to run the module Python code. Due to this, it will usually contain import statements to load the module Python files and submodules. Known modules can be installed directly from the command line using the --init or -i option. This list is initially set when you create a new database, from the modules found on the addons path provided at that time. It can be updated in an existing database with the Update Module List menu. Completing the addon module manifest The manifest is an important piece for Odoo modules. It contains important information about the module and declares the data files that should be loaded. Getting ready We should have a module to work with, already containing a __manifest__.py manifest file. You may want to follow the previous recipe to provide such a module to work with. How to do it... We will add a manifest file and an icon to our addon module: To create a manifest file with the most relevant keys, edit the module __manifest__.py file to look like this: { 'name': "Title", 'summary': "Short subtitle phrase", 'description': """Long description""", 'author': "Your name", 'license': "AGPL-3", 'website': "http://www.example.com", 'category': 'Uncategorized', 'version': '11.0.1.0.0', 'depends': ['base'], 'data': ['views.xml'], 'demo': ['demo.xml'], } To add an icon for the module, choose a PNG image to use and copy it to static/description/icon.png. How it works... The remaining content is a regular Python dictionary, with keys and values. The example manifest we used contains the most relevant keys: name: This is the title for the module. summary: This is the subtitle with a one-line description. description: This is a long description written in plain text or the ReStructuredText (RST) format. It is usually surrounded by triple quotes, and is used in Python to delimit multiline texts. For an RST quickstart reference, visit http://docutils.sourceforge.net/docs/user/rst/quickstart.html. author: This is a string with the name of the authors. When there is more than one, it is common practice to use a comma to separate their names, but note that it still should be a string, not a Python list. license: This is the identifier for the license under which the module is made available. It is limited to a predefined list, and the most frequent option is AGPL-3. Other possibilities include LGPL-3, Other OSI approved license, and Other proprietary. website: This is a URL people should visit to know more about the module or the authors. category: This is used to organize modules in areas of interest. The list of the standard category names available can be seen at https://github.com/odoo/odoo/blob/11.0/odoo/addons/base/module/module_data.xml. However, it's also possible to define other new category names here. version: This is the modules' version numbers. It can be used by the Odoo app store to detect newer versions for installed modules. 
If the version number does not begin with the Odoo target version (for example, 11.0), it will be automatically added. Nevertheless, it will be more informative if you explicitly state the Odoo target version, for example, using 11.0.1.0.0 or 11.0.1.0 instead of 1.0.0 or 1.0. depends: This is a list with the technical names of the modules it directly depends on. If none, we should at least depend on the base module. Don't forget to include any module defining XML IDs, Views, or Models referenced by this module. That will ensure that they all load in the correct order, avoiding hard-to-debug errors. data: This is a list of relative paths to the data files to load with module installation or upgrade. The paths are relative to the module root directory. Usually, these are XML and CSV files, but it's also possible to have YAML data files. demo: This is the list of relative paths to the files with demonstration data to load. These will only be loaded if the database was created with the Demo Data flag enabled. The image that is used as the module icon is the PNG file at static/description/icon.png. Odoo is expected to have significant changes between major versions, so modules built for one major version are likely to not be compatible with the next version without conversion and migration work. Due to this, it's important to be sure about a module's Odoo target version before installing it. There's more… Instead of having the long description in the module manifest, it's possible to have it in its own file. Since version 8.0, it can be replaced by a README file, with either a .txt, .rst, or an .md (Markdown) extension. Otherwise, include a description/index.html file in the module. This HTML description will override a description defined in the manifest file. There are a few more keys that are frequently used: application: If this is True, the module is listed as an application. Usually, this is used for the central module of a functional area auto_install: If this is True, it indicates that this is a "glue" module, which is automatically installed when all of its dependencies are installed installable: If this is True (the default value), it indicates that the module is available for installation Organizing the addon module file structure An addon module contains code files and other assets such as XML files and images. For most of these files, we are free to choose where to place them inside the module directory. However, Odoo uses some conventions on the module structure, so it is advisable to follow them. Getting ready We are expected to have an addon module directory with only the __init__.py and __manifest__.py files. In this recipe, we suppose this is local-addons/my_module. How to do it... To create the basic skeleton for the addon module, perform the given steps: Create the directories for code files: $ cd local-addons/my_module $ mkdir models $ touch models/__init__.py $ mkdir controllers $ touch controllers/__init__.py $ mkdir views $ mkdir security $ mkdir data $ mkdir demo $ mkdir i18n $ mkdir -p static/description Edit the module's top __init__.py file so that the code in subdirectories is loaded: from . import models from . import controllers This should get us started with a structure containing the most used directories, similar to this one: . ├── __init__.py ├── __manifest__.py │ ├── controllers │ └── __init__.py ├── data ├── demo ├── i18n ├── models │ └── __init__.py ├── security ├── static │ └── description └── views How it works... 
To provide some context, an Odoo addon module can have three types of files:  The Python code is loaded by the __init__.py files, where the .py files and code subdirectories are imported. Subdirectories containing code Python, in turn, need their own __init__.py. Data files that are to be declared in the data and demo keys of the __manifest__.py module manifest in order to be loaded are usually XML and CSV files for the user interface, fixture data, and demonstration data. There can also be YAML files, which can include some procedural instructions that are run when the module is loaded, for instance, to generate or update records programmatically rather than statically in an XML file. Web assets such as JavaScript code and libraries, CSS, and QWeb/HTML templates also play an important part. There are declared through an XML file extending the master templates to add these assets to the web client or website pages. The addon files are to be organized in these directories: models/ contains the backend code files, creating the Models and their business logic. A file per Model is recommended, with the same name as the model, for example, library_book.py for the library.book model. views/ contains the XML files for the user interface, with the actions, forms, lists, and so on. As with models, it is advised to have one file per model. Filenames for website templates are expected to end with the _template suffix. data/ contains other data files with module initial data. demo/ contains data files with demonstration data, useful for tests, training, or module evaluation. i18n/ is where Odoo will look for the translation .pot and .po files. These files don't need to be mentioned in the manifest file. security/ contains the data files defining access control lists, usually a ir.model.access.csv file, and possibly an XML file to define access Groups and Record Rules for row level security. controllers/ contains the code files for the website controllers, and for modules providing that kind of feature. static/ is where all web assets are expected to be placed. Unlike other directories, this directory name is not just a convention, and only files inside it can be made available for the Odoo web pages. They don't need to be mentioned in the module manifest, but will have to be referred to in the web template. When adding new files to a module, don't forget to declare them either in the __manifest__.py (for data files) or __init__.py (for code files); otherwise, those files will be ignored and won't be loaded. Adding models Models define the data structures to be used by our business applications. This recipe shows how to add a basic model to a module. We will use a simple book library example to explain this; we want a model to represent books. Each book has a name and a list of authors. Getting ready We should have a module to work with. We will use an empty my_module for our explanation. How to do it... To add a new Model, we add a Python file describing it and then upgrade the addon module (or install it, if it was not already done). 
The paths used are relative to our addon module location (for example, ~/odoo-dev/local-addons/my_module/): Add a Python file to the models/library_book.py module with the following code: from odoo import models, fields class LibraryBook(models.Model): _name = 'library.book' name = fields.Char('Title', required=True) date_release = fields.Date('Release Date') author_ids = fields.Many2many( 'res.partner', string='Authors' ) Add a Python initialization file with code files to be loaded by the models/__init__.py module with the following code: from . import library_book Edit the module Python initialization file to have the models/ directory loaded by the module: from . import models Upgrade the Odoo module, either from the command line or from the apps menu in the user interface. If you look closely at the server log while upgrading the module, you should see this line: odoo.modules.registry: module my_module: creating or updating database table After this, the new library.book model should be available in our Odoo instance. If we have the technical tools activated, we can confirm that by looking it up at Settings | Technical | Database Structure | Models. How it works... Our first step was to create a Python file where our new module was created. Odoo models are objects derived from the Odoo Model Python class. When a new model is defined, it is also added to a central model registry. This makes it easier for other modules to make modifications to it later. Models have a few generic attributes prefixed with an underscore. The most important one is _name, providing a unique internal identifier to be used throughout the Odoo instance. The model fields are defined as class attributes. We began defining the name field of the Char type. It is convenient for models to have this field, because by default, it is used as the record description when referenced from other models. We also used an example of a relational field, author_ids. It defines a many-to-many relation between Library Books and the partners. A book can have many authors and each author can have written many books. Next, we must make our module aware of this new Python file. This is done by the __init__.py files. Since we placed the code inside the models/ subdirectory, we need the previous __init__ file to import that directory, which should, in turn, contain another __init__ file importing each of the code files there (just one, in our case). Changes to Odoo models are activated by upgrading the module. The Odoo server will handle the translation of the model class into database structure changes. Although no example is provided here, business logic can also be added to these Python files, either by adding new methods to the Model's class or by extending the existing methods, such as create() or write(). Thus we learned about the structure of an Odoo addon module and learned, step-by-step, how to create a simple module from scratch. To know more about, how to define access rules for your data;  how to expose your data models to end users on the back end and on the front end; and how to create beautiful PDF versions of your data, read this book Odoo 11 Development Cookbook - Second Edition. ERP tool in focus: Odoo 11 Building Your First Odoo Application How to Scaffold a New module in Odoo 11

Sizing and Configuring your Hadoop Cluster

Oli Huggins
16 Feb 2014
10 min read
This article, written by Khaled Tannir, the author of Optimizing Hadoop for MapReduce, discusses two of the most important aspects to consider while optimizing Hadoop for MapReduce: sizing and configuring the Hadoop cluster correctly.

Sizing your Hadoop cluster

Hadoop's performance depends on multiple factors based on well-configured software layers and well-dimensioned hardware resources that utilize its CPU, memory, hard drive (storage I/O), and network bandwidth efficiently. Planning the Hadoop cluster remains a complex task that requires a minimal knowledge of the Hadoop architecture and may be outside the scope of this book. This is what we are trying to make clearer in this section by providing explanations and formulas in order to help you best estimate your needs. We will introduce a basic guideline that will help you to make your decision while sizing your cluster and answer some "How to plan" questions about the cluster's needs, such as the following:

How to plan my storage?
How to plan my CPU?
How to plan my memory?
How to plan the network bandwidth?

While sizing your Hadoop cluster, you should also consider the data volume that the final users will process on the cluster. The answer to this question will lead you to determine how many machines (nodes) you need in your cluster to process the input data efficiently and determine the disk/memory capacity of each one.

Hadoop uses a Master/Slave architecture and is both memory- and CPU-intensive. It has two main components:

JobTracker: This is the critical component in this architecture and monitors jobs that are running on the cluster
TaskTracker: This runs tasks on each node of the cluster

To work efficiently, HDFS must have high-throughput hard drives with an underlying filesystem that supports the HDFS read and write pattern (large blocks). This pattern defines one big read (or write) at a time with a block size of 64 MB, 128 MB, up to 256 MB. Also, the network layer should be fast enough to cope with intermediate data transfer and block replication.

HDFS is itself based on a Master/Slave architecture with two main components: the NameNode / Secondary NameNode and DataNode components. These are critical components and need a lot of memory to store the file's meta information, such as attributes and file localization, directory structure, and names, and to process data. The NameNode component ensures that data blocks are properly replicated in the cluster. The second component, the DataNode component, manages the state of an HDFS node and interacts with its data blocks. It requires a lot of I/O for processing and data transfer.

Typically, the MapReduce layer has two main prerequisites: input datasets must be large enough to fill a data block and be split into smaller and independent data chunks (for example, a 10 TB text file can be split into 40,960 blocks of 256 MB each, and each line of text in any data block can be processed independently). The second prerequisite is that it should consider data locality, which means that the MapReduce code is moved to where the data lies, not the opposite (it is more efficient to move a few megabytes of code to be close to the data to be processed than to move many data blocks over the network or the disk). This involves having a distributed storage system that exposes data locality and allows the execution of code on any storage node. 
Concerning the network bandwidth, it is used at two instances: during the replication process and following a file write, and during the balancing of the replication factor when a node fails. The most common practice to size a Hadoop cluster is sizing the cluster based on the amount of storage required. The more data into the system, the more will be the machines required. Each time you add a new node to the cluster, you get more computing resources in addition to the new storage capacity. Let's consider an example cluster growth plan based on storage and learn how to determine the storage needed, the amount of memory, and the number of DataNodes in the cluster. Daily data input 100 GB Storage space used by daily data input = daily data input * replication factor = 300 GB HDFS replication factor 3 Monthly growth 5% Monthly volume = (300 * 30) + 5% =  9450 GB After one year = 9450 * (1 + 0.05)^12 = 16971 GB Intermediate MapReduce data 25% Dedicated space = HDD size * (1 - Non HDFS reserved space per disk / 100 + Intermediate MapReduce data / 100) = 4 * (1 - (0.25 + 0.30)) = 1.8 TB (which is the node capacity) Non HDFS reserved space per disk 30% Size of a hard drive disk 4 TB Number of DataNodes needed to process: Whole first month data = 9.450 / 1800 ~= 6 nodes The 12th month data = 16.971/ 1800 ~= 10 nodes Whole year data = 157.938 / 1800 ~= 88 nodes Do not use RAID array disks on a DataNode. HDFS provides its own replication mechanism. It is also important to note that for every disk, 30 percent of its capacity should be reserved to non-HDFS use. It is easy to determine the memory needed for both NameNode and Secondary NameNode. The memory needed by NameNode to manage the HDFS cluster metadata in memory and the memory needed for the OS must be added together. Typically, the memory needed by Secondary NameNode should be identical to NameNode. Then you can apply the following formulas to determine the memory amount: NameNode memory 2 GB - 4 GB Memory amount = HDFS cluster management memory + NameNode memory + OS memory Secondary NameNode memory 2 GB - 4 GB OS memory 4 GB - 8 GB HDFS memory 2 GB - 8 GB At least NameNode (Secondary NameNode) memory = 2 + 2 + 4 = 8 GB It is also easy to determine the DataNode memory amount. But this time, the memory amount depends on the physical CPU's core number installed on each DataNode. DataNode process memory 4 GB - 8 GB Memory amount = Memory per CPU core * number of CPU's core + DataNode process memory + DataNode TaskTracker memory + OS memory DataNode TaskTracker memory 4 GB - 8 GB OS memory 4 GB - 8 GB CPU's core number 4+ Memory per CPU core 4 GB - 8 GB At least DataNode memory = 4*4 + 4 + 4 + 4 = 28 GB Regarding how to determine the CPU and the network bandwidth, we suggest using the now-a-days multicore CPUs with at least four physical cores per CPU. The more physical CPU's cores you have, the more you will be able to enhance your job's performance (according to all rules discussed to avoid underutilization or overutilization). For the network switches, we recommend to use equipment having a high throughput (such as 10 GB) Ethernet intra rack with N x 10 GB Ethernet inter rack. Configuring your cluster correctly To run Hadoop and get a maximum performance, it needs to be configured correctly. But the question is how to do that. Well, based on our experiences, we can say that there is not one single answer to this question. 
The experiences gave us a clear indication that the Hadoop framework should be adapted for the cluster it is running on and sometimes also to the job. In order to configure your cluster correctly, we recommend running a Hadoop job(s) the first time with its default configuration to get a baseline. Then, you will check the resource's weakness (if it exists) by analyzing the job history logfiles and report the results (measured time it took to run the jobs). After that, iteratively, you will tune your Hadoop configuration and re-run the job until you get the configuration that fits your business needs. The number of mappers and reducer tasks that a job should use is important. Picking the right amount of tasks for a job can have a huge impact on Hadoop's performance. The number of reducer tasks should be less than the number of mapper tasks. Google reports one reducer for 20 mappers; the others give different guidelines. This is because mapper tasks often process a lot of data, and the result of those tasks are passed to the reducer tasks. Often, a reducer task is just an aggregate function that processes a minor portion of the data compared to the mapper tasks. Also, the correct number of reducers must also be considered. The number of mappers and reducers is related to the number of physical cores on the DataNode, which determines the maximum number of jobs that can run in parallel on DataNode. In a Hadoop cluster, master nodes typically consist of machines where one machine is designed as a NameNode, and another as a JobTracker, while all other machines in the cluster are slave nodes that act as DataNodes and TaskTrackers. When starting the cluster, you begin starting the HDFS daemons on the master node and DataNode daemons on all data nodes machines. Then, you start the MapReduce daemons: JobTracker on the master node and the TaskTracker daemons on all slave nodes. The following diagram shows the Hadoop daemon's pseudo formula: When configuring your cluster, you need to consider the CPU cores and memory resources that need to be allocated to these daemons. In a huge data context, it is recommended to reserve 2 CPU cores on each DataNode for the HDFS and MapReduce daemons. While in a small and medium data context, you can reserve only one CPU core on each DataNode. Once you have determined the maximum mapper's slot numbers, you need to determine the reducer's maximum slot numbers. Based on our experience, there is a distribution between the Map and Reduce tasks on DataNodes that give good performance result to define the reducer's slot numbers the same as the mapper's slot numbers or at least equal to two-third mapper slots. Let's learn to correctly configure the number of mappers and reducers and assume the following cluster examples: Cluster machine Nb Medium data size Large data size DataNode CPU cores 8 Reserve 1 CPU core Reserve 2 CPU cores DataNode TaskTracker daemon 1 1 1 DataNode HDFS daemon 1 1 1 Data block size 128 MB 256 MB DataNode CPU % utilization 95% to 120% 95% to 150% Cluster nodes 20 40 Replication factor 2 3 We want to use the CPU resources at least 95 percent, and due to Hyper-Threading, one CPU core might process more than one job at a time, so we can set the Hyper-Threading factor range between 120 percent and 170 percent. 
Maximum mapper's slot numbers on one node in a large data context = number of physical cores - reserved core * (0.95 -> 1.5) Reserved core = 1 for TaskTracker + 1 for HDFS Let's say the CPU on the node will use up to 120% (with Hyper-Threading) Maximum number of mapper slots = (8 - 2) * 1.2 = 7.2 rounded down to 7 Let's apply the 2/3 mappers/reducers technique: Maximum number of reducers slots = 7 * 2/3 = 5 Let's define the number of slots for the cluster: Mapper's slots: = 7 * 40 = 280 Reducer's slots: = 5 * 40 = 200 The block size is also used to enhance performance. The default Hadoop configuration uses 64 MB blocks, while we suggest using 128 MB in your configuration for a medium data context as well and 256 MB for a very large data context. This means that a mapper task can process one data block (for example, 128 MB) by only opening one block. In the default Hadoop configuration (set to 2 by default), two mapper tasks are needed to process the same amount of data. This may be considered as a drawback because initializing one more mapper task and opening one more file takes more time. Summary In this article, we learned about sizing and configuring the Hadoop cluster for optimizing it for MapReduce. Resources for Article: Further resources on this subject: Hadoop Tech Page Hadoop and HDInsight in a Heartbeat Securing the Hadoop Ecosystem Advanced Hadoop MapReduce Administration

Documenting RESTful Java Web Services using Swagger

Fatema Patrawala
22 May 2018
14 min read
In the earlier days, many software solution providers did not really pay attention to documenting their RESTful web APIs. However, many API vendors soon realized the need for a good API documentation solution. Today, you will find a variety of approaches to documenting RESTful web APIs. There are some popular solutions available today for describing, producing, consuming, and visualizing RESTful web services. In this tutorial, we will explore Swagger which offers a specification and a complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. The Swagger framework works with many popular programming languages, such as Java, Scala, Clojure, Groovy, JavaScript, and .NET. This tutorial is an extract taken from the book RESTFul Java Web Services - Third Edition, written by Bogunuva Mohanram Balachandar. This book will help you master core REST concepts and create RESTful web services in Java. A glance at the market adoption of Swagger The greatest strength of Swagger is its powerful API platform, which satisfies the client, documentation, and server needs. The Swagger UI framework serves as the documentation and testing utility. Its support for different languages and its matured tooling support have really grabbed the attention of many API vendors, and it seems to be the one with the most traction in the community today. Swagger is built using Scala. This means that when you package your application, you need to have the entire Scala runtime into your build, which may considerably increase the size of your deployable artifact (the EAR or WAR file). That said, Swagger is however improving with each release. For example, the Swagger 2.0 release allows you to use YAML for describing APIs. So, keep a watch on this framework. The Swagger framework has the following three major components: Server: This component hosts the RESTful web API descriptions for the services that the clients want to use Client: This component uses the RESTful web API descriptions from the server to provide an automated interfacing mechanism to invoke the REST APIs User interface: This part of the framework reads a description of the APIs from the server, renders it as a web page, and provides an interactive sandbox to test the APIs A quick overview of the Swagger structure Let's take a quick look at the Swagger file structure before moving further. The Swagger 1.x file contents that describe the RESTful APIs are represented as the JSON objects. With Swagger 2.0 release onwards, you can also use the YAML format to describe the RESTful web APIs. This section discusses the Swagger file contents represented as JSON. The basic constructs that we'll discuss in this section for JSON are also applicable for the YAML representation of APIs, although the syntax differs. When using the JSON structure for describing the REST APIs, the Swagger file uses a Swagger object as the root document object for describing the APIs. Here is a quick overview of the various properties that you will find in a Swagger object: The following properties describe the basic information about the RESTful web application: swagger: This property specifies the Swagger version. info: This property provides metadata about the API. host: This property locates the server where the APIs are hosted. basePath: This property is the base path for the API. It is relative to the value set for the host field. schemes: This property transfers the protocol for a RESTful web API, such as HTTP and HTTPS. 
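To make these basic properties concrete, here is a minimal, illustrative skeleton of the top of a Swagger 2.0 document. This is not a complete, valid API description (a real one also needs the paths and definitions discussed next); the host and basePath values mirror the hrapp example used later in this article, and the title is just a placeholder:

{
  "swagger": "2.0",
  "info": {
    "title": "HR Application API",
    "version": "1.0.0"
  },
  "host": "localhost:8080",
  "basePath": "/hrapp/webresources",
  "schemes": ["http"]
}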
The following properties allow you to specify the default values at the application level, which can be optionally overridden for each operation: consumes: This property specifies the default internet media types that the APIs consume. It can be overridden at the API level. produces: This property specifies the default internet media types that the APIs produce. It can be overridden at the API level. securityDefinitions: This property globally defines the security schemes for the document. These definitions can be referred to from the security schemes specified for each API. The following properties describe the operations (REST APIs): paths: This property specifies the path to the API or the resources. The path must be appended to the basePath property in order to get the full URI. definitions: This property specifies the input and output entity types for operations on REST resources (APIs). parameters: This property specifies the parameter details for an operation. responses: This property specifies the response type for an operation. security: This property specifies the security schemes in order to execute this operation. externalDocs: This property links to the external documentation. A complete Swagger 2.0 specification is available at Github. The Swagger specification was donated to the Open API initiative, which aims to standardize the format of the API specification and bring uniformity in usage. Swagger 2.0 was enhanced as part of the Open API initiative, and a new specification OAS 3.0 (Open API specification 3.0) was released in July 2017. The tools are still being worked on to support OAS3.0. I encourage you to explore the Open API Specification 3.0 available at Github. Overview of Swagger APIs The Swagger framework consists of many sub-projects in the Git repository, each built with a specific purpose. Here is a quick summary of the key projects: swagger-spec: This repository contains the Swagger specification and the project in general. swagger-ui: This project is an interactive tool to display and execute the Swagger specification files. It is based on swagger-js (the JavaScript library). swagger-editor: This project allows you to edit the YAML files. It is released as a part of Swagger 2.0. swagger-core: This project provides the scala and java library to generate the Swagger specifications directly from code. It supports JAX-RS, the Servlet APIs, and the Play framework. swagger-codegen: This project provides a tool that can read the Swagger specification files and generate the client and server code that consumes and produces the specifications. In the next section, you will learn how to use the swagger-core project offerings to generate the Swagger file for a JAX-RS application. Generating Swagger from JAX-RS Both the WADL and RAML tools that we discussed in the previous sections use the JAX-RS annotations metadata to generate the documentation for the APIs. The Swagger framework does not fully rely on the JAX-RS annotations but offers a set of proprietary annotations for describing the resources. This helps in the following scenarios: The Swagger core annotations provide more flexibility for generating documentations compliant with the Swagger specifications It allows you to use Swagger for generating documentations for web components that do not use the JAX-RS annotations, such as servlet and the servlet filter The Swagger annotations are designed to work with JAX-RS, improving the quality of the API documentation generated by the framework. 
Note that the swagger-core project is currently supported on the Jersey and Restlet implementations. If you are considering any other runtime for your JAX-RS application, check the respective product manual and ensure the support before you start using Swagger for describing APIs. Some of the commonly used Swagger annotations are as follows: The @com.wordnik.swagger.annotations.Api annotation marks a class as a Swagger resource. Note that only classes that are annotated with @Api will be considered for generating the documentation. The @Api annotation is used along with class-level JAX-RS annotations such as @Produces and @Path. Annotations that declare an operation are as follows: @com.wordnik.swagger.annotations.ApiOperation: This annotation describes a resource method (operation) that is designated to respond to HTTP action methods, such as GET, PUT, POST, and DELETE. @com.wordnik.swagger.annotations.ApiParam: This annotation is used for describing parameters used in an operation. This is designed for use in conjunction with the JAX-RS parameters, such as @Path, @PathParam, @QueryParam, @HeaderParam, @FormParam, and @BeanParam. @com.wordnik.swagger.annotations.ApiImplicitParam: This annotation allows you to define the operation parameters manually. You can use this to override the @PathParam or @QueryParam values specified on a resource method with custom values. If you have multiple ImplicitParam for an operation, wrap them with @ApiImplicitParams. @com.wordnik.swagger.annotations.ApiResponse: This annotation describes the status codes returned by an operation. If you have multiple responses, wrap them by using @ApiResponses. @com.wordnik.swagger.annotations.ResponseHeader: This annotation describes a header that can be provided as part of the response. @com.wordnik.swagger.annotations.Authorization: This annotation is used within either Api or ApiOperation to describe the authorization scheme used on a resource or an operation. Annotations that declare API models are as follows: @com.wordnik.swagger.annotations.ApiModel: This annotation describes the model objects used in the application. @com.wordnik.swagger.annotations.ApiModelProperty: This annotation describes the properties present in the ApiModel object. A complete list of the Swagger core annotations is available at Github. Having learned the basics of Swagger, it is time for us to move on and build a simple example to get a feel of the real-life use of Swagger in a JAX-RS application. As always, this example uses the Jersey implementation of JAX-RS. Specifying dependency to Swagger To use Swagger in your Jersey 2 application, specify the dependency to swagger-jersey2-jaxrs jar. If you use Maven for building the source, the dependency to the swagger-core library will look as follows: <dependency> <groupId>com.wordnik</groupId> <artifactId>swagger-jersey2-jaxrs</artifactId> <version>1.5.1-M1</version> <!-use the appropriate version here, 1.5.x supports Swagger 2.0 spec --> </dependency> You should be careful while choosing the swagger-core version for your product. Note that swagger-core 1.3 produces the Swagger 1.2 definitions, whereas swagger-core 1.5 produces the Swagger 2.0 definitions. The next step is to hook the Swagger provider components into your Jersey application. 
Hooking the Swagger provider components into the application is done by configuring the Jersey servlet (org.glassfish.jersey.servlet.ServletContainer) in web.xml, as shown here:

<servlet>
    <servlet-name>jersey</servlet-name>
    <servlet-class>
        org.glassfish.jersey.servlet.ServletContainer
    </servlet-class>
    <init-param>
        <param-name>jersey.config.server.provider.packages</param-name>
        <param-value>
            com.wordnik.swagger.jaxrs.json,
            com.packtpub.rest.ch7.swagger
        </param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>jersey</servlet-name>
    <url-pattern>/webresources/*</url-pattern>
</servlet-mapping>

To enable the Swagger documentation features, it is necessary to load the Swagger framework provider classes from the com.wordnik.swagger.jaxrs.listing package. The package names of the JAX-RS resource classes and provider components are configured as the value of the jersey.config.server.provider.packages init parameter. The Jersey framework scans the configured packages to identify the resource classes and provider components during the deployment of the application. Map the Jersey servlet to a request URI so that it responds to the REST resource calls that match the URI. If you prefer not to use web.xml, you can also use a custom application subclass to specify all of the configuration entries discussed here programmatically. To try this option, refer to Github.

Configuring the Swagger definition
After specifying the Swagger provider components, the next step is to configure and initialize the Swagger definition. This is done by configuring the com.wordnik.swagger.jersey.config.JerseyJaxrsConfig servlet in web.xml, as follows:

<servlet>
    <servlet-name>Jersey2Config</servlet-name>
    <servlet-class>
        com.wordnik.swagger.jersey.config.JerseyJaxrsConfig
    </servlet-class>
    <init-param>
        <param-name>api.version</param-name>
        <param-value>1.0.0</param-value>
    </init-param>
    <init-param>
        <param-name>swagger.api.basepath</param-name>
        <param-value>http://localhost:8080/hrapp/webresources</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

Here is a brief overview of the initialization parameters used for JerseyJaxrsConfig:

api.version: This parameter specifies the API version for your application.
swagger.api.basepath: This parameter specifies the base path for your application.

With this step, we have finished all the configuration entries for using Swagger in a JAX-RS (Jersey 2 implementation) application. In the next section, we will see how to use the Swagger metadata annotations on a JAX-RS resource class to describe the resources and operations.

Adding a Swagger annotation on a JAX-RS resource class
Let's revisit the DepartmentResource class used in the previous sections. In this example, we will enhance the DepartmentResource class by adding the Swagger annotations discussed earlier. We use @Api to mark DepartmentResource as the Swagger resource.
The @ApiOperation annotation describes the operation exposed by the DepartmentResource class:

import com.wordnik.swagger.annotations.Api;
import com.wordnik.swagger.annotations.ApiOperation;
import com.wordnik.swagger.annotations.ApiParam;
import com.wordnik.swagger.annotations.ApiResponse;
import com.wordnik.swagger.annotations.ApiResponses;
// Other imports are removed for brevity

@Stateless
@Path("departments")
@Api(value = "/departments", description = "Get departments details")
public class DepartmentResource {

    @ApiOperation(value = "Find department by id",
        notes = "Specify a valid department id",
        response = Department.class)
    @ApiResponses(value = {
        @ApiResponse(code = 400, message = "Invalid department id supplied"),
        @ApiResponse(code = 404, message = "Department not found") })
    @GET
    @Path("{id}")
    @Produces("application/json")
    public Department findDepartment(
            @ApiParam(value = "The department id", required = true)
            @PathParam("id") Integer id) {
        return findDepartmentEntity(id);
    }

    // Rest of the code is removed for brevity
}

To view the Swagger documentation, build the source and deploy it to the server. Once the application is deployed, you can navigate to http://<host>:<port>/<application-name>/<application-path>/swagger.json to view the Swagger resource listing in the JSON format. The Swagger URL for this example will look like the following:

http://localhost:8080/hrapp/webresources/swagger.json

The following sample Swagger representation is for the DepartmentResource class discussed in this section:

{
  "swagger": "2.0",
  "info": {
    "version": "1.0.0",
    "title": ""
  },
  "host": "localhost:8080",
  "basePath": "/hrapp/webresources",
  "tags": [
    {
      "name": "user"
    }
  ],
  "schemes": [
    "http"
  ],
  "paths": {
    "/departments/{id}": {
      "get": {
        "tags": [
          "user"
        ],
        "summary": "Find department by id",
        "description": "",
        "operationId": "loginUser",
        "produces": [
          "application/json"
        ],
        "parameters": [
          {
            "name": "id",
            "in": "path",
            "description": "The department id",
            "required": true,
            "type": "integer",
            "format": "int32"
          }
        ],
        "responses": {
          "200": {
            "description": "successful operation",
            "schema": {
              "$ref": "#/definitions/Department"
            }
          },
          "400": {
            "description": "Invalid department id supplied"
          },
          "404": {
            "description": "Department not found"
          }
        }
      }
    }
  },
  "definitions": {
    "Department": {
      "properties": {
        "departmentId": {
          "type": "integer",
          "format": "int32"
        },
        "departmentName": {
          "type": "string"
        },
        "_persistence_shouldRefreshFetchGroup": {
          "type": "boolean"
        }
      }
    }
  }
}

As mentioned at the beginning of this section, Swagger 2.0 onward supports the YAML representation of APIs. You can access the YAML representation by navigating to swagger.yaml. For instance, in the preceding example, the following URI gives you the YAML file:

http://<host>:<port>/<application-name>/<application-path>/swagger.yaml

The complete source code for this example is available at the Packt website. You can download the example from the Packt website link that we mentioned at the beginning of this book, in the Preface section. In the downloaded source code, see the rest-chapter7-service-doctools/rest-chapter7-jaxrs2swagger project.

Generating a Java client from Swagger
The Swagger framework is packaged with the Swagger code generation tool as well (swagger-codegen-cli), which allows you to generate client libraries by parsing the Swagger documentation file. You can download the swagger-codegen-cli.jar file from the Maven central repository by searching for swagger-codegen-cli on the Maven central search site.
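Once the jar is on your machine, a couple of its subcommands are handy for exploring what the tool can do. The commands below are based on the swagger-codegen-cli releases I have used; if your version behaves differently, run the jar with help to see what is available:

# List the client/server languages the generator supports
java -jar swagger-codegen-cli.jar langs

# Show the available commands and their options
java -jar swagger-codegen-cli.jar help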
Alternatively, instead of downloading the prebuilt jar, you can clone the Git repository and build the source locally by executing mvn install. Once you have swagger-codegen-cli.jar locally available, run the following command to generate the Java client for the REST API described in Swagger:

java -jar swagger-codegen-cli.jar generate \
    -i <Input-URI-or-File-location-for-swagger.json> \
    -l <client-language-to-generate> \
    -o <output-directory>

The following example illustrates the use of this tool:

java -jar swagger-codegen-cli-2.1.0-M2.jar generate \
    -i http://localhost:8080/hrapp/webresources/swagger.json \
    -l java \
    -o generated-sources/java

When you run this tool, it scans through the RESTful web API description available at http://localhost:8080/hrapp/webresources/swagger.json and generates a Java client source in the generated-sources/java folder. Note that the Swagger code generation process uses mustache templates for generating the client source. If you are not happy with the generated source, Swagger lets you specify your own mustache template files. Use the -t flag to specify your template folder. To learn more, refer to the README.md file at Github.

To learn more, grab this latest edition of RESTful Java Web Services to build robust, scalable, and secure RESTful web services using Java APIs.

Getting started with Django RESTful Web Services
How to develop RESTful web services in Spring
Testing RESTful Web Services with Postman

Generative Adversarial Networks: Generate images using Keras GAN [Tutorial]

Amey Varangaonkar
21 Aug 2018
12 min read
You might have worked with the popular MNIST dataset before, but in this article, we will be generating new MNIST-like images with a Keras GAN. It can take a very long time to train a GAN; however, this problem is small enough to run on most laptops in a few hours, which makes it a great example. The following excerpt is taken from the book Deep Learning Quick Reference, authored by Mike Bernico.

The network architecture that we will be using here has been found and optimized by many folks, including the authors of the DCGAN paper and people like Erik Linder-Norén, whose excellent collection of GAN implementations, called Keras GAN, served as the basis of the code we used here.

Loading the MNIST dataset
The MNIST dataset consists of 70,000 hand-drawn digits, 0 to 9. Keras provides us with a built-in loader that splits it into 60,000 training images and 10,000 test images. We will use the following code to load the dataset:

from keras.datasets import mnist

def load_data():
    (X_train, _), (_, _) = mnist.load_data()
    X_train = (X_train.astype(np.float32) - 127.5) / 127.5
    X_train = np.expand_dims(X_train, axis=3)
    return X_train

As you probably noticed, we're not returning any of the labels or the testing dataset. We're only going to use the training dataset. The labels aren't needed because the only labels we will be using are 0 for fake and 1 for real. These are real images, so they will all be assigned a label of 1 at the discriminator.

Building the generator
The generator uses a few new layers that we will talk about in this section. First, take a moment to skim through the following code:

def build_generator(noise_shape=(100,)):
    input = Input(noise_shape)
    x = Dense(128 * 7 * 7, activation="relu")(input)
    x = Reshape((7, 7, 128))(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(128, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(64, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(1, kernel_size=3, padding="same")(x)
    out = Activation("tanh")(x)
    model = Model(input, out)
    print("-- Generator -- ")
    model.summary()
    return model

We have not previously used the UpSampling2D layer. This layer increases the rows and columns of the input tensor, leaving the channels unchanged. It does this by repeating the values in the input tensor. By default, it will double the input. If we give an UpSampling2D layer a 7 x 7 x 128 input, it will give us a 14 x 14 x 128 output.

Typically when we build a CNN, we start with an image that is very tall and wide and use convolutional layers to get a tensor that's very deep but less tall and wide. Here we will do the opposite. We'll use a dense layer and a reshape to start with a 7 x 7 x 128 tensor and then, after doubling it twice, we'll be left with a 28 x 28 tensor. Since we need a grayscale image, we can use a convolutional layer with a single unit to get a 28 x 28 x 1 output. This sort of generator arithmetic is a little off-putting and can seem awkward at first, but after a few painful hours, you will get the hang of it!

Building the discriminator
The discriminator is really, for the most part, the same as any other CNN. Of course, there are a few new things that we should talk about.
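If the upsampling arithmetic feels abstract, a quick shape check makes it concrete. The following throwaway sketch (it assumes the same Keras imports used by the generator code above) builds a model containing nothing but the upsampling step and prints its output shape:

from keras.layers import Input, UpSampling2D
from keras.models import Model

# A stand-alone model containing only the upsampling step from the generator
inp = Input(shape=(7, 7, 128))
out = UpSampling2D()(inp)  # repeats rows and columns, doubling both by default
shape_check = Model(inp, out)

print(shape_check.output_shape)  # (None, 14, 14, 128)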
We will use the following code to build the discriminator:

def build_discriminator(img_shape):
    input = Input(img_shape)
    x = Conv2D(32, kernel_size=3, strides=2, padding="same")(input)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
    x = ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(256, kernel_size=3, strides=1, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    out = Dense(1, activation='sigmoid')(x)
    model = Model(input, out)
    print("-- Discriminator -- ")
    model.summary()
    return model

First, you might notice the oddly shaped ZeroPadding2D() layer. After the second convolution, our tensor has gone from 28 x 28 x 1 to 7 x 7 x 64. This layer just gets us back to an even number, adding zeros on one side of both the rows and columns so that our tensor is now 8 x 8 x 64. More unusual is the use of both batch normalization and dropout. Typically, these two layers are not used together; however, in the case of GANs, they do seem to benefit the network.

Building the stacked model
Now that we've assembled both the generator and the discriminator, we need to assemble a third model that is the stack of both models together, which we can use for training the generator given the discriminator loss. To do that we can just create a new model, this time using the previous models as layers in the new model, as shown in the following code:

discriminator = build_discriminator(img_shape=(28, 28, 1))
generator = build_generator()

z = Input(shape=(100,))
img = generator(z)
discriminator.trainable = False
real = discriminator(img)
combined = Model(z, real)

Notice that we're setting the discriminator's trainable attribute to False before building the combined model. This means that for this model we will not be updating the weights of the discriminator during backpropagation. We will freeze these weights and only move the generator weights with the stack. The discriminator will be trained separately.

Now that all the models are built, they need to be compiled, as shown in the following code:

gen_optimizer = Adam(lr=0.0002, beta_1=0.5)
disc_optimizer = Adam(lr=0.0002, beta_1=0.5)

discriminator.compile(loss='binary_crossentropy',
                      optimizer=disc_optimizer,
                      metrics=['accuracy'])

generator.compile(loss='binary_crossentropy',
                  optimizer=gen_optimizer)

combined.compile(loss='binary_crossentropy',
                 optimizer=gen_optimizer)

As you'll notice, we're creating two custom Adam optimizers. This is because many times we will want to change the learning rate for only the discriminator or generator, slowing one or the other down so that we end up with a stable GAN where neither is overpowering the other. You'll also notice that we're using beta_1 = 0.5. This is a recommendation from the original DCGAN paper that we've carried forward and also had success with. A learning rate of 0.0002 is a good place to start as well, and was found in the original DCGAN paper.

The training loop
We have previously had the luxury of calling .fit() on our model and letting Keras handle the painful process of breaking the data apart into mini-batches and training for us.
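The listings in this example use a number of Keras layers, NumPy, and matplotlib without showing the setup code. Here is a sketch of the imports and driver values that the loop below appears to assume; the batch size, epoch count, and images output directory are illustrative choices of mine, not values taken from the book:

import os

import numpy as np
import matplotlib.pyplot as plt

from keras.layers import (Input, Dense, Reshape, Flatten, Dropout,
                          BatchNormalization, Activation, ZeroPadding2D,
                          UpSampling2D, Conv2D, LeakyReLU)
from keras.models import Model
from keras.optimizers import Adam

# Illustrative hyperparameters: tune these for your own runs
batch_size = 32
epochs = 10

# Training data (load_data is defined earlier) and a folder for save_imgs output
X_train = load_data()
os.makedirs("images", exist_ok=True)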
Unfortunately, we can't rely on .fit() here: because we need to perform separate updates for the discriminator and the stacked model for every single batch, we're going to have to do things the old-fashioned way, with a few loops. This is how things used to be done all the time, so while it's perhaps a little more work, it does admittedly leave me feeling nostalgic. The following code illustrates the training technique:

num_examples = X_train.shape[0]
num_batches = int(num_examples / float(batch_size))
half_batch = int(batch_size / 2)

for epoch in range(epochs + 1):
    for batch in range(num_batches):
        # noise images for the batch
        noise = np.random.normal(0, 1, (half_batch, 100))
        fake_images = generator.predict(noise)
        fake_labels = np.zeros((half_batch, 1))

        # real images for the batch
        idx = np.random.randint(0, X_train.shape[0], half_batch)
        real_images = X_train[idx]
        real_labels = np.ones((half_batch, 1))

        # Train the discriminator (real classified as ones and generated as zeros)
        d_loss_real = discriminator.train_on_batch(real_images, real_labels)
        d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

        noise = np.random.normal(0, 1, (batch_size, 100))

        # Train the generator
        g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

        # Plot the progress
        print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
              (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))

        if batch % 50 == 0:
            save_imgs(generator, epoch, batch)

There is a lot going on here, to be sure. As before, let's break it down block by block. First, let's see the code to generate noise vectors:

noise = np.random.normal(0, 1, (half_batch, 100))
fake_images = generator.predict(noise)
fake_labels = np.zeros((half_batch, 1))

This code generates a matrix of noise vectors (called z) and sends it to the generator. It gets a set of generated images back, which we're calling fake images. We will use these to train the discriminator, so the labels we want to use are 0s, indicating that these are in fact generated images.

Note that the shape here is half_batch x 28 x 28 x 1. The half_batch is exactly what you think it is. We're creating half a batch of generated images because the other half of the batch will be real data, which we will assemble next. To get our real images, we will generate a random set of indices across X_train and use that slice of X_train as our real images, as shown in the following code:

idx = np.random.randint(0, X_train.shape[0], half_batch)
real_images = X_train[idx]
real_labels = np.ones((half_batch, 1))

Yes, we are sampling with replacement in this case. It does work out, but it's probably not the best way to implement minibatch training. It is, however, probably the easiest and most common.

Since we are using these images to train the discriminator, and because they are real images, we will assign them 1s as labels, rather than 0s. Now that we have our discriminator training set assembled, we will update the discriminator. Also, note that we aren't using soft labels. That's because we want to keep things as easy as they can be to understand. Luckily, the network doesn't require them in this case.
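If you later want to experiment with the soft labels mentioned above, a common variation (often called one-sided label smoothing) moves the real labels slightly below 1 while leaving the fake labels at 0. This is an optional tweak described here for reference, not something the training loop in this example requires:

# Optional one-sided label smoothing for the discriminator's real examples;
# values just below 1.0 can make the discriminator less overconfident.
real_labels = np.random.uniform(low=0.9, high=1.0, size=(half_batch, 1))
fake_labels = np.zeros((half_batch, 1))  # fake labels stay at 0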
We will use the following code to train the discriminator:

# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

Notice that here we're using the discriminator's train_on_batch() method. The train_on_batch() method does exactly one round of forward and backward propagation. Every time we call it, it updates the model once from the model's previous state.

Also, notice that we're making the update for the real images and fake images separately. This is advice that is given on the GAN hack Git we had previously referenced in the Generator architecture section. Especially in the early stages of training, when real images and fake images are from radically different distributions, batch normalization will cause problems with training if we were to put both sets of data in the same update.

Now that the discriminator has been updated, it's time to update the generator. This is done indirectly by updating the combined stack, as shown in the following code:

noise = np.random.normal(0, 1, (batch_size, 100))
g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

To update the combined model, we create a new noise matrix, and this time it will be as large as the entire batch. We will use that as an input to the stack, which will cause the generator to generate an image and the discriminator to evaluate that image. Finally, we will use the label of 1 because we want to backpropagate the error between a real image and the generated image.

Lastly, the training loop reports the discriminator and generator loss at each epoch/batch and then, every 50 batches of every epoch, we will use save_imgs to generate example images and save them to disk, as shown in the following code:

print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
      (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))

if batch % 50 == 0:
    save_imgs(generator, epoch, batch)

The save_imgs function uses the generator to create images as we go, so we can see the fruits of our labor. We will use the following code to define save_imgs:

def save_imgs(generator, epoch, batch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, 100))
    gen_imgs = generator.predict(noise)
    gen_imgs = 0.5 * gen_imgs + 0.5

    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig("images/mnist_%d_%d.png" % (epoch, batch))
    plt.close()

It uses only the generator by creating a noise matrix and retrieving an image matrix in return. Then, using matplotlib.pyplot, it saves those images to disk in a 5 x 5 grid.

Performing model evaluation
"Good" is somewhat subjective when you're building a deep neural network to create images. Let's take a look at a few examples of the training process, so you can see for yourself how the GAN begins to learn to generate MNIST.

Here's the network at the very first batch of the very first epoch. Clearly, the generator doesn't really know anything about generating MNIST at this point; it's just noise, as shown in the following image:

But just 50 batches in, something is happening, as you can see from the following image:

And after 200 batches of epoch 0, we can almost see numbers, as you can see from the following image:

And here's our generator after one full epoch.
These generated numbers look pretty good, and we can see how the discriminator might be fooled by them. At this point, we could probably continue to improve a little bit, but it looks like our GAN has worked, as the computer is generating some pretty convincing MNIST digits, as shown in the following image:

Thus, we see the power of GANs in action when it comes to image generation using the Keras library.

If you found the above article to be useful, make sure you check out our book Deep Learning Quick Reference for more such interesting coverage of popular deep learning concepts and their practical implementation.

Keras 2.2.0 releases!
2 ways to customize your deep learning models with Keras
How to build Deep convolutional GAN using TensorFlow and Keras