How-To Tutorials - Programming

1083 Articles

Handle Odoo application data with ORM API [Tutorial]

Sugandha Lahoti
19 Jul 2018
17 min read
The ORM API allows you to write complex logic and wizards that provide rich user interaction for your apps. The ORM provides a set of methods to programmatically interact with the Odoo data model and its data, called the Application Programming Interface (API). These start with the basic CRUD (create, read, update, delete) operations, but also include other operations, such as data export and import, and utility functions to aid the user interface and experience. The ORM also provides decorators that, when we add new methods, let it know how they should be handled.

In this article, we will learn how to use the most important API methods available for any Odoo model, and the available API decorators to be used in our custom methods, depending on their purpose. We will also explore the API offered by the Discuss app, since it provides the message and notification features for Odoo. This article is an excerpt from the book Odoo 11 Development Essentials - Third Edition, by Daniel Reis. All the code files in this post are available on GitHub.

We will start by having a closer look at the API decorators.

Understanding the ORM decorators

ORM decorators are important for the ORM, and allow it to give the decorated methods specific uses. Let's see the ORM decorators we have available, and when each should be used.

Record handling decorators

Most of the time, we want a custom method to perform some actions on a recordset. For this, we should use @api.multi, and in that case, the self argument will be the recordset to work with. The method's logic will usually include a for loop iterating over it. This is surely the most frequently used decorator. If no decorator is used on a model method, it will default to @api.multi behavior.

In some cases, the method is prepared to work with a single record (a singleton). Here we could use the @api.one decorator, but this is not advised: it was announced as deprecated in version 9.0 and may be removed in the future. Instead, we should use @api.multi and add a line with self.ensure_one() to the method code, to ensure it is a singleton as expected.

Despite being deprecated, the @api.one decorator is still supported, so it's worth knowing how it works: it wraps the decorated method, doing the for-loop iteration to feed it one record at a time. So, in an @api.one decorated method, self is guaranteed to be a singleton. The return values of each individual method call are aggregated as a list and then returned. This return value can be tricky: it is a list, not the data structure returned by the actual method. For example, if the method code returns a dict, the actual return value is a list of dict values. This misleading behavior was the main reason the decorator was deprecated.

In some cases, the method is expected to work at the class level, and not on particular records. In some object-oriented languages this would be called a static method. These class-level static methods should be decorated with @api.model. In these cases, self should be used as a reference for the model, without expecting it to contain actual records. Note that methods decorated with @api.model cannot be used with user interface buttons; in those cases, @api.multi should be used instead.
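To make the distinction concrete, here is a minimal sketch of both record handling decorators on a small to-do model. This example is ours, not from the book: the is_done field and the two method names are illustrative assumptions.

    from odoo import api, fields, models

    class TodoTask(models.Model):
        _name = 'todo.task'

        name = fields.Char()
        is_done = fields.Boolean()

        @api.multi
        def do_toggle_done(self):
            # @api.multi: self is a recordset, so iterate over its records
            for task in self:
                task.is_done = not task.is_done
            return True

        @api.model
        def count_done(self):
            # @api.model: self references the model, not specific records
            return self.search_count([('is_done', '=', True)])

Called on a recordset, do_toggle_done() flips each record's flag, while count_done() can be called on the model itself, for example self.env['todo.task'].count_done().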
Specific purpose decorators

A few other decorators have more specific purposes, and are used together with the decorators described earlier:

@api.depends(fld1,...) is used for computed field functions, to identify on what changes the (re)calculation should be triggered. It must set values on the computed fields, otherwise it will error.
@api.constrains(fld1,...) is used for validation functions, and performs checks when any of the mentioned fields is changed. It should not write changes to the data. If the checks fail, an exception should be raised.
@api.onchange(fld1,...) is used in the user interface, to automatically change some field values when other fields are changed. The self argument is a singleton with the current form data, and the method should set values on it for the changes that should happen in the form. It doesn't actually write to database records; instead, it provides information to change the data in the UI form.

When using the preceding decorators, no return value is needed, except for onchange methods, which can optionally return a dict with a warning message to display in the user interface. As an example, we can use this to perform some automation in the To-Do form: when Responsible is set to an empty value, we will also empty the team list. For this, edit the todo_stage/models/todo_task_model.py file to add the following method:

    @api.onchange('user_id')
    def onchange_user_id(self):
        if not self.user_id:
            self.team_ids = None
            return {
                'warning': {
                    'title': 'No Responsible',
                    'message': 'Team was also reset.',
                }
            }

Here, we are using the @api.onchange decorator to attach some logic to any change in the user_id field made through the user interface. Note that the actual method name is not relevant, but the convention is for its name to begin with onchange_. Inside an onchange method, self represents a single virtual record containing all the fields currently set in the record being edited, and we can interact with them. Most of the time, this is what we want to do: automatically fill values in other fields, depending on the value set in the changed field. In this case, we are setting the team_ids field to an empty value.

The onchange methods don't need to return anything, but they can return a dictionary containing a warning or a domain key:

The warning key should describe a message to show in a dialog window, such as: {'title': 'Message Title', 'message': 'Message Body'}.
The domain key can set or change the domain attribute of other fields. This allows you to build more user-friendly interfaces, by having to-many fields list only the selection options that make sense for the case. The value for the domain key looks like this: {'team_ids': [('is_author', '=', True)]}
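Before moving on to the built-in methods, here is a short sketch of the other two decorators in action. The effort_hours and effort_days fields are our own illustrative assumptions, not fields from the book's To-Do module:

    from odoo import api, fields, models
    from odoo.exceptions import ValidationError

    class TodoTask(models.Model):
        _inherit = 'todo.task'

        effort_hours = fields.Float()
        effort_days = fields.Float(compute='_compute_effort_days')

        @api.depends('effort_hours')
        def _compute_effort_days(self):
            # Recomputed whenever effort_hours changes; the computed field
            # must be assigned on every record, or the ORM raises an error
            for task in self:
                task.effort_days = task.effort_hours / 8.0

        @api.constrains('effort_hours')
        def _check_effort_hours(self):
            # Validation only: raise on bad data, never write changes
            for task in self:
                if task.effort_hours < 0:
                    raise ValidationError('Effort hours must not be negative.')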
Using the ORM built-in methods

The decorators discussed in the previous section allow us to add certain features to our models, such as implementing validations and automatic computations. We also have the basic methods provided by the ORM, used mainly to perform CRUD (create, read, update, and delete) operations on our model data. To read data, the main methods provided are search() and browse(). Now we will explore the write operations provided by the ORM, and how they can be extended to support custom logic.

Methods for writing model data

The ORM provides three methods for the three basic write operations:

<Model>.create(values) creates a new record on the model and returns the created record.
<Recordset>.write(values) updates field values on the recordset and returns nothing.
<Recordset>.unlink() deletes the records from the database and returns nothing.

The values argument is a dictionary mapping field names to the values to write. In some cases, we need to extend these methods to add business logic to be triggered whenever these actions are executed. By placing our logic in the appropriate section of the custom method, we can have the code run before or after the main operations are executed.

Using the TodoTask model as an example, a custom create() would look like this:

    @api.model
    def create(self, vals):
        # Code before create: should use the `vals` dict
        new_record = super(TodoTask, self).create(vals)
        # Code after create: can use the `new_record` created
        return new_record

Python 3 introduced a simplified way to use super() that could have been used in the preceding code sample; we chose the Python 2 compatible form. If we don't mind breaking Python 2 support for our code, we can use the simplified form, without the arguments referencing the class name and self, for example: super().create(vals)

A custom write() would follow this structure:

    @api.multi
    def write(self, vals):
        # Code before write: can use `self`, with the old values
        super(TodoTask, self).write(vals)
        # Code after write: can use `self`, with the updated values
        return True

While extending create() and write() opens up a lot of possibilities, remember that in many cases we don't need to do that, since there are other tools available that may be better suited:

For field values that are automatically calculated based on other fields, we should use computed fields. An example is calculating a header total when the values of the lines change.
To have field default values calculated dynamically, we can use a field default bound to a function instead of a fixed value.
To have values set on other fields when a field is changed, we can use onchange functions. An example is setting the customer's currency as the document's currency when a customer is picked; the value can later be changed manually by the user. Keep in mind that onchange only works on form view interaction, not on direct write() calls.
For validations, we should use constraint functions decorated with @api.constrains(fld1,fld2,...). These are like computed fields but, instead of computing values, they are expected to raise errors.

So consider carefully whether you really need to extend create() or write(). In most cases, we just need to perform some validation or automatically compute some value when the record is saved, and the tools above handle that: validations are best implemented with @api.constrains methods, and automatic calculations are better implemented as computed fields. If, for some reason, computed fields are not a valid solution and we really need to compute values when saving, the best approach is to have our logic at the top of the method, accumulating the changes needed into the vals dictionary that will be passed to the final super() call.

For the write() method, performing further write operations on the same model inside the extension will lead to a recursion loop, ending with an error once the worker process resources are exhausted. Please consider whether this is really needed. If it is, a technique to avoid the recursion loop is to set a flag in the context. For example, we could add code such as the following:

    if not self.env.context.get('todo_task_writing'):
        self.with_context(todo_task_writing=True).write(some_values)

With this technique, our specific logic is guarded by an if statement, and runs only if a specific marker is not found in the context. Furthermore, our self.write() operations should use with_context to set that marker. This combination ensures that the custom logic inside the if statement runs only once, and is not triggered by further write() calls, avoiding the infinite loop.
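Putting these pieces together, a guarded write() extension might look like the following sketch. The last_change_note field and the todo_task_writing context key are illustrative assumptions, not code from the book:

    from odoo import api, fields, models

    class TodoTask(models.Model):
        _inherit = 'todo.task'

        last_change_note = fields.Char()

        @api.multi
        def write(self, vals):
            result = super(TodoTask, self).write(vals)
            # The context flag keeps the nested write() from recursing
            if not self.env.context.get('todo_task_writing'):
                self.with_context(todo_task_writing=True).write(
                    {'last_change_note': 'Changed: %s' % ', '.join(vals)})
            return result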
These are common extension examples, but of course any standard method available for a model can be inherited in a similar way to add our custom logic to it.

Methods for web client use over RPC

We have seen the most important model methods used to generate recordsets and how to write to them, but there are a few more model methods available for more specific actions, as shown here:

read([fields]) is similar to the browse method but, instead of a recordset, it returns a list of rows of data with the fields given as its argument. Each row is a dictionary. It provides a serialized representation of the data that can be sent through RPC protocols, and it is intended to be used by client programs rather than in server logic.
search_read([domain], [fields], offset=0, limit=None, order=None) performs a search operation followed by a read on the resulting record list. It is intended to be used by RPC clients, and saves them the extra round trip needed when doing a search followed by a read on the results.
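For instance, an external client could call search_read through Odoo's XML-RPC interface. The following minimal sketch assumes a local Odoo server, database name, and login credentials, and reuses the illustrative todo.task fields from the earlier sketches:

    import xmlrpc.client

    url = 'http://localhost:8069'
    db, user, password = 'mydb', 'admin', 'admin'

    # Authenticate, then call search_read on the model
    common = xmlrpc.client.ServerProxy(url + '/xmlrpc/2/common')
    uid = common.authenticate(db, user, password, {})

    models = xmlrpc.client.ServerProxy(url + '/xmlrpc/2/object')
    tasks = models.execute_kw(
        db, uid, password,
        'todo.task', 'search_read',
        [[('is_done', '=', False)]],
        {'fields': ['name', 'user_id'], 'limit': 5})
    for task in tasks:
        print(task['name'])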
Methods for data import and export

The import and export operations are also available from the ORM API, through the following methods:

load([fields], [data]) is used to import data acquired from a CSV file. The first argument is the list of fields to import, and it maps directly to a CSV top row. The second argument is a list of records, where each record is a list of string values to parse and import, and it maps directly to the CSV data rows and columns. It implements the features of CSV data import, such as external identifier support, and it is used by the web client's Import feature.
export_data([fields], raw_data=False) is used by the web client's Export function. It returns a dictionary with a data key containing the data: a list of rows. The field names can use the .id and /id suffixes used in CSV files, and the data is in a format compatible with an importable CSV file. The optional raw_data argument allows data values to be exported with their Python types, instead of the string representation used in CSV.

Methods for the user interface

The following methods are mostly used by the web client to render the user interface and perform basic interaction:

name_get() returns a list of (ID, name) tuples with the text representing each record. It is used by default for computing the display_name value, providing the text representation of relation fields. It can be extended to implement custom display representations, such as displaying the record code and name instead of only the name.
name_search(name='', args=None, operator='ilike', limit=100) returns a list of (ID, name) tuples where the display name matches the text in the name argument. It is used in the UI while typing in a relation field, to produce the list of suggested records matching the typed text. For example, it is used to implement product lookup both by name and by reference while typing in a field to pick a product.
name_create(name) creates a new record with only the title name to use for it. It is used in the UI for the "quick-create" feature, where you can quickly create a related record by just providing its name. It can be extended to provide specific defaults for the new records created through this feature.
default_get([fields]) returns a dictionary with the default values for a new record to be created. The default values may depend on variables such as the current user or the session context.
fields_get() is used to describe the model's field definitions, as seen in the View Fields option of the developer menu.
fields_view_get() is used by the web client to retrieve the structure of the UI view to render. It can be given the ID of the view as an argument, or the type of view we want using view_type='form'. For example, you may try this: self.fields_view_get(view_type='tree').
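As a small example of extending one of these, a custom name_get() could prefix the task name with its record ID. This is a sketch of our own, not taken from the book:

    from odoo import api, models

    class TodoTask(models.Model):
        _inherit = 'todo.task'

        @api.multi
        def name_get(self):
            # Display records as "[ID] Name" instead of only the name
            result = []
            for task in self:
                result.append((task.id, '[%d] %s' % (task.id, task.name)))
            return result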
The Mail and Social features API

Odoo has global messaging and activity planning features, provided by the Discuss app, whose technical name is mail. The mail module provides the mail.thread abstract class, which makes it simple to add messaging features to any model. To add the mail.thread features to the To-Do tasks, we just need to inherit from it:

    class TodoTask(models.Model):
        _name = 'todo.task'
        _inherit = ['todo.task', 'mail.thread']

After this, among other things, our model will have two new fields available. For each record (sometimes also called a document) we have:

message_follower_ids stores the followers and their corresponding notification preferences
message_ids lists all the related messages

The followers can be either partners or channels. A partner represents a specific person or organization. A channel is not a particular person; it represents a subscription list. Each follower also has a list of message types that they are subscribed to, and only the selected message types will generate notifications for them.

Message subtypes

Some types of messages are called subtypes. They are stored in the mail.message.subtype model and are accessible in the Technical | Email | Subtypes menu. By default, we have three message subtypes available:

Discussions, with the mail.mt_comment XML ID, used for the messages created with the Send message link. It is intended to send a notification.
Activities, with the mail.mt_activities XML ID, used for the messages created with the Schedule activity link. It is intended to send a notification.
Note, with the mail.mt_note XML ID, used for the messages created with the Log note link. It is not intended to send a notification.

Subtypes have the default notification settings described previously, but users are able to change them for specific documents, for example, to mute a discussion they are not interested in. Other than the built-in subtypes, we can also add our own subtypes to customize the notifications for our apps. Subtypes can be generic or intended for a particular model. For the latter case, we should fill in the subtype's res_model field with the name of the model it should apply to.

Posting messages

Our business logic can make use of this messaging system to send notifications to users. To post a message, we use the message_post() method. For example:

    self.message_post('Hello!')

This adds a simple text message but sends no notification to the followers, because by default the mail.mt_note subtype is used for posted messages. We can instead have the message posted with whatever subtype we want. To add a message and have it notify the followers, we should use the following:

    self.message_post('Hello again!', subtype='mail.mt_comment')

We can also add a subject line to the message by adding the subject parameter. The message body is HTML, so we can include markup for text effects, such as <b> for bold text or <i> for italics. The message body will be sanitized for security reasons, so some particular HTML elements may not make it into the final message.

Adding followers

Also interesting from a business logic viewpoint is the ability to automatically add followers to a document, so that they can then get the corresponding notifications. We have several methods available to add followers:

message_subscribe(partner_ids=<list of int IDs>) adds partners
message_subscribe(channel_ids=<list of int IDs>) adds channels
message_subscribe_users(user_ids=<list of int IDs>) adds users

The default subtypes will be used. To force subscribing to a specific list of subtypes, just add the subtype_ids=<list of int IDs> argument with the specific subtypes you want the followers subscribed to.
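Tying these together, a method on our task model could subscribe the responsible user and then post a notifying comment. This is an illustrative sketch under the same assumptions as the earlier examples (the notify_responsible name is our own):

    @api.multi
    def notify_responsible(self):
        for task in self:
            # Subscribe the responsible user, then post with the
            # Discussions subtype so followers get a notification
            task.message_subscribe_users(user_ids=[task.user_id.id])
            task.message_post(
                'Task %s needs your attention.' % task.name,
                subject='Task update',
                subtype='mail.mt_comment')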
In this article, we went through an explanation of the features the ORM API proposes, and how they can be used when creating our models. We also learned about the mail module and the global messaging features it provides. To look further into the ORM, and gain a deeper understanding of how recordsets work and can be manipulated, read the book Odoo 11 Development Essentials - Third Edition.

ERP tool in focus: Odoo 11
Building Your First Odoo Application
How to Scaffold a New module in Odoo 11


Apollo 11 source code: A small step for a woman, and a huge leap for 'software engineering'

Sugandha Lahoti
19 Jul 2018
5 min read
Yesterday, Reddit saw an explosion of discussion around the original Apollo 11 Guidance Computer (AGC) source code. The code was uploaded to GitHub in its entirety two years ago, thanks to former NASA intern Chris Garry, and judging by the timestamps on the files in the repo, it seems to have undergone significant updates this week. This is a project that will always hold a special place for software professionals around the world: it is the project that made 'software engineering' a real discipline.

What was the AGC, and why did it matter for Apollo 11?

The AGC was a digital computer produced for the Apollo program, installed on board the Apollo 11 Command Module (CM) and Lunar Module (LM). The AGC code, also referred to as 'COLOSSUS 2A', was written in AGC assembly language and stored on rope memory. On any given Apollo mission, there were two AGCs: one for the CM and one for the LM. The two AGCs were identical and interchangeable, but their software differed, as the LM and the CM performed different tasks for the spacecraft. The CM launched the three astronauts to the moon and back again; the LM handled the landing of two of the astronauts on the moon, while the third astronaut remained in the CM, in orbit around the moon.

The woman who coined the term 'software engineering'

The AGC code was brought to life by Margaret Hamilton, director of software engineering for the project. In the male-dominated world of tech and engineering of that time, Margaret was an exception. She led a team credited with developing the software for Apollo and Skylab, keeping her head high even through backlash. "People used to say to me, 'How can you leave your daughter? How can you do this?'" She went on to become the founder and CEO of Hamilton Technologies, Inc., and was awarded the Presidential Medal of Freedom in 2016. Hamilton is considered one of the pioneers of software engineering, credited with coining the term "software engineering" itself. She first started using the term during the early Apollo missions, wanting to give software the same legitimacy as other disciplines. At the time, it was not taken seriously, but over the years software engineering has become an IEEE standard.

What can we learn from the AGC code developers?

Understandably, the AGC's specifications and processing power are very modest compared to the technology of today. Some still wittingly call it a calculator rather than a computer; others note that the CPU in a microwave oven is probably more powerful than an AGC. Despite being very basic technology in terms of processing power and speed, the Apollo 11 spacecraft completed the first ever manned mission to the moon and back. This is a huge testament not just to the original programming team's ingenuity and resourcefulness, but also to their grit and meticulousness. One would think such a team produced serious (boring) code with flawless execution. Read between the (code) lines, though, and you see a team that enjoyed every moment of writing the code, with quirky naming conventions and humorous notes inside the comments.

Back to the present

Ever since the code was uploaded to GitHub two years ago, coders and software programmers all over the world have been dissecting it, particularly interested in the quirky English-language descriptions in the code's comments. People on Reddit are calling the code files real programming that doesn't rely on APIs to do the heavy lifting.
People are also loving the naming conventions of the source code files and their programs, which were 1960s-inspired light-hearted jokes: for example, BURN_BABY_BURN--MASTER_IGNITION_ROUTINE for the master ignition routine, and PINBALL_GAME_BUTTONS_AND_LIGHTS.agc for the keyboard and display code. Even the programs are quirky: the LUNAR_LANDING_GUIDANCE_EQUATIONS.s file ended up keeping two temporary lines of code as permanent. You can read more such interesting Reddit comments.

However, a point worth noting is that Margaret, and by extension women in tech, are conspicuously missing from this rich discussion. We can start seeing real change only when discussion forums include the various facets of the tool or technology under discussion. The people behind the tech are an important facet, and more so when they are in the minority.

You can also read The Apollo Guidance Computer: Architecture and Operation for the inside scoop on how the AGC functioned, and on the design decisions and software choices the programmers had to make given the features and limitations of the AGC, among other insights. The GitHub repo for the original Apollo 11 source code also contains material for further reading.

Is space the final frontier for AI?
NASA to reveal Kepler's latest AI backed discovery
NASA's Kepler discovers a new exoplanet using Google's Machine Learning
Meet CIMON, the first AI robot to join the astronauts aboard ISS

PostGIS extension: pgRouting for calculating driving distance [Tutorial]

Pravin Dhandre
19 Jul 2018
5 min read
pgRouting is an extension of the PostGIS and PostgreSQL geospatial database that adds routing and other network analysis functionality. In this tutorial we will learn to work with pgRouting to estimate the driving distance to all nearby nodes, which can be very useful in supply chain, logistics, and transportation applications. This tutorial is an excerpt from PostGIS Cookbook - Second Edition, written by Mayra Zurbaran, Pedro Wightman, Paolo Corti, Stephen Mather, Thomas Kraft and Bborie Park.

Driving distance is useful when realistic driving-distance estimates are needed, for example, for all customers within five miles' driving, biking, or walking distance. These estimates can be contrasted with buffering techniques, which assume no barrier to travelling and are useful for revealing the underlying structure of our transportation networks relative to individual locations.

Driving distance (pgr_drivingDistance) is a query that calculates all nodes within the specified driving distance of a starting node. It is an optional function compiled with pgRouting, so if you compile pgRouting yourself, make sure that you enable it and include the CGAL library, an optional dependency for pgr_drivingDistance.

We will start by loading a test dataset. You can get some basic sample data from https://docs.pgrouting.org/latest/en/sampledata.html. In the following example, we will look at all users within a distance of three units from our starting point, that is, a proposed bike shop at node 2:

    SELECT * FROM pgr_drivingDistance(
        'SELECT id, source, target, cost FROM chp06.edge_table',
        2, 3
    );

As usual, we just get a list from the pgr_drivingDistance table that, in this case, comprises sequence, node, edge cost, and aggregate cost. pgRouting, like PostGIS, gives us low-level functionality; we need to reconstruct the geometries we want from that low-level functionality. We can use the node ID to extract the geometries of all of our nodes by executing the following script:

    WITH DD AS (
        SELECT * FROM pgr_drivingDistance(
            'SELECT id, source, target, cost FROM chp06.edge_table',
            2, 3
        )
    )
    SELECT ST_AsText(the_geom)
    FROM chp06.edge_table_vertices_pgr w, DD d
    WHERE w.id = d.node;

The output, however, is just a cluster of points. Normally, when we think of driving distance, we visualize a polygon. Fortunately, the pgr_alphaShape function provides that functionality. This function expects id, x, and y values as input, so we will first change our previous query to convert the geometries in edge_table_vertices_pgr to x and y:

    WITH DD AS (
        SELECT * FROM pgr_drivingDistance(
            'SELECT id, source, target, cost FROM chp06.edge_table',
            2, 3
        )
    )
    SELECT id::integer,
           ST_X(the_geom)::float AS x,
           ST_Y(the_geom)::float AS y
    FROM chp06.edge_table_vertices_pgr w, DD d
    WHERE w.id = d.node;

Now we can wrap the preceding script up in the alphashape function:

    WITH alphashape AS (
        SELECT pgr_alphaShape('
            WITH DD AS (
                SELECT * FROM pgr_drivingDistance(
                    ''SELECT id, source, target, cost FROM chp06.edge_table'',
                    2, 3
                )
            ),
            dd_points AS (
                SELECT id::integer,
                       ST_X(the_geom)::float AS x,
                       ST_Y(the_geom)::float AS y
                FROM chp06.edge_table_vertices_pgr w, DD d
                WHERE w.id = d.node
            )
            SELECT * FROM dd_points
        ')
    ),

So first, we will get our cluster of points.
As we did earlier, we will explicitly convert the text to geometric points:

    alphapoints AS (
        SELECT ST_MakePoint((pgr_alphashape).x, (pgr_alphashape).y)
        FROM alphashape
    ),

Now that we have points, we can create a line by connecting them:

    alphaline AS (
        SELECT ST_MakeLine(ST_MakePoint) FROM alphapoints
    )
    SELECT ST_MakePolygon(ST_AddPoint(ST_MakeLine, ST_StartPoint(ST_MakeLine)))
    FROM alphaline;

Finally, we construct the line as a polygon using ST_MakePolygon. This requires adding the start point with ST_StartPoint in order to properly close the polygon. The complete code is as follows:

    WITH alphashape AS (
        SELECT pgr_alphaShape('
            WITH DD AS (
                SELECT * FROM pgr_drivingDistance(
                    ''SELECT id, source, target, cost FROM chp06.edge_table'',
                    2, 3
                )
            ),
            dd_points AS (
                SELECT id::integer,
                       ST_X(the_geom)::float AS x,
                       ST_Y(the_geom)::float AS y
                FROM chp06.edge_table_vertices_pgr w, DD d
                WHERE w.id = d.node
            )
            SELECT * FROM dd_points
        ')
    ),
    alphapoints AS (
        SELECT ST_MakePoint((pgr_alphashape).x, (pgr_alphashape).y)
        FROM alphashape
    ),
    alphaline AS (
        SELECT ST_MakeLine(ST_MakePoint) FROM alphapoints
    )
    SELECT ST_MakePolygon(
        ST_AddPoint(ST_MakeLine, ST_StartPoint(ST_MakeLine))
    )
    FROM alphaline;

Our first driving distance calculation is easier to picture when you note that, with a driving distance of 3, we can reach nodes 9, 11, and 13 from node 2. With this, you can calculate realistic driving-distance reach across the different nodes in your transportation network.
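If you need to run this analysis from application code rather than psql, the same query works through any PostgreSQL driver. Here is a minimal sketch using Python and psycopg2; the connection parameters are assumptions to adjust for your environment:

    import psycopg2

    # Assumed connection details for a database with pgRouting enabled
    conn = psycopg2.connect(dbname='postgis_cookbook', user='me',
                            password='secret', host='localhost')
    cur = conn.cursor()

    # Same driving-distance query as above: nodes within 3 units of node 2
    cur.execute("""
        SELECT * FROM pgr_drivingDistance(
            'SELECT id, source, target, cost FROM chp06.edge_table',
            2, 3)
    """)
    for row in cur.fetchall():
        print(row)  # one (sequence, node, edge cost, aggregate cost) row per node

    cur.close()
    conn.close()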
Want to explore more with PostGIS? Check out PostGIS Cookbook - Second Edition for a complete range of PostGIS techniques and related extensions for better analytics on your spatial information.

Top 7 libraries for geospatial analysis
Using R to implement Kriging - A Spatial Interpolation technique for Geostatistics data
Learning R for Geospatial Analysis


Implement an effective CRM system in Odoo 11 [Tutorial]

Sugandha Lahoti
18 Jul 2018
18 min read
Until recently, most business and financial systems had product-focused designs: records and fields maintained basic customer information, while processes and reporting typically revolved around product-related transactions. Where businesses were once centered on specific products, the focus has now shifted to centering the business on the customer. A Customer Relationship Management (CRM) system provides the tools and reporting necessary to manage customer information and interactions.

In this article, we will take a look at what it takes to implement a CRM system in Odoo 11 as part of an overall business strategy. We will also install the CRM application and set up salespersons that can be assigned to our customers. This article is an excerpt from the book Working with Odoo 11 - Third Edition by Greg Moss, in which you will learn to configure, manage, and customize your Odoo system.

Using CRM as a business strategy

It is critical that the salespeople share account knowledge and completely understand the features and capabilities of the system. They often have existing tools that they have relied on for many years, and without clear objectives and goals for the entire sales team, it is likely that they will not use the new tool. A plan must be implemented to spend time training and encouraging the sharing of knowledge to successfully implement a CRM system.

Installing the CRM application

If you have not installed the CRM module, log in as the administrator and then click on the Apps menu. In a few seconds, the list of available apps will appear, with CRM likely in the top-left corner. Click on Install to set up the CRM application.

Look at the CRM Dashboard

As with the installation of the Sales application, Odoo takes you to the Discuss menu. Click on Sales to see the changes made by installing the CRM application. New to Odoo 10 is an improved CRM Dashboard that greets you with a friendly welcome message when you first install the application. You can use the dashboard to get an overview of your sales pipelines and easy access to the most common actions within CRM.

Assigning the sales representative or account manager

In Odoo 10, as in most CRM systems, the sales representative or account manager plays an important role. Typically, this is the person who will ultimately be responsible for the customer account and a satisfactory customer experience. While most often a company will use real people as their salespeople, it is also possible to have a salesperson record refer to a group, or even a sub-contracted support service.

We will begin by creating a salesperson to handle standard customer accounts. Note that a sales representative is also a user in the Odoo system. Create a new salesperson by going to the Settings menu, selecting Users, and then clicking the Create button. The new user form will appear; we have filled it in with values for a fictional salesperson, Terry Zeigler.

Specifying the name of the user

Unlike some systems that provide separate first name and last name fields, in Odoo you specify the full name within a single field.

Email address

Beginning in Odoo 9, the user and login form prompts for an email address as opposed to a username, and this practice has continued in Odoo 10.
It is still possible to use a username instead of an email address but, given the strong encouragement to use email addresses in Odoo 9 and Odoo 10, it is possible that future versions of Odoo will enforce the requirement more strictly.

Access Rights

The Access Rights tab lets you control which applications the user will be able to access. By default, Odoo will specify Mr. Zeigler as an employee, so we will accept that default. Depending on the applications you may have already installed, or dependencies Odoo may add in various releases, you may see other Access Rights listed.

Sales application settings

When setting up your salespeople in Odoo 10, you have three options for how much access an individual user has to the sales system:

User: Own Documents Only. This is the most restrictive access to the sales application. A user with this access level is only allowed to see the documents they have entered themselves or that have been assigned to them. They will not be able to see Leads assigned to other salespeople in the sales application.
User: All Documents. With this setting, the user will have access to all documents within the sales application.
Manager. The Manager setting is the highest access level in the Odoo sales system. With this access level, the user can see all Leads as well as access the configuration options of the sales application. The Manager setting also allows the user to access statistical reports.

We will leave the remaining Access Rights options unchecked; these are used when working with multiple companies or with multiple currencies.

The Preferences tab consists of the following options:

Language and Timezone

Odoo allows you to select the language for each user; currently, Odoo supports more than 20 language translations. Specifying the Timezone field allows Odoo to coordinate the display of date and time on messages. Leaving Timezone blank for a user can lead to unpredictable behavior in the Odoo software, so make sure you specify a timezone when creating a user record.

Email Messages and Notifications

In Odoo 7, messaging became a central component of the Odoo system, and in version 10 support has been improved so that it is now even easier to communicate important sales information between colleagues. Therefore, determining the appropriate handling of email, and the circumstances in which a user will receive email, is very important. The Email Messages and Notifications option lets you determine when you will receive email messages for notifications that come to your Odoo inbox. For our example, we have chosen All Messages, which is the new default setting in Odoo 10. However, since we have not yet configured an email server, no emails will be sent or received at this stage.

Let's review the options available for communicating by email:

Never: Selecting Never suppresses all email messaging for the user. Naturally, this is the setting you will wish to use if you do not have an email server configured. It is also a useful option for users who simply want to use the built-in inbox inside Odoo to retrieve their messages.
All Messages (discussions, emails, followed system notifications): This option sends an email notification for any action that would create an entry in your Odoo inbox. Unlike the other options, this can include system notifications and other automated communications.
Signature

The Signature section allows you to customize the signature that will automatically be appended to Odoo-generated messages and emails.

Manually setting the user password

You may have noticed that there is no visible password field in the user record. That is because the default method is to email the user an account verification they can use to set their password. However, if you do not have an email server configured, there is an alternative method: after saving the user record, click the Change Password button, which in Odoo 10 appears prominently at the top left of the form. A form will then appear allowing you to set the password for the user.

Assigning a salesperson to a customer

Now that we have set up our salesperson, it is time to assign them their first customer. Previously, no salesperson had been assigned to our one and only customer, Mike Smith. Let's go to the Sales menu and then click on Mike Smith to pull up his customer record and assign Terry Zeigler as his salesperson. By assigning your customers a salesperson, you can better organize your customers for reports and additional statistical analysis.

Understanding Your Pipeline

Prior to Odoo 10, the CRM application was primarily a simple collection of Leads and opportunities. While Odoo still uses both Leads and opportunities as part of the CRM application, the concept of a Pipeline now takes center stage. You use the Pipeline to organize your opportunities by the stage they are at within your sales process. Click on Your Pipeline in the Sales menu to see the overall layout of the Pipeline screen.

One of the first things to notice on the Pipeline screen is that there are default filters applied to the view. In the search box, you will see a filter that limits the records in this view to the Direct Sales team, as well as a My Opportunities filter. This effectively limits the records so you only see your opportunities from your primary sales team. Removing the My Opportunities filter will allow you to see opportunities from other salespeople in your organization.

Creating a new opportunity

In Odoo 10, a potential sale is defined by creating a new opportunity. An opportunity allows you to begin collecting information about the scope and potential outcomes of a sale. These opportunities can be created from new Leads, or can originate from an existing customer. For our real-world example, let's assume that Mike Smith has called and was so happy with his first order that he now wants to discuss using Silkworm for his local sports team. After a short conversation, we decide to create an opportunity by clicking the Create button. You can also use the + buttons within any of the pipeline stages to create an opportunity set to that stage in the pipeline.

In Odoo 10, the CRM application greatly simplified the form for entering a new opportunity. Instead of bringing up the entire opportunity form with all its fields, you get a simple form that collects only the most important information.

Opportunity Title

The title of your opportunity can be anything you wish. It is naturally important to choose a subject that makes it easy to identify the opportunity in a list.
This is the only field required to create an opportunity in Odoo 10.

Customer

This field is automatically populated if you create an opportunity from the customer form, though you can assign a different customer if you like. This is not a required field, so if you have an opportunity that you do not wish to associate with a customer, that is perfectly fine. For example, you may leave this field blank if you are attending a trade show and expect to have revenue, but do not yet have any specific customers to attribute to the opportunity.

Expected revenue

Here, you specify the amount of revenue you can expect from the opportunity if you are successful. Inside the full opportunity form, there is a field in which you can specify the percentage likelihood that an opportunity will result in a sale. These values are useful in many statistical reports, although they are not required to create an opportunity. Increasingly, reports look at expected revenue and the percentage of opportunity completions, so depending on your reporting requirements, you may wish to encourage salespeople to set target goals for each opportunity to better track conversion.

Rating

Some opportunities are more important than others. You can choose none, one, two, or three stars to designate the relative importance of an opportunity.

Introduction to sales stages

At the top of the Kanban view, you can see the default stages provided by an Odoo CRM installation: New, Qualified, Proposition, and Won. As an opportunity moves between stages, the Kanban view updates to show where each opportunity currently stands; here, our Sports Team Project has just been entered in the New column.

Viewing the details of an opportunity

If you hover the mouse over the Sports Team Project opportunity in the Kanban view and click the three lines that appear at its top right, you will see a pop-up menu with the available actions. Selecting the Edit option takes you to the opportunity record in edit mode, where you can change any of the information. You can also delete the record, or archive it so that it will no longer appear in your pipeline by default. The color palette at the bottom lets you color-code your opportunities in the Kanban view, and the small stars on the opportunity card allow you to highlight opportunities for special consideration. You can also easily drag and drop an opportunity into other columns as it works through the various stages of the sale.

Using Odoo's OpenChatter feature

One of the biggest enhancements brought about in Odoo 7, and expanded on in later versions, was the OpenChatter feature, which provides social-networking-style communication on business documents and transactions. As we work our brand new opportunity, we will utilize the OpenChatter feature to demonstrate how to communicate details between team members and generate log entries to document our progress. The best thing about the OpenChatter feature is that it is available for nearly all business documents in Odoo, and it allows you to see a running log of the transactions and operations that have affected a document. This means everything that applies here to the CRM application can also be used to communicate in sales and purchasing, or in communicating about a specific customer or vendor.
Changing the status of an opportunity

For our example, let's assume that we have prepared our proposal and made the presentation. Bring up the opportunity by using the right-click menu in the Kanban view, or by going into the list view and clicking the opportunity in the list. It is time to update the status of our opportunity by clicking the Proposition arrow at the top of the form. Notice that you do not have to edit the record to change the status of the opportunity. At the bottom of the opportunity, you will now see a note logged by Odoo documenting the change from a new opportunity to a proposition; Odoo logs these events automatically as they take place.

Managing the opportunity

With the proposal presented, let's take down some details from what we have learned that may help us later when we come back to this opportunity. One way to collect this information would be to add the details to the Internal Notes field in the opportunity form. There is value, however, in using the OpenChatter feature to document our new details. Most importantly, using OpenChatter to log notes gives you a running transcript with automatically generated date and time stamps, whereas with a generic notes field it can be very difficult to manage multiple entries. Another major advantage is that the OpenChatter feature can automatically send messages to team members' inboxes, updating them on progress. Let's see it in action!

Click the Log an internal note link to attach a note to our opportunity. The activity option is unique to the CRM application and will not appear in most documents. You can use the small icons at the bottom to add a smiley, attach a document, or open up a full-featured editor if you are creating a long note. The full-featured editor also allows you to save templates of messages and notes you may use frequently, which, depending on your specific business requirements, can be a great time saver. When you create a note, it is attached to the business document, but no message is sent to followers. You can even attach a document to the note by using the Attach a File feature. After clicking the Log button, the note is saved and becomes part of the OpenChatter log for that document.

Following a business document

Odoo brings social networking concepts into your business communication. Fundamental to this implementation is that you can get automatic updates on a business document by following the document. Then, whenever there is a note, action, or message created that is related to a document you follow, you will receive a message in your Odoo inbox. In the bottom right-hand corner of the form, you are presented with the options for when you are notified, and for adding or removing followers from the document. In this case, we can see that both Terry Zeigler and Administrator are set as followers for this opportunity. The Following checkbox at the top indicates that I am following this document, and the Add Followers link can be used to add additional users as followers. The items for which followers are notified can be viewed by clicking the arrow to the right of the Following button.
This brings up a list of the actions that will generate notifications to followers. The checkbox next to Discussions indicates that I should be notified of any discussions related to this document, but I would not be notified, for example, if the stage changed.

When you send a message, by default the customer will become a follower of the document. Then, whenever the status of the document changes, the customer will receive an email. Test out all your processes before integrating with an email server.

Modifying the stages of the sale

We have seen that Odoo provides a default set of sales stages. Many times, however, you will want to customize the stages to best deliver an outstanding customer experience. Moving an opportunity through stages should trigger actions that create a relationship with the customer and demonstrate your understanding of their needs: a customer in the qualification stage of a sale will have much different needs and expectations than a customer in the negotiation phase. For our case study, there are sometimes printing jobs that are technically complex to accomplish. With different jerseys for a variety of teams, the final details need to go through a final technical review and approval process before the order can be entered and verified. From a business perspective, the goal is not just to document the stage of the sales cycle; the primary goal is to use this information to drive customer interactions and improve the overall customer experience.

To add a stage to the sales process, bring up Your Pipeline and then click on the ADD NEW COLUMN area at the right of the form to bring up a small popup where you can enter the name of the new stage. After you have added the column to the sales process, you can use your mouse to drag and drop the columns into the order in which you wish them to appear.

We are now ready to begin the technical approval stage for this opportunity. Drag and drop the Sports Team Project opportunity over to the Technical Approval column in the Kanban view. We now see the Technical Approval column in our Kanban view with the opportunity moved over. You will also notice that any time you change the stage of an opportunity, an entry is created in the OpenChatter section at the bottom of the form. In addition to dragging and dropping an opportunity into a new stage, you can also change the stage of an opportunity by going into the form view.

Closing the sale

After a lot of hard work, we have finally won the opportunity, and it is time to turn it into a quotation. At this point, Odoo makes it easy to take the opportunity and turn it into an actual quotation: open up the opportunity and click the New Quotation tab at the top of the opportunity form. Unlike Odoo 8, which prompts for more information, Odoo 10 takes you to a new quote with the customer information already filled in.

We installed the CRM module, created salespeople, and proceeded to develop a system to manage the sales process. To learn how to modify stages in the sales cycle and turn an opportunity into a quotation in Odoo 11, grab the latest edition, Working with Odoo 11 - Third Edition.

ERP tool in focus: Odoo 11
Building Your First Odoo Application
How to Scaffold a New module in Odoo 11

Python founder resigns - Guido van Rossum goes ‘on a permanent vacation from being BDFL’

Savia Lobo
13 Jul 2018
5 min read
Python is one of the most popular scripting languages, widely adopted and loved for its simplicity. Since its humble beginnings in the late '80s as an interpreter for a new, simple-to-read scripting language, it has come to dominate much of the tech world. Python has become a vital part of web development stacks alongside languages such as Perl and PHP, has been core to domains like security, and is used in currently popular technologies such as AI, ML, and DL.

After 28 years of successfully stewarding the Python community since inventing the language back in December 1989, Guido van Rossum has decided to take himself out of the community's decision-making process as its Benevolent Dictator For Life (BDFL). Guido still promises to be a part of the core development group, and he has added that he will be available to mentor people, but most of the time the community will have to manage on its own. Benevolent Dictator For Life (BDFL) is a term that Guido's fellow Python enthusiasts came up with for him, as a joke, when discussing meeting minutes over email regarding the leadership of Python's development and adoption.

Who will look after the Python community now?

Guido van Rossum said, "I am not going to appoint a successor". True to his leadership style, he has thrown his team of core developers into the deep end by asking them to consider what the Python community's new governance model could be. In his memo, he asked, "So what are you all going to do? Create a democracy? Anarchy? A dictatorship? A federation?"

Guido's parting advice to the core dev team

Guido expressed confidence in his team to continue to manage the day-to-day tasks and operations just as they have been doing under his leadership. The two things he wants the core developers and the community to think deeply about are how PEPs will be decided, and how new core developers will be inducted. He also emphasized the importance of militantly fostering the right community culture through Python's Community Code of Conduct (CoC). He said, "if you don't like that document your only option might be to leave this group voluntarily. Perhaps there are issues to decide like when should someone be kicked out (this could be banning people from python-dev or python-ideas too since those are also covered by the CoC)."

He assured the team that while he has stepped down as the BDFL and from all decision-making duties, he will continue to be an active member of the community, and will now be more available as a mentor to those on the core development team. Guido's decision to quit seems to have stemmed partly from the physical, mental, and emotional toll that the role has taken on him over the years. He concluded his thread on Transfer of Power by saying, "I'm tired, and need a very long break".

How is the Python community taking this decision?

The development team hopes Guido will make a comeback after his well-deserved break. As BDFL, Guido has provided them with consistency in design and taste; by having him as a monitor, the team has had a very consistent view of how the community should behave, and this has been an asset for the whole team. Now they have four ways to explore for governing the Python community:

1. Find a new BDFL. This option seems highly unlikely, as Guido's legacy is irreplaceable. Besides, it is practically the least robust option to rely on one person to take all key decisions and to commit their full time to the community. That person also needs to be well respected and accepted as a de facto head.
2. Set up an N-virate leadership team (a group of three (a triumvirate) or five (a quintumvirate) experts). With such a model, the responsibilities and load would be distributed equally among members chosen from the core development team. This appears to be the current favorite on the thread that opened yesterday.
3. Become a democracy. In this model, the community gets to vote on all key decisions. This seems like the short-term fix the team is gravitating towards, at least to decide on the immediate task at hand. But many on the team acknowledge that this is not a permanent answer, as it would pull the language in too many directions and is also time-consuming.
4. Explore the governance models of other open source communities. This option is being seriously considered in the discussions.

Clearly, the community loves Guido, as is evident from the deluge of well wishes he is receiving from all over the globe. You know you have done your job well when you hear someone say 'You changed my life'; Guido has changed millions of lives for the better.

https://twitter.com/AndrewYNg/status/1017664116482162689
https://twitter.com/anthonypjshaw/status/1017610576640393216
https://twitter.com/generativist/status/1017547690228396032
https://twitter.com/bloodyquantum/status/1017558674024218624

Thank you, Guido, for Python, your heart, and your leadership. We know the community will thrive even in your absence because you've cultivated an excellent culture and a great set of minds.

Top 7 Python programming books you need to read
Python web development: Django vs Flask in 2018
Python experts talk Python on Twitter: Q&A Recap

Writing test functions in Golang [Tutorial]

Natasha Mathur
10 Jul 2018
9 min read
Go is a modern programming language built for 21st-century application development. Hardware and technology have advanced significantly over the past decade, and most other languages do not take advantage of these technological advancements. Go allows us to build network applications that take advantage of the concurrency and parallelism made available by multicore systems.

Testing is an important part of programming, whether it is in Go or in any other language. Go has a straightforward approach to writing tests, and in this tutorial, we will look at some important tools to help with testing. This tutorial is an excerpt from the book 'Distributed computing with Go', written by V.N. Nikhil Anurag.

There are certain rules and conventions we need to follow to test our code. They are as follows:

Source files and associated test files are placed in the same package/folder
The name of the test file for any given source file is <source-file-name>_test.go
Test functions need to have the "Test" prefix, and the next character in the function name should be capitalized

In the remainder of this tutorial, we will look at three files and their associated tests:

variadic.go and variadic_test.go
addInt.go and addInt_test.go
nil_test.go (there isn't any source file for these tests)

Along the way, we will introduce any concepts we might use.

variadic.go function

In order to understand the first set of tests, we need to understand what a variadic function is and how Go handles it. Let's start with the definition:

A variadic function is a function that can accept any number of arguments during a function call.

Given that Go is a statically typed language, the only limitation imposed by the type system on a variadic function is that the indefinite number of arguments passed to it should be of the same data type. However, this does not limit us from passing other variable types. The arguments are received by the function as a slice of elements if arguments are passed, else nil when none are passed. Let's look at the code to get a better idea:

// variadic.go
package main

func simpleVariadicToSlice(numbers ...int) []int {
    return numbers
}

func mixedVariadicToSlice(name string, numbers ...int) (string, []int) {
    return name, numbers
}

// Does not work.
// func badVariadic(name ...string, numbers ...int) {}

We use the ... prefix before the data type to define a function as a variadic function. Note that we can have only one variadic parameter per function and it has to be the last parameter. We can see this error if we uncomment the line for badVariadic and try to test the code.

variadic_test.go

We would like to test the two valid functions, simpleVariadicToSlice and mixedVariadicToSlice, for the various rules defined above.
However, for the sake of brevity, we will test these:

simpleVariadicToSlice: This is for no arguments, three arguments, and also to look at how to pass a slice to a variadic function
mixedVariadicToSlice: This is to accept a simple argument and a variadic argument

Let's now look at the code to test these two functions:

// variadic_test.go
package main

import "testing"

func TestSimpleVariadicToSlice(t *testing.T) {
    // Test for no arguments
    if val := simpleVariadicToSlice(); val != nil {
        t.Error("value should be nil", nil)
    } else {
        t.Log("simpleVariadicToSlice() -> nil")
    }

    // Test for random set of values
    vals := simpleVariadicToSlice(1, 2, 3)
    expected := []int{1, 2, 3}
    isErr := false
    for i := 0; i < 3; i++ {
        if vals[i] != expected[i] {
            isErr = true
            break
        }
    }
    if isErr {
        t.Error("value should be []int{1, 2, 3}", vals)
    } else {
        t.Log("simpleVariadicToSlice(1, 2, 3) -> []int{1, 2, 3}")
    }

    // Test for a slice
    vals = simpleVariadicToSlice(expected...)
    isErr = false
    for i := 0; i < 3; i++ {
        if vals[i] != expected[i] {
            isErr = true
            break
        }
    }
    if isErr {
        t.Error("value should be []int{1, 2, 3}", vals)
    } else {
        t.Log("simpleVariadicToSlice([]int{1, 2, 3}...) -> []int{1, 2, 3}")
    }
}

func TestMixedVariadicToSlice(t *testing.T) {
    // Test for simple argument & no variadic arguments
    name, numbers := mixedVariadicToSlice("Bob")
    if name == "Bob" && numbers == nil {
        t.Log("Received as expected: Bob, <nil slice>")
    } else {
        t.Errorf("Received unexpected values: %s, %s", name, numbers)
    }
}

Running tests in variadic_test.go

Let's run these tests and see the output. We'll use the -v flag while running the tests to see the output of each individual test:

$ go test -v ./{variadic_test.go,variadic.go}
=== RUN TestSimpleVariadicToSlice
--- PASS: TestSimpleVariadicToSlice (0.00s)
    variadic_test.go:10: simpleVariadicToSlice() -> nil
    variadic_test.go:26: simpleVariadicToSlice(1, 2, 3) -> []int{1, 2, 3}
    variadic_test.go:41: simpleVariadicToSlice([]int{1, 2, 3}...) -> []int{1, 2, 3}
=== RUN TestMixedVariadicToSlice
--- PASS: TestMixedVariadicToSlice (0.00s)
    variadic_test.go:49: Received as expected: Bob, <nil slice>
PASS
ok command-line-arguments 0.001s

addInt.go

The tests in variadic_test.go elaborated on the rules for the variadic function. However, you might have noticed that TestSimpleVariadicToSlice ran three tests in its function body, but go test treats it as a single test. Go provides a good way to run multiple tests within a single function, and we shall look at them in addInt_test.go. For this example, we will use a very simple function, as shown in this code:

// addInt.go
package main

func addInt(numbers ...int) int {
    sum := 0
    for _, num := range numbers {
        sum += num
    }
    return sum
}

addInt_test.go

You might have also noticed in TestSimpleVariadicToSlice that we duplicated a lot of logic, while the only varying factor was the input and expected values. One style of testing, known as Table-driven development, defines a table of all the required data to run a test, iterates over the "rows" of the table, and runs tests against them. Let's look at the tests we will be running against no arguments and variadic arguments:

// addInt_test.go
package main

import (
    "testing"
)

func TestAddInt(t *testing.T) {
    testCases := []struct {
        Name     string
        Values   []int
        Expected int
    }{
        {"addInt() -> 0", []int{}, 0},
        {"addInt([]int{10, 20, 100}) -> 130", []int{10, 20, 100}, 130},
    }

    for _, tc := range testCases {
        t.Run(tc.Name, func(t *testing.T) {
            sum := addInt(tc.Values...)
            if sum != tc.Expected {
                t.Errorf("%d != %d", sum, tc.Expected)
            } else {
                t.Logf("%d == %d", sum, tc.Expected)
            }
        })
    }
}

Running tests in addInt_test.go

Let's now run the tests in this file; we expect each row in the testCases table to be treated as a separate test:

$ go test -v ./{addInt.go,addInt_test.go}
=== RUN TestAddInt
=== RUN TestAddInt/addInt()_->_0
=== RUN TestAddInt/addInt([]int{10,_20,_100})_->_130
--- PASS: TestAddInt (0.00s)
    --- PASS: TestAddInt/addInt()_->_0 (0.00s)
        addInt_test.go:23: 0 == 0
    --- PASS: TestAddInt/addInt([]int{10,_20,_100})_->_130 (0.00s)
        addInt_test.go:23: 130 == 130
PASS
ok command-line-arguments 0.001s

nil_test.go

We can also create tests that are not specific to any particular source file; the only criterion is that the filename needs to have the <text>_test.go form. The tests in nil_test.go elucidate some useful features of the language which the developer might find useful while writing tests. They are as follows:

httptest.NewServer: Imagine the case where we have to test our code against a server that sends back some data. Starting and coordinating a full-blown server to access some data is hard. The httptest.NewServer function solves this issue for us.
t.Helper: If we use the same logic to pass or fail a lot of testCases, it would make sense to segregate this logic into a separate function. However, this would skew the test run call stack. We can see this by commenting t.Helper() in the tests and rerunning go test.

We can also format our command-line output to print pretty results. We will show a simple example of adding a tick mark for passed cases and a cross mark for failed cases.

In the test, we will run a test server, make GET requests on it, and then test the expected output versus the actual output:

// nil_test.go
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "testing"
)

const passMark = "\u2713"
const failMark = "\u2717"

func assertResponseEqual(t *testing.T, expected string, actual string) {
    t.Helper() // comment this line to see tests fail due to 'if expected != actual'
    if expected != actual {
        t.Errorf("%s != %s %s", expected, actual, failMark)
    } else {
        t.Logf("%s == %s %s", expected, actual, passMark)
    }
}

func TestServer(t *testing.T) {
    testServer := httptest.NewServer(
        http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                path := r.RequestURI
                if path == "/1" {
                    w.Write([]byte("Got 1."))
                } else {
                    w.Write([]byte("Got None."))
                }
            }))
    defer testServer.Close()

    for _, testCase := range []struct {
        Name     string
        Path     string
        Expected string
    }{
        {"Request correct URL", "/1", "Got 1."},
        {"Request incorrect URL", "/12345", "Got None."},
    } {
        t.Run(testCase.Name, func(t *testing.T) {
            res, err := http.Get(testServer.URL + testCase.Path)
            if err != nil {
                t.Fatal(err)
            }
            actual, err := ioutil.ReadAll(res.Body)
            res.Body.Close()
            if err != nil {
                t.Fatal(err)
            }
            assertResponseEqual(t, testCase.Expected, fmt.Sprintf("%s", actual))
        })
    }

    t.Run("Fail for no reason", func(t *testing.T) {
        assertResponseEqual(t, "+", "-")
    })
}

Running tests in nil_test.go

We run three tests, where two test cases will pass and one will fail. This way we can see the tick mark and cross mark in action:

$ go test -v ./nil_test.go
=== RUN TestServer
=== RUN TestServer/Request_correct_URL
=== RUN TestServer/Request_incorrect_URL
=== RUN TestServer/Fail_for_no_reason
--- FAIL: TestServer (0.00s)
    --- PASS: TestServer/Request_correct_URL (0.00s)
        nil_test.go:55: Got 1. == Got 1.
    --- PASS: TestServer/Request_incorrect_URL (0.00s)
        nil_test.go:55: Got None. == Got None.
    --- FAIL: TestServer/Fail_for_no_reason (0.00s)
        nil_test.go:59: + != -
FAIL
exit status 1
FAIL command-line-arguments 0.003s

We looked at how to write test functions in Go and learned a few interesting concepts when dealing with variadic functions and other useful test functions. If you found this post useful, do check out the book 'Distributed Computing with Go' to learn more about testing, Goroutines, RESTful web services, and other concepts in Go.

Why is Go the go-to language for cloud-native development? – An interview with Mina Andrawos
Systems programming with Go in UNIX and Linux
How to build a basic server-side chatbot using Go
Unit testing with Java frameworks: JUnit and TestNG [Tutorial]

Fatema Patrawala
10 Jul 2018
10 min read
Frequent manual testing is too impractical for any but the smallest systems. The only way around this is the use of automated tests. Automated tests are an effective method to reduce the time and cost of building, deploying, and maintaining applications. In order to effectively manage applications, it is of the utmost importance that both the implementation and test code are as simple as possible.

In this tutorial, we will look at two simple and easy-to-learn Java unit testing frameworks for writing and running tests: JUnit and TestNG. You are reading this Java unit testing tutorial from the book Test Driven Java Development - Second Edition, written by Alex Garcia and Viktor Farcic.

Simplicity of code is one of the core extreme programming (XP) values and the key to Test Driven Development (TDD) and programming in general. It is most often accomplished through division into small units. In Java, units are methods. Being the smallest, the feedback loop they provide is the fastest, so we spend most of our time thinking about and working on them. As a counterpart to implementation methods, unit tests should constitute by far the biggest percentage of all tests. So let us look at what unit testing is and then get into the usage of the frameworks.

What is Unit testing?

Unit testing is a practice that forces us to test small, individual, and isolated units of code. They are usually methods, even though in some cases classes or even whole applications can be considered to be units as well. In order to write unit tests, the code under test needs to be isolated from the rest of the application. Preferably, that isolation is already ingrained in the code, or it can be accomplished with the use of mocks. If the unit tests of a particular method cross the boundaries of that unit, they become integration tests. As such, it becomes less clear what is under test. In case of a failure, the scope of the problem suddenly increases and finding the cause becomes more tedious.

Java Unit testing frameworks

In this section, two of the most used Java frameworks for unit testing are shown and briefly commented on. We will focus on their syntax and main features by comparing a test class written using both JUnit and TestNG. Although there are slight differences, both frameworks offer the most commonly used functionalities, and the main difference is how tests are executed and organized.

Let's start with a question. What is a test? How can we define it? A test is a repeatable process or method that verifies the correct behavior of a tested target in a determined situation with a determined input, expecting a predefined output or interactions. In the programming approach, there are several types of tests depending on their scope—functional tests, acceptance tests, and unit tests. Further on, we will explore each of those types of tests in more detail. Let's see how to test a single Java class.
The class is quite simple, but enough for our interest:

public class Friendships {

    private final Map<String, List<String>> friendships = new HashMap<>();

    public void makeFriends(String person1, String person2) {
        addFriend(person1, person2);
        addFriend(person2, person1);
    }

    public List<String> getFriendsList(String person) {
        if (!friendships.containsKey(person)) {
            return Collections.emptyList();
        }
        return friendships.get(person);
    }

    public boolean areFriends(String person1, String person2) {
        return friendships.containsKey(person1)
                && friendships.get(person1).contains(person2);
    }

    private void addFriend(String person, String friend) {
        if (!friendships.containsKey(person)) {
            friendships.put(person, new ArrayList<String>());
        }
        List<String> friends = friendships.get(person);
        if (!friends.contains(friend)) {
            friends.add(friend);
        }
    }
}

Testing with JUnit

JUnit is a simple and easy-to-learn framework for writing and running tests. Each test is mapped as a method, and each method should represent a specific known scenario in which a part of our code will be executed. The code verification is made by comparing the expected output or behavior with the actual output.

The following is the test class written with JUnit. There are some scenarios missing, but for now we are interested in showing what tests look like. We will focus on better ways to test our code and on best practices later in this book. Test classes usually consist of three stages: set up, test, and tear down. Let's start with methods that set up data needed for tests. A setup can be performed on a class or method level:

Friendships friendships;

@BeforeClass
public static void beforeClass() {
    // This method will be executed once on initialization time
}

@Before
public void before() {
    friendships = new Friendships();
    friendships.makeFriends("Joe", "Audrey");
    friendships.makeFriends("Joe", "Peter");
    friendships.makeFriends("Joe", "Michael");
    friendships.makeFriends("Joe", "Britney");
    friendships.makeFriends("Joe", "Paul");
}

The @BeforeClass annotation specifies a method that will be run once before any of the test methods in the class. It is a useful way to do some general set up that will be used by most (if not all) tests. The @Before annotation specifies a method that will be run before each test method. We can use it to set up test data without worrying that the tests that run afterwards will change the state of that data. In the preceding example, we're instantiating the Friendships class and adding five sample entries to the friendships list. No matter what changes are performed by each individual test, this data will be recreated over and over until all the tests are performed.

Common examples of usage of those two annotations are the setting up of database data, the creation of files needed for tests, and so on. Later on, we'll see how external dependencies can and should be avoided using mocks. Nevertheless, functional or integration tests might still need those dependencies, and the @Before and @BeforeClass annotations are a good way to set them up.
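As a rough sketch of that idea (this class and its temp file are invented for illustration; they are not part of the book's example), class-level setup and teardown of an external file resource might look like the following, with the @AfterClass annotation covered in more detail shortly:

import java.io.File;
import java.io.IOException;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class FileBasedTest {

    private static File tempFile;

    @BeforeClass
    public static void createTestFile() throws IOException {
        // Runs once, before any test method in this class
        tempFile = File.createTempFile("friendships", ".txt");
    }

    @AfterClass
    public static void deleteTestFile() {
        // Runs once, after all test methods have finished
        if (tempFile != null) {
            tempFile.delete();
        }
    }
}

Because the expensive resource is created only once per class rather than once per test, this pattern keeps slow external setup out of the per-test feedback loop.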
Once the data is set up, we can proceed with the actual tests:

@Test
public void alexDoesNotHaveFriends() {
    Assert.assertTrue("Alex does not have friends",
        friendships.getFriendsList("Alex").isEmpty());
}

@Test
public void joeHas5Friends() {
    Assert.assertEquals("Joe has 5 friends", 5,
        friendships.getFriendsList("Joe").size());
}

@Test
public void joeIsFriendWithEveryone() {
    List<String> friendsOfJoe =
        Arrays.asList("Audrey", "Peter", "Michael", "Britney", "Paul");
    Assert.assertTrue(friendships.getFriendsList("Joe").containsAll(friendsOfJoe));
}

In this example, we are using a few of the many different types of asserts. We're confirming that Alex does not have any friends, while Joe is a very popular guy with five friends (Audrey, Peter, Michael, Britney, and Paul).

Finally, once the tests are finished, we might need to perform some cleanup:

@AfterClass
public static void afterClass() {
    // This method will be executed once when all tests are executed
}

@After
public void after() {
    // This method will be executed once after each test execution
}

In our example, in the Friendships class, we have no need to clean up anything. If there were such a need, those two annotations would provide that feature. They work in a similar fashion to the @Before and @BeforeClass annotations: @AfterClass is run once all tests are finished, and the @After annotation is executed after each test. JUnit runs each test method in a separate test class instance, so as long as we are avoiding global variables and external resources, such as databases and APIs, each test is isolated from the others. Whatever was done in one does not affect the rest.

The complete source code can be found in the FriendshipsTest class at https://bitbucket.org/vfarcic/tdd-java-ch02-example-junit.

Testing with TestNG

In TestNG, tests are organized in classes, just as in the case of JUnit. The following Gradle configuration (build.gradle) is required in order to run TestNG tests:

dependencies {
    testCompile group: 'org.testng', name: 'testng', version: '6.8.21'
}

test.useTestNG() {
    // Optionally you can filter which tests are executed using
    // exclude/include filters
    // excludeGroups 'complex'
}

Unlike JUnit, TestNG requires additional Gradle configuration that tells it to use TestNG to run tests. The following test class is written with TestNG and is a reflection of what we did earlier with JUnit. Repeated imports and other boring parts are omitted with the intention of focusing on the relevant parts:

@BeforeClass
public static void beforeClass() {
    // This method will be executed once on initialization time
}

@BeforeMethod
public void before() {
    friendships = new Friendships();
    friendships.makeFriends("Joe", "Audrey");
    friendships.makeFriends("Joe", "Peter");
    friendships.makeFriends("Joe", "Michael");
    friendships.makeFriends("Joe", "Britney");
    friendships.makeFriends("Joe", "Paul");
}

You probably already noticed the similarities between JUnit and TestNG. Both use annotations to specify what the purposes of certain methods are. Besides different names (@BeforeClass versus @BeforeMethod), there is no difference between the two. However, unlike JUnit, TestNG reuses the same test class instance for all test methods. This means that the test methods are not isolated by default, so more care is needed in the before and after methods.
Asserts are very similar as well:

public void alexDoesNotHaveFriends() {
    Assert.assertTrue(friendships.getFriendsList("Alex").isEmpty(),
        "Alex does not have friends");
}

public void joeHas5Friends() {
    Assert.assertEquals(friendships.getFriendsList("Joe").size(), 5,
        "Joe has 5 friends");
}

public void joeIsFriendWithEveryone() {
    List<String> friendsOfJoe =
        Arrays.asList("Audrey", "Peter", "Michael", "Britney", "Paul");
    Assert.assertTrue(friendships.getFriendsList("Joe").containsAll(friendsOfJoe));
}

The only notable difference when compared with JUnit is the order of the assert arguments. While JUnit's asserts take an optional message, the expected value, and the actual value, in that order, TestNG's order is the actual value, the expected value, and an optional message. Besides the difference in the order of arguments we're passing to the assert methods, there are almost no differences between JUnit and TestNG.

You might have noticed that @Test is missing. TestNG allows us to set it on the class level and thus convert all public methods into tests. The @After annotations are also very similar; the only notable difference is the TestNG @AfterMethod annotation, which acts in the same way as the JUnit @After annotation.

As you can see, the syntax is pretty similar. Tests are organized into classes, and test verifications are made using assertions. That is not to say that there are no more important differences between those two frameworks; we'll see some of them throughout this book. I invite you to explore JUnit and TestNG by yourself. The complete source code with the preceding examples can be found here.

The assertions we have written until now have used only the testing frameworks. However, there are some test utilities that can help us make them nicer and more readable.

To summarize, we learned about JUnit and TestNG, the Java unit testing frameworks. We also ran tests on both frameworks to understand the usage of and differences between the two. To learn concepts of test-driven development in Java to help you build clean, maintainable, and robust code, check out the book Test Driven Java Development - Second Edition.

Unit Testing Apps with Android Studio
Unit Testing in .NET Core with Visual Studio 2017 for better code quality
Understanding Go Internals: defer, panic() and recover() functions [Tutorial]

Packt Editorial Staff
09 Jul 2018
8 min read
The Go programming language, often referred to as Golang, is making strides with masterclass developments and architecture by some of the greatest programming minds. Go's features are extremely handy, and you can use them all the time. However, there is nothing more rewarding than being able to see and understand what is going on in the background and how Go operates behind the scenes.

In this article we will learn to use the defer keyword and the panic() and recover() functions in Go. This article is extracted from the First Edition of Mastering Go written by Mihalis Tsoukalos. The concepts discussed in this article (and more) have been updated or improved in the third edition of Mastering Go.

The defer keyword

The defer keyword postpones the execution of a function until the surrounding function returns. It is widely used in file input and output operations because it saves you from having to remember when to close an opened file: the defer keyword allows you to put the function call that closes an opened file near to the function call that opened it. You will also see defer in action in the section that talks about the panic() and recover() built-in Go functions.

It is very important to remember that deferred functions are executed in Last In First Out (LIFO) order after the return of the surrounding function. Put simply, this means that if you defer function f1() first, function f2() second, and function f3() third in the same surrounding function, then when the surrounding function is about to return, function f3() will be executed first, function f2() second, and function f1() last.

As this definition of defer is a little unclear, I think that you will understand the use of defer a little better by looking at the Go code and the output of the defer.go program, which will be presented in three parts. The first part of the program follows:

package main

import (
    "fmt"
)

func d1() {
    for i := 3; i > 0; i-- {
        defer fmt.Print(i, " ")
    }
}

Apart from the import block, the preceding Go code implements a function named d1() with a for loop and a defer statement that will be executed three times. The second part of defer.go contains the following Go code:

func d2() {
    for i := 3; i > 0; i-- {
        defer func() {
            fmt.Print(i, " ")
        }()
    }
    fmt.Println()
}

In this part of the code, you can see the implementation of another function that is named d2(). The d2() function also contains a for loop and a defer statement that will also be executed three times. However, this time the defer keyword is applied to an anonymous function instead of a single fmt.Print() statement. Additionally, the anonymous function takes no parameters.

The last part of the Go code follows:

func d3() {
    for i := 3; i > 0; i-- {
        defer func(n int) {
            fmt.Print(n, " ")
        }(i)
    }
}

func main() {
    d1()
    d2()
    fmt.Println()
    d3()
    fmt.Println()
}

Apart from the main() function that calls the d1(), d2(), and d3() functions, you can also see the implementation of the d3() function, which has a for loop that uses the defer keyword on an anonymous function. However, this time the anonymous function requires one integer parameter named n. The Go code tells us that the n parameter takes its value from the i variable used in the for loop.

Executing defer.go will create the following output:

$ go run defer.go
1 2 3
0 0 0
1 2 3

You will most likely find the generated output complicated and challenging to understand.
This underscores the fact that the operation and the results of the use of defer can be tricky if your code is not clear and unambiguous. Let's examine the results in order to get a better idea of how tricky defer can be if you do not pay close attention to your code.

We will start with the first line of the output (1 2 3), which is generated by the d1() function. The values of i in d1() are 3, 2, and 1, in that order. The function that is deferred in d1() is the fmt.Print() statement. As a result, when the d1() function is about to return, you get the three values of the i variable of the for loop in reverse order, because deferred functions are executed in LIFO order.

Now, let us inspect the second line of the output, which is produced by the d2() function. It is really strange that we got three zeros instead of 1 2 3 in the output. The reason for this, however, is relatively simple. After the for loop has ended, the value of i is 0, because it is that value of i that made the for loop terminate. However, the tricky part here is that the deferred anonymous function is evaluated after the for loop ends, because it has no parameters. This means that it is evaluated three times for an i value of 0, hence the generated output! This kind of confusing code is what might lead to the creation of nasty bugs in your projects, so try to avoid it!

Last, we will talk about the third line of the output, which is generated by the d3() function. Due to the parameter of the anonymous function, each time the anonymous function is deferred, it gets and uses the current value of i. As a result, each execution of the anonymous function has a different value to process, thus the generated output.

After this, it should be clear that the best approach to the use of defer is the third one, which is exhibited in the d3() function, because you intentionally pass the desired variable to the anonymous function in an easy-to-understand way.

Panic and Recover

This technique involves the use of the panic() and recover() functions, and it will be presented in panicRecover.go, which you will review in three parts. Strictly speaking, panic() is a built-in Go function that terminates the current flow of a Go program and starts panicking! On the other hand, the recover() function, which is also a built-in Go function, allows you to take back control of a goroutine that just panicked using panic().

The first part of the program follows:

package main

import (
    "fmt"
)

func a() {
    fmt.Println("Inside a()")
    defer func() {
        if c := recover(); c != nil {
            fmt.Println("Recover inside a()!")
        }
    }()
    fmt.Println("About to call b()")
    b()
    fmt.Println("b() exited!")
    fmt.Println("Exiting a()")
}

Apart from the import block, this part includes the implementation of the a() function. The most important part of the a() function is the defer block of code, which implements an anonymous function that will be called when there is a call to panic().

The second code segment of panicRecover.go follows next:

func b() {
    fmt.Println("Inside b()")
    panic("Panic in b()!")
    fmt.Println("Exiting b()")
}

The last part of the program, which illustrates the panic() and recover() functions, is as follows:

func main() {
    a()
    fmt.Println("main() ended!")
}

Executing panicRecover.go will create the following output:

$ go run panicRecover.go
Inside a()
About to call b()
Inside b()
Recover inside a()!
main() ended!

What just happened here is really impressive!
However, as you can see from the output, the a() function did not end normally, because its last two statements did not get executed:

fmt.Println("b() exited!")
fmt.Println("Exiting a()")

Nevertheless, the good thing is that panicRecover.go ended according to our will without panicking, because the anonymous function used in defer took control of the situation! Also note that function b() knows nothing about function a(). However, function a() contains Go code that handles the panic condition of function b()!

Using the panic function on its own

You can also use the panic() function on its own without any attempt to recover, and this subsection will show you its results using the Go code of justPanic.go, which will be presented in two parts. The first part of justPanic.go follows next:

package main

import (
    "fmt"
    "os"
)

As you can see, the use of panic() does not require any extra Go packages.

The second part of justPanic.go is shown in the following Go code:

func main() {
    if len(os.Args) == 1 {
        panic("Not enough arguments!")
    }
    fmt.Println("Thanks for the argument(s)!")
}

If your Go program does not have at least one command line argument, it will call the panic() function. The panic() function takes one parameter, which is the error message that you want to print on the screen.

Executing justPanic.go on a macOS High Sierra machine will create the following output:

$ go run justPanic.go
panic: Not enough arguments!

goroutine 1 [running]:
main.main()
    /Users/mtsouk/ch2/code/justPanic.go:10 +0x9e
exit status 2

Thus, using the panic() function on its own will terminate the Go program without giving you the opportunity to recover! Therefore, the use of the panic() and recover() pair is much more practical and professional than just using panic() alone.

To summarize, we covered some interesting Go topics: the defer keyword and the panic() and recover() functions. To explore other major features and packages in Go, get the latest edition in Go programming, Mastering Go, written by Mihalis Tsoukalos.

Implementing memory management with Golang's garbage collector
Why is Go the go-to language for cloud native development? – An interview with Mina Andrawos
How to build a basic server side chatbot using Go
How Concurrency and Parallelism works in Golang [Tutorial]
How Concurrency and Parallelism works in Golang [Tutorial]

Natasha Mathur
06 Jul 2018
11 min read
Computers and software programs are useful because they do a lot of laborious work very fast and can also do multiple things at once. We want our programs to be able to do multiple things simultaneously, and the success of a programming language can depend on how easy it is to write and understand multitasking programs.

Concurrency and parallelism are two terms that you are bound to come across often when looking into multitasking, and they are often used interchangeably. However, they mean two distinctly different things. In this article, we will look at how concurrency and parallelism work in Go, using simple examples for better understanding. Let's get started!

This article is an excerpt from the book 'Distributed Computing with Go' written by V.N. Nikhil Anurag.

The standard definitions given on the Go blog are as follows:

Concurrency: Concurrency is about dealing with lots of things at once. This means that we manage to get multiple things done at once in a given period of time. However, we will only be doing a single thing at a time. This tends to happen in programs where one task is waiting and the program decides to run another task in the idle time. In the book's diagram, this is denoted by running the yellow task in idle periods of the blue task.

Parallelism: Parallelism is about doing lots of things at once. This means that even if we have two tasks, they are continuously working without any breaks in between them. In the diagram, this is shown by the green task running independently, uninfluenced by the red task in any manner.

It is important to understand the difference between these two terms. Let's look at a few concrete examples to further elaborate upon the difference between the two.

Concurrency

Let's look at the concept of concurrency using a simple example of a few daily routine tasks and the way we can perform them. Imagine you start your day and need to get six things done:

Make hotel reservation
Book flight tickets
Order a dress
Pay credit card bills
Write an email
Listen to an audiobook

The order in which they are completed doesn't matter, and for some of the tasks, such as writing an email or listening to an audiobook, you need not complete them in a single sitting. Here is one possible way to complete the tasks:

Order a dress.
Write one-third of the email.
Make hotel reservation.
Listen to 10 minutes of audiobook.
Pay credit card bills.
Write another one-third of the email.
Book flight tickets.
Listen to another 20 minutes of audiobook.
Complete writing the email.
Continue listening to audiobook until you fall asleep.

In programming terms, we have executed the above tasks concurrently. We had a complete day and we chose particular tasks from our list of tasks and started to work on them. For certain tasks, we even decided to break them up into pieces and work on the pieces between other tasks.

We will eventually write a program which does all of the preceding steps concurrently, but let's take it one step at a time. Let's start by building a program that executes the tasks sequentially, and then modify it progressively until it is purely concurrent code and uses goroutines. The progression of the program will be in three steps:

Serial task execution.
Serial task execution with goroutines.
Concurrent task execution.

Code overview

The code will consist of a set of functions that print out their assigned tasks as completed. In the cases of writing an email or listening to an audiobook, we further divide the tasks into more functions.
This can be seen as follows:

writeMail, continueWritingMail1, continueWritingMail2
listenToAudioBook, continueListeningToAudioBook

Serial task execution

Let's first implement a program that will execute all the tasks in a linear manner. Based on the code overview we discussed previously, the following code should be straightforward:

package main

import (
    "fmt"
)

// Simple individual tasks
func makeHotelReservation() {
    fmt.Println("Done making hotel reservation.")
}

func bookFlightTickets() {
    fmt.Println("Done booking flight tickets.")
}

func orderADress() {
    fmt.Println("Done ordering a dress.")
}

func payCreditCardBills() {
    fmt.Println("Done paying Credit Card bills.")
}

// Tasks that will be executed in parts

// Writing Mail
func writeAMail() {
    fmt.Println("Wrote 1/3rd of the mail.")
    continueWritingMail1()
}

func continueWritingMail1() {
    fmt.Println("Wrote 2/3rds of the mail.")
    continueWritingMail2()
}

func continueWritingMail2() {
    fmt.Println("Done writing the mail.")
}

// Listening to Audio Book
func listenToAudioBook() {
    fmt.Println("Listened to 10 minutes of audio book.")
    continueListeningToAudioBook()
}

func continueListeningToAudioBook() {
    fmt.Println("Done listening to audio book.")
}

// All the tasks we want to complete in the day.
// Note that we do not include the sub tasks here.
var listOfTasks = []func(){
    makeHotelReservation, bookFlightTickets, orderADress,
    payCreditCardBills, writeAMail, listenToAudioBook,
}

func main() {
    for _, task := range listOfTasks {
        task()
    }
}

We take each of the main tasks and start executing them in simple sequential order. Executing the preceding code should produce unsurprising output, as shown here:

Done making hotel reservation.
Done booking flight tickets.
Done ordering a dress.
Done paying Credit Card bills.
Wrote 1/3rd of the mail.
Wrote 2/3rds of the mail.
Done writing the mail.
Listened to 10 minutes of audio book.
Done listening to audio book.

Serial task execution with goroutines

We took a list of tasks and wrote a program to execute them in a linear and sequential manner. However, we want to execute the tasks concurrently! Let's start by first introducing goroutines for the split tasks and see how it goes. We will only show the code snippet where the code actually changed here:

/********************************************************************
 We start by making Writing Mail & Listening Audio Book concurrent.
*********************************************************************/

// Tasks that will be executed in parts

// Writing Mail
func writeAMail() {
    fmt.Println("Wrote 1/3rd of the mail.")
    go continueWritingMail1() // Notice the addition of 'go' keyword.
}

func continueWritingMail1() {
    fmt.Println("Wrote 2/3rds of the mail.")
    go continueWritingMail2() // Notice the addition of 'go' keyword.
}

func continueWritingMail2() {
    fmt.Println("Done writing the mail.")
}

// Listening to Audio Book
func listenToAudioBook() {
    fmt.Println("Listened to 10 minutes of audio book.")
    go continueListeningToAudioBook() // Notice the addition of 'go' keyword.
}

func continueListeningToAudioBook() {
    fmt.Println("Done listening to audio book.")
}

The following is a possible output:

Done making hotel reservation.
Done booking flight tickets.
Done ordering a dress.
Done paying Credit Card bills.
Wrote 1/3rd of the mail.
Listened to 10 minutes of audio book.

Whoops! That's not what we were expecting.
The output from the continueWritingMail1, continueWritingMail2, and continueListeningToAudioBook functions is missing; the reason being that we are using goroutines. Since goroutines are not waited upon, the code in the main function continues executing and, once the control flow reaches the end of the main function, the program ends. What we would really like to do is to wait in the main function until all the goroutines have finished executing. There are two ways we can do this—using channels or using WaitGroup. We'll use WaitGroup now.

In order to use WaitGroup, we have to keep the following in mind:

Use WaitGroup.Add(int) to keep count of how many goroutines we will be running as part of our logic.
Use WaitGroup.Done() to signal that a goroutine is done with its task.
Use WaitGroup.Wait() to wait until all goroutines are done.
Pass the WaitGroup instance to the goroutines so they can call the Done() method.

Based on these points, we should be able to modify the source code to use WaitGroup. The following is the updated code:

package main

import (
    "fmt"
    "sync"
)

// Simple individual tasks
func makeHotelReservation(wg *sync.WaitGroup) {
    fmt.Println("Done making hotel reservation.")
    wg.Done()
}

func bookFlightTickets(wg *sync.WaitGroup) {
    fmt.Println("Done booking flight tickets.")
    wg.Done()
}

func orderADress(wg *sync.WaitGroup) {
    fmt.Println("Done ordering a dress.")
    wg.Done()
}

func payCreditCardBills(wg *sync.WaitGroup) {
    fmt.Println("Done paying Credit Card bills.")
    wg.Done()
}

// Tasks that will be executed in parts

// Writing Mail
func writeAMail(wg *sync.WaitGroup) {
    fmt.Println("Wrote 1/3rd of the mail.")
    go continueWritingMail1(wg)
}

func continueWritingMail1(wg *sync.WaitGroup) {
    fmt.Println("Wrote 2/3rds of the mail.")
    go continueWritingMail2(wg)
}

func continueWritingMail2(wg *sync.WaitGroup) {
    fmt.Println("Done writing the mail.")
    wg.Done()
}

// Listening to Audio Book
func listenToAudioBook(wg *sync.WaitGroup) {
    fmt.Println("Listened to 10 minutes of audio book.")
    go continueListeningToAudioBook(wg)
}

func continueListeningToAudioBook(wg *sync.WaitGroup) {
    fmt.Println("Done listening to audio book.")
    wg.Done()
}

// All the tasks we want to complete in the day.
// Note that we do not include the sub tasks here.
var listOfTasks = []func(*sync.WaitGroup){
    makeHotelReservation, bookFlightTickets, orderADress,
    payCreditCardBills, writeAMail, listenToAudioBook,
}

func main() {
    var waitGroup sync.WaitGroup
    // Set number of effective goroutines we want to wait upon
    waitGroup.Add(len(listOfTasks))

    for _, task := range listOfTasks {
        // Pass reference to WaitGroup instance
        // Each of the tasks should call on WaitGroup.Done()
        task(&waitGroup)
    }

    // Wait until all goroutines have completed execution.
    waitGroup.Wait()
}

Here is one possible output order; notice how continueWritingMail1 and continueWritingMail2 were executed at the end, after listenToAudioBook and continueListeningToAudioBook:

Done making hotel reservation.
Done booking flight tickets.
Done ordering a dress.
Done paying Credit Card bills.
Wrote 1/3rd of the mail.
Listened to 10 minutes of audio book.
Done listening to audio book.
Wrote 2/3rds of the mail.
Done writing the mail.

Concurrent task execution

In the final output of the previous part, we can see that all the tasks in listOfTasks are being executed in serial order, and the last step for maximum concurrency would be to let the order be determined by the Go runtime instead of the order in listOfTasks.
This might sound like a laborious task, but in reality it is quite simple to achieve. All we need to do is add the go keyword in front of task(&waitGroup):

func main() {
    var waitGroup sync.WaitGroup
    // Set number of effective goroutines we want to wait upon
    waitGroup.Add(len(listOfTasks))

    for _, task := range listOfTasks {
        // Pass reference to WaitGroup instance
        // Each of the tasks should call on WaitGroup.Done()
        go task(&waitGroup) // Achieving maximum concurrency
    }

    // Wait until all goroutines have completed execution.
    waitGroup.Wait()
}

Following is a possible output:

Listened to 10 minutes of audio book.
Done listening to audio book.
Done booking flight tickets.
Done ordering a dress.
Done paying Credit Card bills.
Wrote 1/3rd of the mail.
Wrote 2/3rds of the mail.
Done writing the mail.
Done making hotel reservation.

If we look at this possible output, the tasks were executed in the following order:

Listen to audiobook.
Book flight tickets.
Order a dress.
Pay credit card bills.
Write an email.
Make hotel reservations.

Now that we have a good idea of what concurrency is and how to write concurrent code using goroutines and WaitGroup, let's dive into parallelism.

Parallelism

Imagine that you have to write a few emails. They are going to be long and laborious, and the best way to keep yourself entertained is to listen to music while writing them, that is, listening to music "in parallel" to writing the emails. If we wanted to write a program that simulates this scenario, the following is one possible implementation:

package main

import (
    "fmt"
    "sync"
    "time"
)

func printTime(msg string) {
    fmt.Println(msg, time.Now().Format("15:04:05"))
}

// Tasks that will be done over time
func writeMail1(wg *sync.WaitGroup) {
    printTime("Done writing mail #1.")
    wg.Done()
}

func writeMail2(wg *sync.WaitGroup) {
    printTime("Done writing mail #2.")
    wg.Done()
}

func writeMail3(wg *sync.WaitGroup) {
    printTime("Done writing mail #3.")
    wg.Done()
}

// Task done in parallel
func listenForever() {
    for {
        printTime("Listening...")
    }
}

func main() {
    var waitGroup sync.WaitGroup
    waitGroup.Add(3)

    go listenForever()

    // Give some time for listenForever to start
    time.Sleep(time.Nanosecond * 10)

    // Let's start writing the mails
    go writeMail1(&waitGroup)
    go writeMail2(&waitGroup)
    go writeMail3(&waitGroup)

    waitGroup.Wait()
}

The output of the program might be as follows:

Done writing mail #3. 19:32:57
Listening... 19:32:57
Listening... 19:32:57
Done writing mail #1. 19:32:57
Listening... 19:32:57
Listening... 19:32:57
Done writing mail #2. 19:32:57

The numbers represent the time in Hours:Minutes:Seconds and, as can be seen, the tasks are being executed in parallel. You might have noticed that the code for parallelism looks almost identical to the code for the final concurrency example. However, in the function listenForever, we are printing Listening... in an infinite loop. If the preceding example were written without goroutines, the output would keep printing Listening... and never reach the writeMail function calls.

Goroutines are concurrent and, to an extent, parallel; however, we should think of them as being concurrent. The order of execution of goroutines is not predictable and we should not rely on them to be executed in any particular order. We should also take care to handle errors and panics in our goroutines because, even though they are being executed in parallel, a panic in one goroutine will crash the complete program.
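To illustrate that last point, here is a minimal sketch of one common protection pattern; the runSafely helper is an invented name for this example and is not part of the book's code. A recover() deferred inside each goroutine keeps a panicking task from crashing the whole program:

package main

import (
    "fmt"
    "sync"
)

// runSafely wraps a task so that a panic inside its goroutine is
// recovered instead of crashing the whole program.
func runSafely(wg *sync.WaitGroup, task func()) {
    defer wg.Done() // runs last, after the recover below
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("recovered from panic:", r)
        }
    }()
    task()
}

func main() {
    var waitGroup sync.WaitGroup
    waitGroup.Add(2)
    go runSafely(&waitGroup, func() { panic("something went wrong") })
    go runSafely(&waitGroup, func() { fmt.Println("other task finished") })
    waitGroup.Wait()
}

Because deferred calls run in LIFO order, the recover runs before wg.Done(), so the WaitGroup is always released even when a task panics.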
Finally, goroutines can block on system calls; however, this will not block the execution of the program nor slow down the performance of the overall program.

We looked at how goroutines can be used to run concurrent programs and also learned how parallelism works in Go. If you found this post useful, do check out the book 'Distributed Computing with Go' to learn more about Goroutines, channels and messages, and other concepts in Go.

Golang Decorators: Logging & Time Profiling
Essential Tools for Go Programming
Why is Go the go-to language for cloud-native development? – An interview with Mina Andrawos
How to implement immutability functions in Kotlin [Tutorial]

Aaron Lazar
27 Jun 2018
8 min read
Unlike Clojure, Haskell, F#, and the like, Kotlin is not a pure functional programming language where immutability is forced; rather, we may refer to Kotlin as a perfect blend of functional programming and OOP languages. It contains the major benefits of both worlds. So, instead of forcing immutability like pure functional programming languages, Kotlin encourages immutability, giving it automatic preference wherever possible. In this article, we'll understand the various methods of implementing immutability in Kotlin. This article has been taken from the book, Functional Kotlin, by Mario Arias and Rivu Chakraborty.

In other words, Kotlin has immutable variables (val), but no language mechanisms that would guarantee true deep immutability of the state. If a val variable references a mutable object, its contents can still be modified. We will have a more elaborate discussion and a deeper dive on this topic, but first let us have a look at how we can get referential immutability in Kotlin and the differences between var, val, and const val.

By true deep immutability of the state, we mean that a property will always return the same value whenever it is called and that the property never changes its value; we can easily break this if we have a val property that has a custom getter. You can find more details at the following link: https://artemzin.com/blog/kotlin-val-does-not-mean-immutable-it-just-means-readonly-yeah/

The difference between var and val

So, in order to encourage immutability but still let the developers have the choice, Kotlin introduced two types of variables. The first one is var, which is just a simple variable, just like in any imperative language. On the other hand, val brings us a bit closer to immutability; again, it doesn't guarantee immutability. So, what exactly does the val variable provide us? It enforces read-only; you cannot write into a val variable after initialization. So, if you use a val variable without a custom getter, you can achieve referential immutability.

Let's have a look; the following program will not compile:

fun main(args: Array<String>) {
    val x: String = "Kotlin"
    x += "Immutable" //(1)
}

As I mentioned earlier, the preceding program will not compile; it will give an error on comment (1). As we've declared variable x as val, x will be read-only and, once we initialize x, we cannot modify it afterward.

So, now you're probably asking why we cannot guarantee immutability with val? Let's inspect this with the following example:

object MutableVal {
    var count = 0
    val myString: String = "Mutable"
        get() { //(1)
            return "$field ${++count}" //(2)
        }
}

fun main(args: Array<String>) {
    println("Calling 1st time ${MutableVal.myString}")
    println("Calling 2nd time ${MutableVal.myString}")
    println("Calling 3rd time ${MutableVal.myString}") //(3)
}

In this program, we declared myString as a val property, but implemented a custom get function, where we tweaked the value of myString before returning it. Have a look at the output first, then we will look further into the program:

Calling 1st time Mutable 1
Calling 2nd time Mutable 2
Calling 3rd time Mutable 3

As you can see, the myString property, despite being val, returned different values every time we accessed it. So, now, let us look into the code to understand such behavior. On comment (1), we declared a custom getter for the val property myString. On comment (2), we pre-incremented the value of count and added it after the value of the field value, myString, and returned the same from the getter.
So, whenever we requested the myString property, count got incremented and, on the next request, we got a different value. As a result, we broke the immutable behavior of a val property.

Compile time constants

So, how can we overcome this? How can we enforce immutability? The const val properties are here to help us. Just change val myString to const val myString and you cannot implement the custom getter. While val properties are read-only variables, const val, on the other hand, are compile time constants. You cannot assign the outcome (result) of a function to const val. Let's discuss some of the differences between val and const val:

The val properties are read-only variables, while const val are compile time constants
The val properties can have custom getters, but const val cannot
We can have val properties anywhere in our Kotlin code, inside functions, as a class member, anywhere, but const val has to be a top-level member of a class/object
You cannot write delegates for the const val properties
We can have a val property of any type, be it our custom class or any primitive data type, but only primitive data types and String are allowed with a const val property
We cannot have nullable data types with the const val properties; as a result, we cannot have null values for the const val properties either

As a result, the const val properties guarantee immutability of value but have lesser flexibility, and you are bound to use only primitive data types with const val, which cannot always serve our purposes.

Now that I've used the word referential immutability quite a few times, let us inspect what it means and how many types of immutability there are.

Types of immutability

There are basically the following two types of immutability:

Referential immutability
Immutable values

Immutable reference (referential immutability)

Referential immutability enforces that, once a reference is assigned, it can't be assigned to something else. Think of having it as a val property of a custom class, or even MutableList or MutableMap; after you initialize the property, you cannot reference something else from that property, except the underlying value from the object. For example, take the following program:

class MutableObj {
    var value = ""

    override fun toString(): String {
        return "MutableObj(value='$value')"
    }
}

fun main(args: Array<String>) {
    val mutableObj: MutableObj = MutableObj() //(1)
    println("MutableObj $mutableObj")
    mutableObj.value = "Changed" //(2)
    println("MutableObj $mutableObj")

    val list = mutableListOf("a", "b", "c", "d", "e") //(3)
    println(list)
    list.add("f") //(4)
    println(list)
}

Have a look at the output before we proceed with explaining the program:

MutableObj MutableObj(value='')
MutableObj MutableObj(value='Changed')
[a, b, c, d, e]
[a, b, c, d, e, f]

So, in this program we have two val properties—list and mutableObj. We initialized mutableObj with the default constructor of MutableObj; since it's a val property, it will always refer to that specific object. But, if you concentrate on comment (2), we changed the value property of mutableObj, as the value property of the MutableObj class is mutable (var). It's the same with the list property: we can add items to the list after initialization, changing its underlying value. Both list and mutableObj are perfect examples of immutable reference; once initialized, the properties can't be assigned to something else, but their underlying values can be changed (you can refer to the output). The reason behind that is the data type we used to assign to those properties.
Both the MutableObj class and the MutableList<String> data structures are mutable themselves, so we cannot restrict value changes for their instances.

Immutable values

The immutable values, on the other hand, enforce no change on values as well; this is really complex to maintain. In Kotlin, the const val properties enforce immutability of value, but they lack flexibility (we already discussed them) and you're bound to use only primitive types, which can be troublesome in real-life scenarios.

Immutable collections

Kotlin gives preference to immutability wherever possible, but leaves the choice to the developer whether or when to use it. This power of choice makes the language even more powerful. Unlike most languages, which have either only mutable (like Java, C#, and so on) or only immutable collections (like F#, Haskell, Clojure, and so on), Kotlin has both and distinguishes between them, leaving the developer with the freedom to choose whether to use an immutable or mutable one.

Kotlin has two interfaces for collection objects—Collection<out E> and MutableCollection<out E>; all the collection classes (for example, List, Set, or Map) implement either of them. As the names suggest, the two interfaces are designed to serve immutable and mutable collections respectively. Let us have an example:

fun main(args: Array<String>) {
    val immutableList = listOf(1, 2, 3, 4, 5, 6, 7) //(1)
    println("Immutable List $immutableList")

    val mutableList: MutableList<Int> = immutableList.toMutableList() //(2)
    println("Mutable List $mutableList")

    mutableList.add(8) //(3)
    println("Mutable List after add $mutableList")
    println("Immutable List after add $immutableList")
}

The output is as follows:

Immutable List [1, 2, 3, 4, 5, 6, 7]
Mutable List [1, 2, 3, 4, 5, 6, 7]
Mutable List after add [1, 2, 3, 4, 5, 6, 7, 8]
Immutable List after add [1, 2, 3, 4, 5, 6, 7]

So, in this program, we created an immutable list with the help of the listOf method of Kotlin, on comment (1). The listOf method creates an immutable list with the elements (varargs) passed to it. This method also has a generic type parameter, which can be skipped if the elements array is not empty. The listOf method also has a mutable version—mutableListOf()—which is identical except that it returns MutableList instead. We can convert an immutable list to a mutable one with the help of the toMutableList() extension function; we did the same in comment (2), to add an element to it on comment (3). However, if you check the output, the original immutable list remains the same without any changes; the item is, however, added to the newly created MutableList instead.

So now you know how to implement immutability in Kotlin. If you found this tutorial helpful, and would like to learn more, head on over to purchase the full book, Functional Kotlin, by Mario Arias and Rivu Chakraborty.

Extension functions in Kotlin: everything you need to know
Building RESTful web services with Kotlin
Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform
How to set up the Scala Plugin in IntelliJ IDE [Tutorial]

Pavan Ramchandani
26 Jun 2018
2 min read
The Scala Plugin turns a normal IntelliJ IDEA into a convenient Scala development environment. In this article, we will discuss how to set up the Scala Plugin for the IntelliJ IDEA IDE. If you do not have IntelliJ IDEA, you can download it from here. By default, IntelliJ IDEA does not come with Scala features. The Scala Plugin adds Scala features, which means that we can create Scala/Play projects, Scala applications, Scala worksheets, and more. The Scala Plugin supports the following technologies:

Scala
Play Framework
SBT
Scala.js

It supports three popular OS environments: Windows, Mac, and Linux.

Setting up the Scala Plugin for the IntelliJ IDE

Perform the following steps to install the Scala Plugin for IntelliJ IDEA so we can develop our Scala-based projects:

1. Open IntelliJ IDEA, go to Configure at the bottom right, and click on the Plugins option available in the drop-down, as shown here:
2. This opens the Plugins window as shown here:
3. Now click on Install JetBrains plugins, as shown in the preceding screenshot.
4. Next, type the word Scala in the search bar to see the Scala Plugin, as shown here:
5. Click on the Install button to install the Scala Plugin for IntelliJ IDEA.
6. Now restart IntelliJ IDEA so the Scala Plugin features become available.

After we re-open IntelliJ IDEA, if we access the File | New Project option, we will see the Scala option in the New Project window, as shown in the following screenshot, to create new Scala or Play Framework-based SBT projects:

We can see the Play Framework option only in the IntelliJ IDEA Ultimate Edition. As we are using the CE (Community Edition), we cannot see that option. It's now time to start Scala/Play application development using the IntelliJ IDE. You can start developing some Scala/Play-based applications. To summarize, we got an understanding of the Scala Plugin and covered the installation steps for the Scala Plugin for IntelliJ. To learn more about solutions for taking a reactive programming approach with Scala, please refer to the book Scala Reactive Programming.

What Scala 3.0 Roadmap looks like!
Building Scalable Microservices
Exploring Scala Performance
Delphi: memory management techniques for parallel programming

Pavan Ramchandani
19 Jun 2018
31 min read
Memory management is part of practically every computing system. Multiple programs must coexist inside a limited memory space, and that can only be possible if the operating system is taking care of it. When a program needs some memory, for example, to create an object, it can ask the operating system and it will give it a slice of shared memory. When an object is not needed anymore, that memory can be returned to the loving care of the operating system. In this tutorial, we will touch upon memory management techniques, a prime factor in parallel programming. The article is an excerpt from a book written by Primož Gabrijelčič, titled Delphi High Performance.

Slicing and dicing memory straight from the operating system is a relatively slow operation. In lots of cases, a memory system also doesn't know how to return small chunks of memory. For example, if you call Windows' VirtualAlloc function to get 20 bytes of memory, it will actually reserve 4 KB (or 4,096 bytes) for you. In other words, 4,076 bytes would be wasted. To fix these and other problems, programming languages typically implement their own internal memory management algorithms. When you request 20 bytes of memory, the request goes to that internal memory manager. It still requests memory from the operating system but then splits it internally into multiple parts. In a hypothetical scenario, the internal memory manager would request 4,096 bytes from the operating system and give 20 bytes of that to the application. The next time the application requests some memory (30 bytes, for example), the internal memory manager would serve it from the same 4,096-byte block.

To move from hypothetical to specific, Delphi also includes such a memory manager. Since Delphi 2006, this memory manager has been called FastMM. It was written as an open source memory manager by Pierre LeRiche with help from other Delphi programmers and was later licensed by Borland. FastMM was a great improvement over the previous Delphi memory manager and, although it does not perform perfectly in the parallel programming world, it still functions very well after more than ten years.

Optimizing strings and array allocations

When you create a string, the code allocates memory for its content, copies the content into that memory, and stores the address of this memory in the string variable. If you append a character to this string, it must be stored somewhere in that memory. However, there is no place to store the new character: the original memory block was just big enough to store the original content. The code must, therefore, enlarge that memory block, and only then can the appended character be stored in the newly acquired space. A very similar scenario plays out when you extend a dynamic array. Memory that contains the array data can sometimes be extended in place (without moving), but often this cannot be done. If you do a lot of appending, these constant reallocations will start to slow down the code. The Reallocation demo shows a few examples of such behavior and possible workarounds. The first example, activated by the Append String button, simply appends the '*' character to a string 10 million times.
The code looks simple, but the s := s + '*' assignment hides a potentially slow string reallocation:

procedure TfrmReallocation.btnAppendStringClick(Sender: TObject);
var
  s: String;
  i: Integer;
begin
  s := '';
  for i := 1 to CNumChars do
    s := s + '*';
end;

By now, you probably know that I don't like to present problems that I don't have solutions for, and this is not an exception. In this case, the solution is called SetLength. This function sets a string to a specified size. You can make it shorter, or you can make it longer. You can even set it to the same length as before. In case you are enlarging the string, you have to keep in mind that SetLength will allocate enough memory to store the new string, but it will not initialize it. In other words, the newly allocated string space will contain random data.

A click on the SetLength String button activates the optimized version of the string appending code. As we know that the resulting string will be CNumChars long, the code can call SetLength(s, CNumChars) to preallocate all the memory in one step. After that, we should not append characters to the string as that would add new characters at the end of the preallocated string. Rather, we have to store characters directly into the string by writing to s[i]:

procedure TfrmReallocation.btnSetLengthClick(Sender: TObject);
var
  s: String;
  i: Integer;
begin
  SetLength(s, CNumChars);
  for i := 1 to CNumChars do
    s[i] := '*';
end;

Comparing the speed shows that the second approach is significantly faster. It runs in 33 ms instead of the original 142 ms.

A similar situation happens when you are extending a dynamic array. The code triggered by the Append array button shows how an array may be extended by one element at a time in a loop. Admittedly, the code looks very weird, as nobody in their right mind would write a loop like this. In reality, however, similar code would be split into multiple longer functions and may be hard to spot:

procedure TfrmReallocation.btnAppendArrayClick(Sender: TObject);
var
  arr: TArray<char>;
  i: Integer;
begin
  SetLength(arr, 0);
  for i := 1 to CNumChars do begin
    SetLength(arr, Length(arr) + 1);
    arr[High(arr)] := '*';
  end;
end;

The solution is similar to the string case. We can preallocate the whole array by calling the SetLength function and then write the data into the array elements. We just have to keep in mind that the first array element always has index 0:

procedure TfrmReallocation.btnSetLengthArrayClick(Sender: TObject);
var
  arr: TArray<char>;
  i: Integer;
begin
  SetLength(arr, CNumChars);
  for i := 1 to CNumChars do
    arr[i-1] := '*';
end;

Improvements in speed are similar to the string demo. The original code needs 230 ms to append ten million elements, while the improved code executes in 26 ms.

The third case when you may want to preallocate storage space is when you are appending to a list. As an example, I'll look into a TList<T> class. Internally, it stores the data in a TArray<T>, so it again suffers from constant memory reallocation when you are adding data to the list. The short demo code appends 10 million elements to a list.
As opposed to the previous array demo, this is completely normal looking code, found many times in many applications:

procedure TfrmReallocation.btnAppendTListClick(Sender: TObject);
var
  list: TList<Char>;
  i: Integer;
begin
  list := TList<Char>.Create;
  try
    for i := 1 to CNumChars do
      list.Add('*');
  finally
    FreeAndNil(list);
  end;
end;

To preallocate memory inside a list, you can set the Capacity property to the expected number of elements in the list. This doesn't prevent the list from growing at a later time; it just creates an initial estimate. You can also use Capacity to reduce the memory space used for the list after deleting lots of elements from it. The difference between a list and a string or an array is that, after setting Capacity, you still cannot access list[i] elements directly. Firstly you have to Add them, just as if Capacity was not assigned:

procedure TfrmReallocation.btnSetCapacityTListClick(Sender: TObject);
var
  list: TList<Char>;
  i: Integer;
begin
  list := TList<Char>.Create;
  try
    list.Capacity := CNumChars;
    for i := 1 to CNumChars do
      list.Add('*');
  finally
    FreeAndNil(list);
  end;
end;

Comparing the execution speed shows only a small improvement. The original code executed in 167 ms, while the new version needed 145 ms. The reason for that relatively small change is that TList<T> already manages its storage array. When it runs out of space, it will always at least double the previous size. Internal storage therefore grows from 1 to 2, 4, 8, 16, 32, 64, ... elements. This can, however, waste a lot of memory. In our example, the final size of the internal array is 16,777,216 elements, which is almost 68% more than the ten million we need. By setting the capacity to the exact required size, we have therefore saved 6,777,216 * SizeOf(Char) bytes, or almost 13 megabytes. Other data structures also support the Capacity property. We can find it in TList, TObjectList, TInterfaceList, TStrings, TStringList, TDictionary, TObjectDictionary, and others.

Memory management functions

Besides the various internal functions that the Delphi runtime library (RTL) uses to manage strings, arrays, and other built-in data types, the RTL also implements various functions that you can use in your program to allocate and release memory blocks. In the next few paragraphs, I'll tell you a little bit about them. Memory management functions can be best described if we split them into a few groups, each including functions that were designed to work together.

The first group includes GetMem, AllocMem, ReallocMem, and FreeMem. The procedure GetMem(var P: Pointer; Size: Integer) allocates a memory block of size Size and stores the address of this block in the pointer variable P. This pointer variable is not limited to the Pointer type; it can be of any pointer type (for example, PByte). The new memory block is not initialized and will contain whatever is stored in the memory at that time. Alternatively, you can allocate a memory block with a call to the function AllocMem(Size: Integer): Pointer, which allocates a memory block, fills it with zeroes, and then returns its address. To change the size of a memory block, call the procedure ReallocMem(var P: Pointer; Size: Integer). The variable P must contain a pointer to a memory block, and Size can be either smaller or larger than the original block size. FastMM will try to resize the block in place. If that fails, it will allocate a new memory block, copy the original data into the new block, and return the address of the new block in P.
Just as with GetMem, the newly allocated bytes will not be initialized. To release memory allocated in this way, you should call the FreeMem(var P: Pointer) procedure.

The second group includes GetMemory, ReallocMemory, and FreeMemory. These three work just the same as the functions from the first group, except that they can be used from C++Builder.

The third group contains just two functions, New and Dispose. These two functions can be used to dynamically create and destroy variables of any type. To allocate such a variable, call New(var X: Pointer), where X is again of any pointer type. The compiler will automatically provide the correct size for the memory block, and it will also initialize all managed fields to zero. Unmanaged fields will not be initialized. To release such variables, don't use FreeMem but Dispose(var X: Pointer). In the next section, I'll give a short example of using New and Dispose to dynamically create and destroy variables of a record type. You must never use Dispose to release memory allocated with GetMem or AllocMem. You must also never use FreeMem to release memory allocated with New.

The fourth and last group also contains just two functions, Initialize and Finalize. Strictly speaking, they are not memory management functions. If you create a variable containing managed fields (for example, a record) with a function other than New or AllocMem, it will not be correctly initialized. Managed fields will contain random data, and that will completely break the execution of the program. To fix that, you should call Initialize(var V), passing in the variable (and not a pointer to this variable!). An example in the next section will clarify that. Before you return such a variable to the memory manager, you should clean up all references to managed fields by calling Finalize(var V). It is better to use Dispose, which will do that automatically, but sometimes that is not an option and you have to do it manually. Both functions also exist in a form that accepts a number of variables to initialize. This form can be used to initialize or finalize an array of data:

procedure Initialize(var V; Count: NativeUInt);
procedure Finalize(var V; Count: NativeUInt);

In the next section, I'll dig deeper into the dynamic allocation of record variables. I'll also show how most of the memory allocation functions are used in practice.

Dynamic record allocation

While it is very simple to dynamically create new objects (you just call the Create constructor), dynamic allocation of records and other data types (arrays, strings, ...) is a bit more complicated. In the previous section, we saw that the preferred way of allocating such variables is with the New method. The InitializeFinalize demo shows how this is done in practice. The code will dynamically allocate a variable of type TRecord. To do that, we need a pointer variable pointing to TRecord. The cleanest way to do that is to declare a new type PRecord = ^TRecord:

type
  TRecord = record
    s1, s2, s3, s4: string;
  end;
  PRecord = ^TRecord;

Now, we can just declare a variable of type PRecord and call New on that variable. After that, we can use the rec variable as if it was a normal record and not a pointer.
Technically, we would have to always write rec^.s1, rec^.s4, and so on, but the Delphi compiler is friendly enough and allows us to drop the ^ character:

procedure TfrmInitFin.btnNewDispClick(Sender: TObject);
var
  rec: PRecord;
begin
  New(rec);
  try
    rec.s1 := '4';
    rec.s2 := '2';
    rec.s4 := rec.s1 + rec.s2 + rec.s4;
    ListBox1.Items.Add('New: ' + rec.s4);
  finally
    Dispose(rec);
  end;
end;

Technically, you could just use rec: ^TRecord instead of rec: PRecord, but it is customary to use explicitly declared pointer types, such as PRecord. Another option is to use GetMem instead of New, and FreeMem instead of Dispose. In this case, however, we have to manually prepare the allocated memory for use with a call to Initialize. We must also prepare it to be released with a call to Finalize before we call FreeMem. If we use GetMem for initialization, we must manually provide the correct size of the allocated block. In this case, we can simply use SizeOf(TRecord). We must also be careful with the parameters passed to GetMem and Initialize. You pass a pointer (rec) to GetMem and FreeMem, and the actual record data (rec^) to Initialize and Finalize:

procedure TfrmInitFin.btnInitFinClick(Sender: TObject);
var
  rec: PRecord;
begin
  GetMem(rec, SizeOf(TRecord));
  try
    Initialize(rec^);
    rec.s1 := '4';
    rec.s2 := '2';
    rec.s4 := rec.s1 + rec.s2 + rec.s4;
    ListBox1.Items.Add('GetMem+Initialize: ' + rec.s4);
  finally
    Finalize(rec^);
    FreeMem(rec);
  end;
end;

This demo also shows how the code doesn't work correctly if you allocate a record with GetMem but then don't call Initialize. To test this, click the third button (GetMem). While in actual code the program may sometimes work and sometimes not, I have taken some care so that GetMem will always return a memory block which will not be initialized to zero, and the program will certainly fail.

It is certainly possible to create records dynamically and use them instead of classes, but one question still remains: why? Why would we want to use records instead of objects when working with objects is simpler? The answer, in one word, is speed. The demo program, Allocate, shows the difference in execution speed. A click on the Allocate objects button will create ten million objects of type TNodeObj, which is a typical object that you would find in an implementation of a binary tree. Of course, the code then cleans up after itself by destroying all those objects:

type
  TNodeObj = class
    Left, Right: TNodeObj;
    Data: NativeUInt;
  end;

procedure TfrmAllocate.btnAllocClassClick(Sender: TObject);
var
  i: Integer;
  nodes: TArray<TNodeObj>;
begin
  SetLength(nodes, CNumNodes);
  for i := 0 to CNumNodes-1 do
    nodes[i] := TNodeObj.Create;
  for i := 0 to CNumNodes-1 do
    nodes[i].Free;
end;

Similar code, activated by the Allocate records button, creates ten million records of type TNodeRec, which contains the same fields as TNodeObj:

type
  PNodeRec = ^TNodeRec;
  TNodeRec = record
    Left, Right: PNodeRec;
    Data: NativeUInt;
  end;

procedure TfrmAllocate.btnAllocRecordClick(Sender: TObject);
var
  i: Integer;
  nodes: TArray<PNodeRec>;
begin
  SetLength(nodes, CNumNodes);
  for i := 0 to CNumNodes-1 do
    New(nodes[i]);
  for i := 0 to CNumNodes-1 do
    Dispose(nodes[i]);
end;

Running both methods shows a big difference. While the class-based approach needs 366 ms to initialize objects and 76 ms to free them, the record-based approach needs only 76 ms to initialize records and 56 ms to free them. Where does that big difference come from? When you create an object of a class, lots of things happen.
Firstly, TObject.NewInstance is called to allocate the object. That method calls TObject.InstanceSize to get the size of the object, then GetMem to allocate the memory and, in the end, InitInstance, which fills the allocated memory with zeros. Secondly, a chain of constructors is called. After all that, a chain of AfterConstruction methods is called (if such methods exist). All in all, that is quite a process, which takes some time. Much less is going on when you create a record. If it contains only unmanaged fields, as in our example, a GetMem is called and that's all. If the record contains managed fields, this GetMem is followed by a call to the _Initialize method in the System unit, which initializes the managed fields.

The problem with records is that we cannot declare generic pointers. When we are building trees, for example, we would like to store some data of type T in each node. The initial attempt at that, however, fails. The following code does not compile with the current Delphi compiler:

type
  PNodeRec<T> = ^TNodeRec<T>;
  TNodeRec<T> = record
    Left, Right: PNodeRec<T>;
    Data: T;
  end;

We can circumvent this by moving the TNodeRec<T> declaration inside the generic class that implements a tree. The following code from the Allocate demo shows how we could declare such an internal type as a generic object and as a generic record:

type
  TTree<T> = class
  strict private type
    TNodeObj<T1> = class
      Left, Right: TNodeObj<T1>;
      Data: T1;
    end;
    PNodeRec = ^TNodeRec;
    TNodeRec<T1> = record
      Left, Right: PNodeRec;
      Data: T1;
    end;
    TNodeRec = TNodeRec<T>;
  end;

If you click the Allocate node<string> button, the code will create a TTree<string> object and then create 10 million class-based nodes and the same amount of record-based nodes. This time, New must initialize the managed field Data: string, but the difference in speed is still big. The code needs 669 ms to create and destroy class-based nodes and 133 ms to create and destroy record-based nodes. Another big difference between classes and records is that each object contains two hidden pointer-sized fields. Because of that, each object is 8 bytes larger than you would expect (16 bytes in 64-bit mode). That amounts to 8 * 10,000,000 bytes, or a bit over 76 megabytes. Records are therefore not only faster but also save space!

FastMM internals

To get full speed out of anything, you have to understand how it works, and memory managers are no exception to this rule. To write very fast Delphi applications, you should, therefore, understand how Delphi's default memory manager works. FastMM is not just a memory manager, it is three memory managers in one! It contains three significantly different subsystems: a small block allocator, a medium block allocator, and a large block allocator. The first one, the allocator for small blocks, handles all memory blocks smaller than 2.5 KB. This boundary was determined by observing existing applications. As it turned out, in most Delphi applications this covers 99% of all memory allocations. This is not surprising, as in most Delphi applications most memory is allocated when an application creates and destroys objects and works with arrays and strings, and those are rarely larger than a few hundred characters. Next comes the allocator for medium blocks, which are memory blocks with a size between 2.5 KB and 160 KB. The last one, the allocator for large blocks, handles all other requests. The difference between the allocators lies not just in the size of the memory blocks that they serve, but in the strategy they use to manage memory.
The large block allocator implements the simplest strategy. Whenever it needs some memory, it gets it directly from Windows by calling VirtualAlloc. This function allocates memory in 4 KB blocks so this allocator could waste up to 4,095 bytes per request. As it is used only for blocks larger than 160 KB, this wasted memory doesn't significantly affect the program, though. The medium block allocator gets its memory from the large block allocator. It then carves this larger block into smaller blocks, as they are requested by the application. It also keeps all unused parts of the memory in a linked list so that it can quickly find a memory block that is still free. The small block allocator is where the real smarts of FastMM lies. There are actually 56 small memory allocators, each serving only one size of the memory block. The first one serves 8-byte blocks, the next one 16-byte blocks, followed by the allocator for 24, 32, 40, ... 256, 272, 288, ... 960, 1056, ... 2384, and 2608-byte blocks. They all get memory from the medium block allocator. If you want to see block sizes for all 56 allocators, open FastMM4.pas and search for SmallBlockTypes. What that actually means is that each memory allocation request will waste some memory. If you allocate 28 bytes, they'll be allocated from the 32-byte allocator, so 4 bytes will be wasted. If you allocate 250 bytes, they'll come from the 256-byte allocator and so on. The sizes of memory allocators were carefully chosen so that the amount of wasted memory is typically below 10%, so this doesn't represent a big problem in most applications. Each allocator is basically just an array of equally sized elements (memory blocks). When you allocate a small amount of memory, you'll get back one element of an array. All unused elements are connected into a linked list so that the memory manager can quickly find a free element of an array when it needs one. The following image shows a very simplified representation of FastMM allocators. Only two small block allocators are shown. Boxes with thick borders represent allocated memory. Boxes with thin borders represent unused (free) memory. Free memory blocks are connected into linked lists. Block sizes in different allocators are not to scale: FastMM implements a neat trick which helps a lot when you resize strings or arrays by a small amount. Well, the truth be told, I had to append lots and lots of characters—ten million of them—for this difference to show. If I were appending only a few characters, both versions would run at nearly the same speed. If you can, on the other hand, get your hands on a pre-2006 Delphi and run the demo program there, you'll see that the one-by-one approach runs terribly slow. The difference in speed will be of a few more orders of magnitude larger than in my example. The trick I'm talking about assumes that if you had resized memory once, you'll probably want to do it again, soon. If you are enlarging the memory, it will limit the smallest size of the new memory block to be at least twice the size of the original block plus 32 bytes. Next time you'll want to resize, FastMM will (hopefully) just update the internal information about the allocated memory and return the same block, knowing that there's enough space at the end. All that trickery is hard to understand without an example, so here's one. Let's say we have a string of 5 characters which neatly fits into a 24-byte block. Sorry, what am I hearing? "What? Why!? 5 unicode characters need only 10 bytes!" 
Oh, yes, strings are more complicated than I told you before. In reality, each Delphi UnicodeString and AnsiString contains some additional data besides the actual characters that make up the string. Parts of the string are also:

a 4-byte length of the string
a 4-byte reference count
a 2-byte field storing the size of each string character (either 1 for AnsiString or 2 for UnicodeString)
a 2-byte field storing the character code page

In addition to that, each string includes a terminating Chr(0) character. For a 5-character string, this gives us 4 (length) + 4 (reference count) + 2 (character size) + 2 (codepage) + 5 (characters) * 2 (size of a character) + 2 (terminating Chr(0)) = 24 bytes.

When you add one character to this string, the code will ask the memory manager to enlarge a 24-byte block to 26 bytes. Instead of returning a 26-byte block, FastMM will round that up to 2 * 24 + 32 = 80 bytes. Then it will look for an appropriate allocator, find one that serves 80-byte blocks (great, no memory loss!), and return a block from that allocator. It will, of course, also have to copy data from the original block to the new block. This formula, 2 * size + 32, is used only in small block allocators. A medium block allocator only overallocates by 25%, and a large block allocator doesn't implement this behavior at all. Next time you add one character to this string, FastMM will just look at the memory block, determine that there's still enough space inside this 80-byte memory block, and return the same memory. This will continue for quite some time while the block grows to 80 bytes in two-byte increments. After that, the block will be resized to 2 * 80 + 32 = 192 bytes (yes, there is an allocator for this size), data will be copied, and the game will continue. This behavior indeed wastes some memory but, under most circumstances, significantly boosts the speed of code that was not written with speed in mind.

Memory allocation in a parallel world

We've seen how FastMM boosts the reallocation speed. The life of a memory manager is simple when there is only one thread of execution inside a program. When the memory manager is dealing out the memory, it can be perfectly safe in the knowledge that nothing can interrupt it in this work. When we deal with parallel processing, however, multiple paths of execution simultaneously execute the same program and work on the same data. Because of that, life from the memory manager's perspective suddenly becomes very dangerous. For example, let's assume that one thread wants some memory. The memory manager finds a free memory block on a free list and prepares to return it. At that moment, however, another thread also needs some memory from the same allocator. This second execution thread (running in parallel with the first one) would also find a free memory block on the free list. If the first thread didn't yet update the free list, that may even be the same memory block! That can only result in one thing: complete confusion and crashing programs. It is extremely hard to write code that manipulates some data structures (such as a free list) in a manner that functions correctly in a multithreaded world. So hard that FastMM doesn't even try it. Instead of that, it regulates access to each allocator with a lock. Each of the 56 small block allocators gets its own lock, as do the medium and large block allocators. When a program needs some memory from, say, a 16-byte allocator, FastMM will lock this allocator until the memory is returned to the program.
If during this time another thread requests memory from the same 16-byte allocator, it will have to wait until the first thread finishes. This indeed fixes all problems but introduces a bottleneck: a part of the code where threads must wait to be processed in a serial fashion. If threads do lots of memory allocation, this serialization will completely negate the speed-up that we expected to get from the parallel approach. Such a memory manager would be useless in a parallel world. To fix that, FastMM introduces a memory allocation optimization which only affects small blocks. When accessing a small block allocator, FastMM will try to lock it. If that fails, it will not wait for the allocator to become unlocked but will try to lock the allocator for the next block size. If that succeeds, it will return memory from the second allocator. That will indeed waste more memory but will help with the execution speed. If the second allocator also cannot be locked, FastMM will try to lock the allocator for yet the next block size. If the third allocator can be locked, you'll get back memory from it. Otherwise, FastMM will repeat the process from the beginning. This process can be roughly described with the following pseudo-code:

allocIdx := find best allocator for the memory block
repeat
  if can lock allocIdx then break;
  Inc(allocIdx);
  if can lock allocIdx then break;
  Inc(allocIdx);
  if can lock allocIdx then break;
  Dec(allocIdx, 2)
until false
allocate memory from allocIdx allocator
unlock allocIdx

A careful reader would notice that this code fails when the first line finds the last allocator in the table or the one before that. Instead of adding some conditional code to work around the problem, FastMM rather repeats the last allocator in the list three times. The table of small allocators actually ends with the following sizes: 1,984; 2,176; 2,384; 2,608; 2,608; 2,608. When requesting a block size above 2,384, the first line in the pseudo-code above will always find the first 2,608 allocator, so there will always be two more after it. This approach works great when memory is allocated but hides another problem. And how can I better explain a problem than with a demonstration?

An example of this problem can be found in the program ParallelAllocations. If you run it and click the Run button, the code will compare the serial version of some algorithm with a parallel one. I'm aware that I did not explain parallel programming at all, but the code is so simple that even somebody without any understanding of the topic will guess what it does. The core of the test runs a loop with the Execute method on all objects in a list. If the parallelTest flag is set, the loop is executed in parallel; otherwise, it is executed serially. The only mysterious part in the code, TParallel.For, does exactly what it says: it executes a for loop in parallel.

if parallelTest then
  TParallel.For(0, fList.Count - 1,
    procedure(i: integer)
    begin
      fList[i].Execute;
    end)
else
  for i := 0 to fList.Count - 1 do
    fList[i].Execute;

If you'll be running the program, make sure that you execute it without the debugger (Ctrl + Shift + F9 will do that). Running with the debugger slows down parallel execution and can skew the measurements. On my test machine, parallelizing the program made it almost 4 times faster. Great result! Well, no. Not a great result. You see, the machine I was testing on has 12 cores. If all were running in parallel, I would expect an almost 12x speed-up, not a mere 4-times improvement!
If you take a look at the code, you'll see that each Execute allocates a ton of objects. It is obvious that the problem lies in the memory manager. The question remains, though: where exactly does this problem lie, and how can we find it? I ran into exactly the same problem a few years ago. A highly parallel application which processes gigabytes and gigabytes of data was not running fast enough. There were no obvious problematic points, and I suspected that the culprit was FastMM. I tried swapping the memory manager for a more multithreading-friendly one and, indeed, the problem was somewhat reduced, but I still wanted to know where the original sin lay in my code. I also wanted to continue using FastMM, as it offers great debugging tools. In the end, I found no other solution than to dig into the FastMM internals, find out how it works, and add some logging there. More specifically, I wanted to know when a thread is waiting for a memory manager to become unlocked. I also wanted to know at which locations in my program this happens the most. To cut a (very) long story short, I extended FastMM with support for this kind of logging. This extension was later integrated into the main FastMM branch. As these changes are not included in Delphi, you have to take some steps to use this code. Firstly, you have to download FastMM from the official repository at https://github.com/pleriche/FastMM4. Then you have to unpack it somewhere on the disk and add FastMM4 as the first unit in the project file (.dpr). For example, the ParallelAllocation program starts like this:

program ParallelAllocation;

uses
  FastMM4 in 'FastMM\FastMM4.pas',
  Vcl.Forms,
  ParallelAllocationMain in 'ParallelAllocationMain.pas' {frmParallelAllocation};

When you have done that, you should firstly rebuild your program and test if everything is still working. (It should, but you never know ...) To enable the memory manager logging, you have to define a conditional symbol LogLockContention, rebuild (as FastMM4 has to be recompiled) and, of course, run the program without the debugger. If you do that, you'll see that the program runs quite a bit slower than before. On my test machine, the parallel version was only 1.6x faster than the serial one. The logging takes its toll, but that is not important. The important part will appear when you close the program. At that point, the logger will collect all results and sort them by frequency. The 10 most frequent sources of locking in the program will be saved to a file called <programname>_MemoryManager_EventLog.txt. You will find it in the folder with the <programname>.exe. The three most frequent sources of locking will also be displayed on the screen. The following screenshot shows a cropped version of this log, with some important parts marked out. For starters, we can see that at this location the program waited 19,020 times for a memory manager to become unlocked. Next, we can see that the memory function that caused the problem was FreeMem. Furthermore, we can see that somebody tried to delete from a list (InternalDoDelete) and that this deletion was called from TSpeedTest.Execute, line 130. FreeMem was called because the list in question is actually a TObjectList, and deleting elements from the list caused them to be destroyed. The most important part here is the memory function causing the problem: FreeMem. Of course! Allocations are optimized. If an allocator is locked, the next one will be used and so on. Releasing memory, however, is not optimized!
When we release a memory block, it must be returned to the same allocator that it came from. If two threads want to release memory to the same allocator at the same time, one will have to wait. I had an idea on how to improve this situation: adding a small stack (called a release stack) to each allocator. When FreeMem is called and it cannot lock the allocator, the address of the memory block that is to be released will be stored on that stack, and FreeMem will then quickly exit. When a FreeMem successfully locks an allocator, it firstly releases its own memory block. Then it checks if anything is waiting on the release stack and releases these memory blocks too (if there are any). This change is also included in the main FastMM branch, but it is not activated by default, as it increases the overall memory consumption of the program. However, in some situations it can do miracles, and if you are developing multithreaded programs you certainly should test it out. To enable release stacks, open the project settings for the program, remove the conditional define LogLockContention (as that slows the program down) and add the conditional define UseReleaseStack. Rebuild, as FastMM4.pas has to be recompiled. On my test machine, I got much better results with this option enabled. Instead of a 3.9x speed-up, the parallel version was 6.3x faster than the serial one. The factor is not even close to 12x, as the threads still fight too much over the memory, but the improvement is significant. That is as far as FastMM will take us. For faster execution, we need a more multithreading-friendly memory manager.

To summarize, this article covered the memory management techniques offered by Delphi. We looked into optimization, allocation, and the internals of storage for efficient parallel programming. If you found this post useful, do check out the book Delphi High Performance to learn more about the intricacies of high-performance programming with Delphi.

Read More:
Exploring the Usages of Delphi
Network programming 101 with GAWK (GNU AWK)
A really basic guide to batch file programming
The 5 most popular programming languages in 2018

Fatema Patrawala
19 Jun 2018
11 min read
Whether you’re new to software engineering or have years of experience under your belt, knowing what to learn can be difficult. Which programming language should you learn first? Which programming language should you learn next? There are hundreds of programming languages in widespread use. Each one has different features, and many even have very different applications. Of course, many do not, and knowing which one to use can be tough. Making those decisions is as important a part of being a developer as writing code. Just as English is the international language for most businesses and French is the language of love, different programming languages are better suited for different purposes. Let us take a look at what developers have chosen to be the top programming languages for 2018 in this year’s Packt Skill Up Survey.

Source: Packt Skill Up Survey 2018

Java reigns supreme

Java, the accessible and ever-present programming language, continues to be widespread year on year. It is a 22-year-old language, which, if put into human perspective, is incredible. You would have been old enough to have finished college, have a celebratory alcoholic drink, gamble in Iowa, and get married without parental consent in Mississippi! With time and age, Java has proven its consistency as a reliable programming language for engineers and developers. Our Skill Up 2018 survey data reveals it’s still the most popular programming language. Perhaps one of the reasons for this is Java’s flexibility and adaptability. Programs can run on several different types of machines; as long as the computer has a Java Runtime Environment (JRE) installed, a Java program can run on it. Most types of computers will be compatible with a JRE: PCs running on Windows, Macs, Unix or Linux computers, even large mainframe computers and mobile phones will be compatible with Java. Another reason for Java’s popularity is that it is object oriented. Java code is robust because Java objects contain no references to data external to themselves. This is why Java is so widely used across industry. It’s reliable and secure. When it's time for mobile developers to build Android apps, the first and most popular option is Java. Java is the official language for Android development, which means it has great support from Google, and most apps on the Play Store are built on Java. If one talks about job opportunities in the field of Java, there are plenty, such as ‘Java UI Developer’, ’Android Developer’ and many others. Hence, there are numerous job opportunities available in Java and J2EE combined with other new technologies. These are among the highest paid jobs in the IT industry today. According to Payscale, the average Java Developer salary in the USA is around $102,000, with salaries for job postings nationwide being 77% higher than average salaries. Some of the widely known domains where Java is used extensively are financial services, banking, the stock market, retail, and the scientific and research communities. Finally, it’s worth noting that the demand for Java developers is pretty high, given it’s the language required in many engineering roles. It does make sense to start learning it if you don’t yet know it.

Start learning Java. Explore Packt’s latest Java eBooks and videos
Java Tutorials: Design a RESTful web API with Java

JavaScript retains the runner-up spot

JavaScript has for years featured in the list of top programming languages, and this time it is 2nd after Java.
JavaScript has continued to be everywhere, from front-end web pages to mobile web apps and everything in between. A web developer can add personality to websites by using JavaScript. It is the native language of the browser. If you want to build single-page web apps, there is really only one language option for building client-side single-page apps, and that is JavaScript. It is supported by all popular browsers, like Microsoft Internet Explorer (beginning with version 3.0), Firefox, Safari, Opera, Google Chrome, etc. JavaScript has been the most versatile and popular language among developers because it is simple to learn, gives extended functionality to web pages, and is an inexpensive language. In other words, it does not require any special compilers or text editors to run the script. It is simple to implement and is relatively fast for end users. After the release of Node.js in 2009, the "JavaScript everywhere" paradigm has become a reality. This server-side JavaScript framework allows you to unify web application development around a single programming language, rather than relying on a different language. npm, Node.js's package manager, is the largest ecosystem of open source libraries in the world. Node.js is also being adopted in IoT projects due to its speed, number of plugins, and scalability convenience. Another open-source JavaScript library, known as React Native, lets you build cross-platform mobile applications in order to make a smooth transition from web to mobile. According to Daxx, JavaScript developers grab some of the highest paid tech salaries, with an average of $96,000 in the US. There are plenty more sources that name JavaScript as one of the most sought-after skills in 2017. ITJobsWatch ranked JavaScript as the second most in-demand programming language in the UK, a conclusion based on the number of job ads posted over the last three months.

Read More: 5 JavaScript Machine learning libraries you need to know

Python on the rise

Python is a general-purpose language, often described as utilitarian, a word which makes it sound a little boring. It is in fact anything but that. Why is it considered among the top programming languages? Simple: it is a truly universal language, applicable to a range of problems and areas. If programmers begin working with Python for one job or career, they can easily jump to another, even if it’s in an unrelated industry. Google uses Python in a number of applications. Additionally, it has a developer portal devoted to Python, featuring free classes so employees can learn more about the language. Here are just a few reasons why Python is so popular:

Python comes with plenty of documentation, guides, tutorials, and a developer community which is incredibly active for timely help and support.
Python has Google as one of its corporate sponsors. Google contributes to the growing list of documentation and support tools, and effectively acts as a high-profile evangelist for Python.
Python is one of the most popular languages for data science. It’s used to build machine learning and AI systems. It comes with excellent sets of task-specific libraries, from Numpy and Scipy for scientific computing to Django for web development.
Ask any Python developer and they have to agree Python is speedy, reliable, and efficient. You can deploy Python applications in any environment, and there is little to no performance loss no matter which platform.
Python is easy to learn, probably because it lets you think like a programmer: it is easily readable and almost looks like everyday English. The learning curve is much more gradual than that of other programming languages, which tend to be quite steep. Python contains everything from data structures, tools, and support to the Python community and the Python Software Foundation; everything combined makes it a favorite among developers. According to Gooroo, a platform that provides tech skill and salary analytics, Python is one of the highest-paying programming languages in the USA. In fact, at $103,492 per year, Python developers are on average the second best-paid in the country. The language is used for system operations, web development, server and administrative tools, deployment, scientific modeling, and now in building AI and machine learning systems, among others.

Read More: What are professionals planning to learn this year? Python, deep learning, yes. But also...

C# is not left behind

Since the introduction of C# in 2002 by Microsoft, C#’s popularity has remained relatively high. It is a general-purpose language designed for developing apps on the Microsoft platform. C# can be used to create almost anything, but it’s particularly strong at building Windows desktop applications and games. It can also be used to develop web applications and has become increasingly popular for mobile development too. Cross-platform tools such as Xamarin allow apps written in C# to be used on almost any mobile device. C# is widely used to create games using the Unity game engine, which is the most popular game engine today. Although C#’s syntax is more logical and consistent than C++’s, it is a complex language. Mastering it may take more time than languages like Python. The average pay for a C# Developer, according to Payscale, is $69,006 per year. With C#, you have solid prospects, as big finance corporations are using C# as their language of choice.

In the News: Exciting New Features in C# 8.0
C# Tutorial: Behavior Scripting in C# and Javascript for game developers

SQL remains strong

The acronym SQL stands for Structured Query Language. This essentially means “a programming language that is used to communicate with a database in a structured way”. Much like how Javascript is necessary for making websites exciting and more than just a static page, SQL is one of the only two languages widely used to communicate with databases. SQL is one of the very few languages where you describe what you want, not how to get it. That means the database can be much more intelligent about how it decides to build its response, and is robust to changes in the computing environment it runs on. It is based on set theory, which forces you to think very clearly about what it is you want and express that in a precise way. More importantly, it gives you a vocabulary and a set of tools for thinking about the problem you’re trying to solve without reference to the specific idioms of your application. SQL is a critical skill for many data-related roles. Data scientists and analysts will need to know SQL, for example. But as data reaches different parts of modern business, such as marketing and product management, it’s becoming a useful language for non-developers to learn as well.

Source: Packt Skill Up Survey 2018

By far, MySQL is still the most commonly used database for web-based applications in 2018, according to the report. It’s freeware, but it is frequently updated with features and security improvements.
It offers a lot of functionality, even for a free database engine. There are a variety of user interfaces that can be implemented, and it can be made to work with other databases, including DB2 and Oracle. For organizations that need a robust database management tool but are on a budget, MySQL will be a perfect choice. The average salary for a Senior SQL Developer, according to Glassdoor, is $100,271 in the US. Unlike other areas in the IT industry, job options and growth criteria for SQL developers are somewhat different: you may work as a database administrator, system manager, SQL professional, and so on; it completely depends on the functional knowledge and experience you have gained.

Read More: 12 most common MySQL errors you should be aware of

C++ also among favorites

C++, the general-purpose language for systems programming, was designed in 1979 by Bjarne Stroustrup. Despite competition from other powerful languages such as Java, Python, and SQL, it is still prevalent among developers. People have been predicting its demise for more than 20 years, but it's still growing. Basically, nothing that can handle complexity runs as fast as C++. C++ is designed for fairly hardcore applications, and it's generally used together with some scripting language or other. It is for building systems that need high performance, high reliability, a small footprint, and low energy consumption, all of these good things. That’s why you see telecom and financial applications built on C++. C++ is also considered to be one of the best solutions for creating applications that process music and film. There is even an extensive list of websites and tools created based on C++. Financial analysts predict that earnings in this specialization will reach at least $102,000 per year.

C++ Tutorials: Getting started with C++ Features
Read More: Working with Shaders in C++ to create 3D games

Other programming languages, like C, PHP, Swift, Go, etc., were also on developers' choice lists. The creation of supporting technologies in the last few years has given rise to speculation that programming languages are converging in terms of their capabilities. For example, Node.js enables Javascript developers to create back-end functionality, relinquishing the need to learn PHP or even to hire a separate back-end developer altogether. As of now, we can only speculate on future tech trends. However, it will definitely be worth keeping an eye out for which horse to bet on!

20 ways to describe programming in 5 words
A really basic guide to batch file programming
What is Mob Programming?
A step by step guide to creating Odoo Addon Modules

Sugandha Lahoti
19 Jun 2018
15 min read
Odoo uses a client/server architecture in which clients are web browsers accessing the Odoo server via RPC. Both the server and client extensions are packaged as modules which are optionally loaded in a database. Odoo modules can either add brand new business logic to an Odoo system or alter and extend existing business logic. Everything in Odoo starts and ends with modules. In this article, we will cover the basics of creating Odoo addon modules. The recipes that we will cover are:

Creating and installing a new addon module
Completing the addon module manifest
Organizing the addon module file structure
Adding models

Our main goal here is to understand how an addon module is structured and the typical incremental workflow to add components to it. This post is an excerpt from the book Odoo 11 Development Cookbook - Second Edition by Alexandre Fayolle and Holger Brunn. With this book, you can create fast and efficient server-side applications using the latest features of Odoo v11. For this article, you should have Odoo installed. You are also expected to be comfortable discovering and installing extra addon modules.

Creating and installing a new addon module

In this recipe, we will create a new module, make it available in our Odoo instance, and install it.

Getting ready

We will need an Odoo instance ready to use. For explanation purposes, we will assume Odoo is available at ~/odoo-dev/odoo, although any other location of your preference can be used. We will also need a location for our Odoo modules. For the purpose of this recipe, we will use a local-addons directory alongside the odoo directory, at ~/odoo-dev/local-addons.

How to do it...

The following steps will create and install a new addon module:

1. Change to the working directory and create the addons directory where our custom module will be placed:

$ cd ~/odoo-dev
$ mkdir local-addons

2. Choose a technical name for the new module and create a directory with that name for the module. For our example, we will use my_module:

$ mkdir local-addons/my_module

A module's technical name must be a valid Python identifier; it must begin with a letter, and only contain letters (preferably lowercase), numbers, and underscore characters.

3. Make the Python module importable by adding an __init__.py file:

$ touch local-addons/my_module/__init__.py

4. Add a minimal module manifest for Odoo to detect it. Create a __manifest__.py file with this line:

{'name': 'My module'}

5. Start your Odoo instance, including our module directory in the addons path:

$ odoo/odoo-bin --addons-path=odoo/addons/,local-addons/

If the --save option is added to the Odoo command, the addons path will be saved in the configuration file. Next time you start the server, if no addons path option is provided, this will be used.

6. Make the new module available in your Odoo instance; log in to Odoo using admin, enable the Developer Mode in the About box, and, in the Apps top menu, select Update Apps List. Now, Odoo should know about our Odoo module.

7. Select the Apps menu at the top and, in the search bar in the top-right, delete the default Apps filter and search for my_module. Click on its Install button, and the installation will be concluded.

How it works...

An Odoo module is a directory containing code files and other assets. The directory name used is the module's technical name. The name key in the module manifest is its title. The __manifest__.py file is the module manifest.
It contains a Python dictionary with information about the module, the modules it depends on, and the data files that it will load. In the example, a minimal manifest file was used, but in real modules we will want to add a few other important keys. These are discussed in the Completing the module manifest recipe, which we will see next. The module directory must be Python-importable, so it also needs to have an __init__.py file, even if it's empty. To load a module, the Odoo server will import it. This will cause the code in the __init__.py file to be executed, so it works as an entry point to run the module's Python code. Due to this, it will usually contain import statements to load the module's Python files and submodules. Known modules can be installed directly from the command line using the --init or -i option. This list is initially set when you create a new database, from the modules found on the addons path provided at that time. It can be updated in an existing database with the Update Module List menu.

Completing the addon module manifest

The manifest is an important piece for Odoo modules. It contains important information about the module and declares the data files that should be loaded.

Getting ready

We should have a module to work with, already containing a __manifest__.py manifest file. You may want to follow the previous recipe to provide such a module to work with.

How to do it...

We will add a manifest file and an icon to our addon module:

1. To create a manifest file with the most relevant keys, edit the module's __manifest__.py file to look like this:

{
    'name': "Title",
    'summary': "Short subtitle phrase",
    'description': """Long description""",
    'author': "Your name",
    'license': "AGPL-3",
    'website': "http://www.example.com",
    'category': 'Uncategorized',
    'version': '11.0.1.0.0',
    'depends': ['base'],
    'data': ['views.xml'],
    'demo': ['demo.xml'],
}

2. To add an icon for the module, choose a PNG image to use and copy it to static/description/icon.png.

How it works...

The remaining content is a regular Python dictionary, with keys and values. The example manifest we used contains the most relevant keys:

name: This is the title for the module.
summary: This is the subtitle with a one-line description.
description: This is a long description written in plain text or the ReStructuredText (RST) format. It is usually surrounded by triple quotes, which are used in Python to delimit multiline texts. For an RST quickstart reference, visit http://docutils.sourceforge.net/docs/user/rst/quickstart.html.
author: This is a string with the name of the authors. When there is more than one, it is common practice to use a comma to separate their names, but note that it still should be a string, not a Python list.
license: This is the identifier for the license under which the module is made available. It is limited to a predefined list, and the most frequent option is AGPL-3. Other possibilities include LGPL-3, Other OSI approved license, and Other proprietary.
website: This is a URL people should visit to know more about the module or the authors.
category: This is used to organize modules in areas of interest. The list of the standard category names available can be seen at https://github.com/odoo/odoo/blob/11.0/odoo/addons/base/module/module_data.xml. However, it's also possible to define other new category names here.
version: This is the module's version number. It can be used by the Odoo app store to detect newer versions for installed modules.
If the version number does not begin with the Odoo target version (for example, 11.0), it will be automatically added. Nevertheless, it will be more informative if you explicitly state the Odoo target version, for example, using 11.0.1.0.0 or 11.0.1.0 instead of 1.0.0 or 1.0.

- depends: This is a list with the technical names of the modules it directly depends on. If none, we should at least depend on the base module. Don't forget to include any module defining XML IDs, Views, or Models referenced by this module. That will ensure that they all load in the correct order, avoiding hard-to-debug errors.
- data: This is a list of relative paths to the data files to load with module installation or upgrade. The paths are relative to the module root directory. Usually, these are XML and CSV files, but it's also possible to have YAML data files.
- demo: This is the list of relative paths to the files with demonstration data to load. These will only be loaded if the database was created with the Demo Data flag enabled.

The image that is used as the module icon is the PNG file at static/description/icon.png.

Odoo is expected to have significant changes between major versions, so modules built for one major version are likely to not be compatible with the next version without conversion and migration work. Due to this, it's important to be sure about a module's Odoo target version before installing it.

There's more...

Instead of having the long description in the module manifest, it's possible to have it in its own file. Since version 8.0, it can be replaced by a README file, with either a .txt, .rst, or an .md (Markdown) extension. Otherwise, include a description/index.html file in the module. This HTML description will override a description defined in the manifest file.

There are a few more keys that are frequently used:

- application: If this is True, the module is listed as an application. Usually, this is used for the central module of a functional area.
- auto_install: If this is True, it indicates that this is a "glue" module, which is automatically installed when all of its dependencies are installed.
- installable: If this is True (the default value), it indicates that the module is available for installation.

Organizing the addon module file structure

An addon module contains code files and other assets such as XML files and images. For most of these files, we are free to choose where to place them inside the module directory. However, Odoo uses some conventions on the module structure, so it is advisable to follow them.

Getting ready

We are expected to have an addon module directory with only the __init__.py and __manifest__.py files. In this recipe, we suppose this is local-addons/my_module.

How to do it...

To create the basic skeleton for the addon module, perform the given steps:

1. Create the directories for code files:

$ cd local-addons/my_module
$ mkdir models
$ touch models/__init__.py
$ mkdir controllers
$ touch controllers/__init__.py
$ mkdir views
$ mkdir security
$ mkdir data
$ mkdir demo
$ mkdir i18n
$ mkdir -p static/description

2. Edit the module's top __init__.py file so that the code in subdirectories is loaded:

from . import models
from . import controllers

This should get us started with a structure containing the most used directories, similar to this one:

.
├── __init__.py
├── __manifest__.py
├── controllers
│   └── __init__.py
├── data
├── demo
├── i18n
├── models
│   └── __init__.py
├── security
├── static
│   └── description
└── views
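As an optional aside, the same skeleton can be created programmatically. The following helper script is just a convenience sketch of ours; it is not part of Odoo or of the book's recipe:

import os

CODE_DIRS = ('models', 'controllers')
OTHER_DIRS = ('views', 'security', 'data', 'demo', 'i18n',
              'static/description')

def scaffold(module_path):
    os.makedirs(module_path, exist_ok=True)
    # Top-level files every module needs
    open(os.path.join(module_path, '__init__.py'), 'a').close()
    with open(os.path.join(module_path, '__manifest__.py'), 'w') as f:
        f.write("{'name': 'My module'}\n")
    for subdir in CODE_DIRS + OTHER_DIRS:
        os.makedirs(os.path.join(module_path, subdir), exist_ok=True)
    # Only the code directories need their own __init__.py
    for code_dir in CODE_DIRS:
        open(os.path.join(module_path, code_dir, '__init__.py'), 'a').close()

scaffold('local-addons/my_module')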
How it works...

To provide some context, an Odoo addon module can have three types of files:

- Python code, loaded by the __init__.py files, where the .py files and code subdirectories are imported. Subdirectories containing Python code, in turn, need their own __init__.py.
- Data files, to be declared in the data and demo keys of the __manifest__.py module manifest in order to be loaded. These are usually XML and CSV files for the user interface, fixture data, and demonstration data. There can also be YAML files, which can include some procedural instructions that are run when the module is loaded, for instance, to generate or update records programmatically rather than statically in an XML file.
- Web assets, such as JavaScript code and libraries, CSS, and QWeb/HTML templates, also play an important part. These are declared through an XML file extending the master templates to add these assets to the web client or website pages.

The addon files are to be organized in these directories:

- models/ contains the backend code files, creating the Models and their business logic. A file per Model is recommended, with the same name as the model, for example, library_book.py for the library.book model.
- views/ contains the XML files for the user interface, with the actions, forms, lists, and so on. As with models, it is advised to have one file per model. Filenames for website templates are expected to end with the _template suffix.
- data/ contains other data files with module initial data.
- demo/ contains data files with demonstration data, useful for tests, training, or module evaluation.
- i18n/ is where Odoo will look for the translation .pot and .po files. These files don't need to be mentioned in the manifest file.
- security/ contains the data files defining access control lists, usually an ir.model.access.csv file, and possibly an XML file to define access Groups and Record Rules for row-level security.
- controllers/ contains the code files for the website controllers, for modules providing that kind of feature.
- static/ is where all web assets are expected to be placed. Unlike the other directories, this directory name is not just a convention, and only files inside it can be made available for the Odoo web pages. These files don't need to be mentioned in the module manifest, but will have to be referred to in the web template.

When adding new files to a module, don't forget to declare them either in the __manifest__.py (for data files) or __init__.py (for code files); otherwise, those files will be ignored and won't be loaded.

Adding models

Models define the data structures to be used by our business applications. This recipe shows how to add a basic model to a module. We will use a simple book library example to explain this; we want a model to represent books. Each book has a name and a list of authors.

Getting ready

We should have a module to work with. We will use an empty my_module for our explanation.

How to do it...

To add a new Model, we add a Python file describing it and then upgrade the addon module (or install it, if it was not already done).
The paths used are relative to our addon module location (for example, ~/odoo-dev/local-addons/my_module/):

1. Add a Python file, models/library_book.py, with the following code:

from odoo import models, fields

class LibraryBook(models.Model):
    _name = 'library.book'
    name = fields.Char('Title', required=True)
    date_release = fields.Date('Release Date')
    author_ids = fields.Many2many(
        'res.partner',
        string='Authors'
    )

2. Add a models/__init__.py initialization file importing the code file, with the following code:

from . import library_book

3. Edit the module's top __init__.py file to have the models/ directory loaded by the module:

from . import models

4. Upgrade the Odoo module, either from the command line or from the Apps menu in the user interface. If you look closely at the server log while upgrading the module, you should see this line:

odoo.modules.registry: module my_module: creating or updating database table

After this, the new library.book model should be available in our Odoo instance. If we have the technical tools activated, we can confirm that by looking it up at Settings | Technical | Database Structure | Models.

How it works...

Our first step was to create a Python file where our new model was defined. Odoo models are objects derived from the Odoo Model Python class. When a new model is defined, it is also added to a central model registry. This makes it easier for other modules to make modifications to it later.

Models have a few generic attributes prefixed with an underscore. The most important one is _name, providing a unique internal identifier to be used throughout the Odoo instance.

The model fields are defined as class attributes. We began by defining the name field of the Char type. It is convenient for models to have this field because, by default, it is used as the record description when referenced from other models.

We also used an example of a relational field, author_ids. It defines a many-to-many relation between Library Books and the partners: a book can have many authors, and each author can have written many books.

Next, we must make our module aware of this new Python file. This is done by the __init__.py files. Since we placed the code inside the models/ subdirectory, we need the top __init__ file to import that directory, which should, in turn, contain another __init__ file importing each of the code files there (just one, in our case).

Changes to Odoo models are activated by upgrading the module. The Odoo server will handle the translation of the model class into database structure changes.

Although no example is provided in the recipe itself, business logic can also be added to these Python files, either by adding new methods to the Model's class or by extending the existing methods, such as create() or write(); a small sketch of this is shown at the end of this section.

Thus we learned about the structure of an Odoo addon module and learned, step-by-step, how to create a simple module from scratch. To know more about how to define access rules for your data, how to expose your data models to end users on the back end and on the front end, and how to create beautiful PDF versions of your data, read the book Odoo 11 Development Cookbook - Second Edition.
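To make that closing remark about business logic concrete, here is a minimal, hypothetical sketch. It extends the library.book model defined in this recipe using the standard Odoo inheritance mechanism; the is_available field and the mark_as_lost() method are our own illustrative inventions, not part of the book's recipe:

from odoo import api, fields, models

class LibraryBook(models.Model):
    _inherit = 'library.book'  # extend the model defined earlier

    # A hypothetical field used by the hypothetical method below
    is_available = fields.Boolean('Available?', default=True)

    @api.multi
    def mark_as_lost(self):
        # A new method adding business logic: flag the records as unavailable
        self.write({'is_available': False})

    @api.model
    def create(self, values):
        # Extending an existing method: delegate to the standard create(),
        # then run any extra post-creation logic
        record = super(LibraryBook, self).create(values)
        # ... custom post-creation logic would go here ...
        return record

After another module upgrade, a method such as mark_as_lost() could, for instance, be wired to a button in a form view.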

Implement an API Design-first approach for building APIs [Tutorial]

Packt Editorial Staff
15 Jun 2018
9 min read
The Monster Records & Associates (MRA), a fictional music records company, having realised that its biggest asset is in fact its data, embarked on a digital transformation with the aim to offer its products and offerings completely online and via APIs. This article is an excerpt taken from the book Implementing Oracle API Platform Cloud Service, written by Andy Bell, Sander Rensen, Luis Weir, and Phil Wilkins. In this post we are going to take you through an interesting MRA case study: how the company adopted an API design-first approach for building its APIs. We will go through the process and steps performed by MRA for this implementation.

The Problem Scenario

MRA had embarked on a digital transformation journey with the objective to become a digital organisation capable of offering tailored (à la carte) offerings to artists, such as handling an artist's online presence and on-demand distribution of an artist's digital media to music streaming services such as Spotify, Apple Music, Google Play Music, Amazon Prime Music, Pandora, and Deezer, to name a few.

Having fully acknowledged that their most valuable asset is in fact their media data, MRA wanted to capitalise on such assets and determined that the quickest and most effective way to achieve this was by exposing a public API capable of providing access, on-demand, to MRA's Media Catalogue assets such as artists, songs, and albums.

Figure 1: MRA's Media Catalogue API

The idea being, once such assets became accessible via an API, streaming services could, on-demand and 24x7, explore MRA's repertoire, purchase rights-to-use, and start streaming. In addition, the API could also open the door to a brand new global audience: millions of app developers constantly innovating. If only a fraction of such a huge audience leveraged MRA's Media Catalogue API, it would still represent a considerable success for MRA.

However, as with everything, there is a challenge in realising such a vision. MRA, like many other organizations, had a level of experience with systems integrations and Service Oriented Architectures (SOA). One of the lessons learnt from SOA, however, was that the cycles for designing, building, prototyping, and testing SOAP-based Web Services could be quite lengthy and expensive.

An API differs from a service in that the former represents the RESTful interface a consumer application interacts with, whereas the latter is the actual implementation (the code) behind an API. An HTTP endpoint exposed by a service is defined as an unmanaged API. When a service endpoint is accessed via an API Gateway where policies such as app-key validation, authentication/authorization, and other policies are enforced, it becomes a managed API. The book, Implementing Oracle API Platform Cloud Service, refers to managed APIs as simply APIs and unmanaged APIs as simply service endpoints.

Especially when it came to capturing and accommodating the feedback from Client Application Developers (API consumers), MRA had very bad experiences, as on the majority of occasions they came to realize very late in the software lifecycle that the Web Service developed did not meet the expectations of its consumers.

Figure 2: Feedback loops in traditional web service design

Refactoring web services in this approach wasn't just time consuming but also an expensive exercise, as both the service design (WSDL) and code had to be refactored and re-tested in order to accommodate the feedback received, before application developers could try a service again.
Naturally, service designers and developers avoided making changes as much as possible, and thus challenged the feedback received from application developers. This in turn created friction between the two teams, and on some occasions it meant application developers finding alternative routes to solve their needs rather than using the web service. This was the worst possible scenario, as it meant that the investment made in implementing a web service could have been wasted.

API design-first process

Learning from these experiences, and acknowledging the challenges that such a waterfall-like process imposed on a digital transformation initiative, MRA was quite keen to adopt a more agile, interactive, but also quicker way to deliver modern RESTful APIs. The idea was clear: by engaging application developers (API consumers) in the initial stages of the design process, feedback would be captured and reflected back in the interface design (the API) early as well. Not only would this shorten the feedback loops, but it would also ensure that once the underlying services were implemented, they would expose an interface already endorsed and tested by its consumers, as opposed to risking building a service that won't satisfy the client expectations and needs late in the process.

Figure 3: API design-first approach vs traditional service design

The implication of this approach, though, is that the tooling and notation used to define the API have to be both simple and rich in capability, so that the task of designing and mocking API endpoints is quick and easy; if the process becomes cumbersome, it defeats its purpose.

We elaborate on the different steps undertaken by MRA when designing its Media Catalogue API using Apiary and related tools in the book, Implementing Oracle API Platform Cloud Service. Here are the steps:

1. Defining the API type
2. Defining the API's domain semantics
3. Creating the API definition with its main resources
4. Trying the API mock
5. Defining MSON Data Structures
6. Pushing the API Blueprint to GitHub
7. Publishing the API mock in Oracle API Platform CS
8. Setting up Dredd for continuous testing of API endpoints against the API blueprint

Defining the API type: A fundamental step when designing any API is to first define what its type is, as this will determine the guiding principles to consider during the design. Two broad types were considered:

- Single-Purpose APIs: These are APIs that serve a unique and specific purpose, typically derived from an unambiguous need associated with a user journey or use case.
- Multi-Purpose APIs: These APIs are more generic in nature and are meant to satisfy not just one but multiple use cases and scenarios. They are not bound (coupled) to a specific user journey or system of engagement (for example, a mobile app), and are therefore ideal for reuse enterprise-wide.

MRA's Media Catalogue API was specifically targeted at two main audiences: music streaming services and application developers in general. Therefore, the API had to be both Public and Multi-Purpose.

Defining the API's domain semantics: This step requires a proper understanding of the API's bounded context, the Media Catalogue. To do so, the entities, key attributes, and relationships within the bounded context itself were identified and defined using semantics appropriate for the purpose of the API; a small illustrative sketch follows below.
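Purely to illustrate what captured domain semantics might look like, here is a small sketch modelling the entities named later in this article (Artist, Album, and Song) as Python 3.7+ dataclasses. The attribute names and relationships shown are our own assumptions, not MRA's actual schema:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Artist:
    name: str

@dataclass
class Song:
    title: str
    duration_seconds: int

@dataclass
class Album:
    title: str
    # Relationships: an album gathers songs and can have several artists
    artists: List[Artist] = field(default_factory=list)
    songs: List[Song] = field(default_factory=list)

catalogue = [
    Album(title='Demo Album',
          artists=[Artist(name='A. Singer')],
          songs=[Song(title='First Track', duration_seconds=215)]),
]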
Creating the API definition with its main resources: This step shows how to create an API and define its main resources, parameters, and sample payloads. It covers the steps followed by MRA when creating the Media Catalogue API definition and its associated API mock.

Trying the API mock: This part describes how Apiary's automatically generated API mocks can be used to satisfy one of the most important steps in API design-first: trying an API early in the lifecycle, before the API is actually implemented. This is a critical step, as collecting feedback from API consumers early can potentially save numerous hours of code refactoring later in the project. (A tiny client-side sketch of this step appears at the end of this article.)

Defining MSON Data Structures: The Markdown Syntax for Object Notation (MSON) is a plain-text syntax for the description and validation of data structures in API Blueprint. It provides a way to represent objects (for example, an artist) in a human-readable plain-text form. This part involves the steps to define the Artist, Album, and Song objects using the MSON notation.

Pushing the API Blueprint to GitHub: API Blueprints can be pushed into GitHub repositories, so they can be version controlled but, most importantly, can follow a similar GitHub cycle as any other code asset. This step is also required in order to configure Dredd to validate API endpoints against API Blueprint definitions.

Publishing the API mock in Oracle API Platform CS: Although Apiary provides an API mock URL that can be accessed directly, it is recommended that the API mock is instead published and accessed via the Oracle API Platform Cloud Service.

Setting up Dredd for continuous testing of API endpoints against the API blueprint: The last step of the API design-first process is to configure Dredd to continuously validate that an API endpoint exposed through the API Gateway is always compliant with its corresponding API Blueprint definition. The idea is to ensure that client application code is not broken when an API Policy is changed to point to a Backend Service after it has been built and deployed.

We discussed the API design-first approach MRA used for building its APIs. MRA's business scenario demanded a more efficient and leaner process for implementing APIs, and we saw how an API design-first process can effectively help organizations such as MRA gain greater speed, agility, and efficiency. Here's a summary of the basic steps to realize such a process:

- Choose your API type: We introduced concepts such as Single-Purpose and Multi-Purpose APIs to decide on what type of API to adopt.
- Define your APIs: We covered the need for creating an API definition and an API mock in Apiary based on API Blueprints and the Markdown Syntax for Object Notation (MSON).
- Create and publish the API: Creation and publication of an API using the Oracle API Platform Cloud Service.
- Continuously test: Finally, the configuration of Dredd to verify API endpoints' compliance with the API definition.

You just enjoyed an excerpt from the book Implementing Oracle API Platform Cloud Service. Grab the latest edition of this book to work with the newest Oracle APIs, and interface with an increasingly complex array of services your clients want.
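Finally, as promised in the Trying the API mock step above, here is what exercising a mock can look like from a consumer's point of view. This is a hedged sketch: the base URL is a placeholder (Apiary generates a unique mock URL per project), and the /albums resource and its payload shape are our own assumptions rather than MRA's actual API definition:

import requests

MOCK_BASE_URL = 'https://private-example.apiary-mock.com'  # placeholder URL

def try_mock():
    # The mock answers with the sample payload defined in the API Blueprint
    response = requests.get(MOCK_BASE_URL + '/albums')
    response.raise_for_status()
    for album in response.json():  # assumes the sample payload is a JSON list
        print(album.get('title'))

try_mock()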