Tech Guides - Programming

81 Articles

What are the best programming languages for building APIs?

Antonio Cucciniello
11 Jun 2017
4 min read
Are you in the process of designing your first web application? Maybe you have built some in the past but are looking for a change of language to increase your skill set, or to try out something new. If you fit into those categories, you are in the right place. With all of the information out there, it can be hard to decide on a programming language for your next product or project. While any programming language can ultimately be used to write APIs, some are better suited and more efficient for the job than others. Today we will discuss what should be taken into consideration when choosing the programming language to build out APIs for your web app.

Comfort is essential when it comes to programming languages

This goes out to any developer who has experience in a certain language. If you already have experience in a language, you will have an easier time developing, you will understand the concepts involved, and you will be able to make more progress right out of the gate. This translates to improved code and performance as well, because you can spend more time on those rather than on learning a brand new programming language. For example, if I have been developing in Python for a few years and have the option between PHP and Python for a project, I simply select Python because of the time already invested in learning it. This is extremely important because, when trying to do something new, you want to limit the number of unknowns in the project. That will help your learning and help you achieve better results. If you are a brand new developer with zero programming experience, the following sections might help you narrow your options.

Libraries and frameworks that support developing APIs

The next question to ask in the process of eliminating potential programming languages for your API is: does the language come with plenty of options for libraries or frameworks that aid in developing APIs? To continue with the Python example from the previous section, there is the Django REST framework, which is built on top of Django, a web development framework for Python, and is made to create APIs faster and more easily. Did you hear faster and easier? Why yes you did, and that is why this is important. These libraries and frameworks speed up the development process by providing functions and objects that handle plenty of the repetitive or dirty work in building an API. Once you have spent some time researching what is available in terms of libraries and frameworks for each language, it is time to check out how active the communities are.

Support and community

The next question to ask yourself in this process is: are the frameworks and libraries for this programming language still being supported? If so, how active is the community of developers? Do they release continuous or regular updates to their software and capabilities? Do the updates improve security and usability? If few people use the language and it is no longer being updated with bug fixes, you may not want to continue with it. Another thing to pay attention to is the community of users. Are there plenty of resources for you to learn from? How clear and available is the documentation? Are there experienced developers with blog posts on the topics you need to learn? Are there questions being asked and answered on Stack Overflow?
Are there any hard resources such as magazines or textbooks that show you how to use these languages and frameworks?

Potential languages for building APIs

From my experience, a number of programming languages work well for building APIs. Here is an example framework for each of these languages, which you can use to start developing your next API:

Language            Framework
Java                Spring
JavaScript (Node)   Express
Python              Django
PHP                 Laravel
Ruby                Ruby on Rails

Ultimately, the programming language you select depends on several factors: your experience with the language, the frameworks available for API building, and how active both the support and the community are. Do not be afraid to try something new! You can always learn, but if you are concerned about speed and ease of development, use these criteria to help select your language. Leave a comment down below and let us know which programming language is your favorite and how you will use it in your future applications!
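As a closing illustration of what such frameworks buy you, here is a minimal route sketch in Scala, one more language/framework pairing beyond the table above. This is a hedged sketch, assuming Akka HTTP 10.2+ and its Akka dependencies are on the classpath; the endpoint path and JSON payload are made up for the example.

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object HelloApi extends App {
  implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "api")

  // A single GET /hello endpoint returning a hardcoded JSON string.
  // The framework handles HTTP parsing, routing, and response encoding.
  val route = path("hello") {
    get {
      complete("""{"message": "hello"}""")
    }
  }

  Http().newServerAt("localhost", 8080).bind(route)
  println("Listening on http://localhost:8080/hello")
}
```

Whatever language you pick, the shape is the same: the framework supplies the HTTP plumbing and you write only the handlers, which is exactly the "faster and easier" described above.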

Common big data design patterns

Sugandha Lahoti
08 Jul 2018
17 min read
Design patterns have provided many ways to simplify the development of software applications. Now that organizations are beginning to tackle applications that leverage new sources and types of big data, design patterns for big data are needed. These big data design patterns aim to reduce complexity, boost the performance of integration, and improve the results of working with new and larger forms of data. This article introduces the common big data design patterns organized by data layer: the data sources and ingestion layer, the data storage layer, and the data access layer. This article is an excerpt from Architectural Patterns by Pethuru Raj, Anupama Raman, and Harihara Subramanian. In this book, you will learn the importance of architectural and design patterns in business-critical applications.

Data sources and ingestion layer

Enterprise big data systems face a variety of data sources carrying non-relevant information (noise) alongside relevant (signal) data. The noise-to-signal ratio is very high, so filtering the noise from the pertinent information, handling high volumes, and keeping up with the velocity of data are significant tasks. This is the responsibility of the ingestion layer. The common challenges in the ingestion layer are as follows:

- Multiple data source load and prioritization
- Ingested data indexing and tagging
- Data validation and cleansing
- Data transformation and compression

The preceding diagram depicts the building blocks of the ingestion layer and its various components. We need patterns for data source to ingestion layer communication that take care of performance, scalability, and availability requirements. In this section, we will discuss the following ingestion and streaming patterns and how they help to address the challenges in the ingestion layer. We will also touch upon some common workload patterns, including:

- Multisource extractor
- Multidestination
- Protocol converter
- Just-in-time (JIT) transformation
- Real-time streaming pattern

Multisource extractor

An approach to ingesting multiple data types from multiple data sources efficiently is termed a multisource extractor. Efficiency covers many factors, such as data velocity, data size, data frequency, and managing various data formats over an unreliable network, mixed network bandwidth, and different technologies and systems. The multisource extractor system ensures high availability and distribution. It also ensures that the vast volume of data gets segregated into multiple batches across different nodes. A single-node implementation is still helpful for lower volumes from a handful of clients, and of course for a significant amount of data from multiple clients processed in batches. Partitioning into small volumes in clusters produces excellent results. Data enrichers handle initial data aggregation and data cleansing. Enrichers ensure file transfer reliability, validation, noise reduction, compression, and transformation from native formats to standard formats. Collection agent nodes represent intermediary cluster systems, which help with final data processing and data loading to the destination systems.
The following are the benefits of the multisource extractor:

- Provides reasonable speed for storing and consuming the data
- Better data prioritization and processing
- Drives improved business decisions
- Decoupled and independent from data production to data consumption
- Data semantics and detection of changed data
- Scalable and fault-tolerant system

The following are the impacts of the multisource extractor:

- Difficult or impossible to achieve near real-time data processing
- Need to maintain multiple copies in enrichers and collection agents, leading to data redundancy and mammoth data volumes on each node
- High availability trade-off, with high costs to manage system capacity growth
- Increased infrastructure and configuration complexity to maintain batch processing

Multidestination pattern

In multisourcing, we saw the raw data ingested into HDFS, but in most common cases the enterprise needs to ingest raw data not only into new HDFS systems but also into its existing traditional data storage, such as Informatica or other analytics platforms. In such cases, the additional number of data streams leads to many challenges, such as storage overflow, data errors (also known as data regret), an increase in time to transfer and process data, and so on. The multidestination pattern is considered a better approach to overcome all of these challenges. This pattern is very similar to multisourcing up to the point where the data is ready to be integrated with multiple destinations (refer to the following diagram). The router publishes the improved data and then broadcasts it to the subscriber destinations (already registered with a publishing agent on the router). Enrichers can act as publishers as well as subscribers; a minimal sketch of this router/subscriber relationship follows the lists below. Deploying routers in a cluster environment is also recommended for high volumes and a large number of subscribers. The following are the benefits of the multidestination pattern:

- Highly scalable, flexible, fast, resilient to data failure, and cost-effective
- Organizations can start to ingest data into multiple data stores, including existing RDBMSes as well as NoSQL data stores
- Allows you to use simple query languages, such as Hive and Pig, along with traditional analytics
- Provides the ability to partition the data for flexible access and decentralized processing
- Possibility of decentralized computation in the data nodes
- Due to replication on HDFS nodes, there are no data regrets
- Self-reliant data nodes can add more nodes without any delay

The following are the impacts of the multidestination pattern:

- Needs complex or additional infrastructure to manage distributed nodes
- Needs to manage distributed data in secured networks to ensure data security
- Needs enforcement, governance, and stringent practices to manage the integrity and consistency of data
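Here is the promised sketch of the multidestination idea in Scala. It is illustrative only: Router, Destination, and the two destination classes are hypothetical names, not part of any ingestion product.

```scala
// A destination is anything that can receive a published record.
trait Destination {
  def write(record: String): Unit
}

class HdfsDestination extends Destination {
  def write(record: String): Unit = println(s"HDFS <- $record")
}

class WarehouseDestination extends Destination {
  def write(record: String): Unit = println(s"warehouse <- $record")
}

// The router broadcasts each enriched record to every registered subscriber.
class Router {
  private var subscribers = List.empty[Destination]
  def register(d: Destination): Unit = subscribers ::= d
  def publish(record: String): Unit = subscribers.foreach(_.write(record))
}

object MultidestinationDemo extends App {
  val router = new Router
  router.register(new HdfsDestination)      // new big data store
  router.register(new WarehouseDestination) // existing traditional store
  router.publish("enriched-event-42")       // both destinations receive it
}
```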
Protocol converter

This is a mediatory approach that provides an abstraction for the incoming data from various systems. The protocol converter pattern provides an efficient way to ingest a variety of unstructured data from multiple data sources and different protocols. The message exchanger handles synchronous and asynchronous messages from various protocols and handlers, as represented in the following diagram. It performs various mediator functions, such as file handling, web services message handling, stream handling, serialization, and so on. In the protocol converter pattern, the ingestion layer holds responsibilities such as identifying the various channels of incoming events, determining incoming data structures, providing mediated services to route multiple protocols into suitable sinks, providing one standard way of representing incoming messages, providing handlers to manage various request types, and providing abstraction from the incoming protocol layers.

Just-in-time (JIT) transformation pattern

The JIT transformation pattern is the best fit in situations where raw data needs to be preloaded in the data stores before transformation and processing can happen. In this kind of business case, the pattern runs independent preprocessing batch jobs that clean, validate, correlate, and transform the data, and then store the transformed information in the same data store (HDFS/NoSQL); that is, the transformed data can coexist with the raw data. The preceding diagram depicts the data store holding raw data alongside the transformed datasets. Please note that the data enricher of the multi-data-source pattern is absent in this pattern, and more than one batch job can run in parallel to transform the data as required in the big data storage, such as HDFS, MongoDB, and so on.

Real-time streaming pattern

Most modern businesses need continuous and real-time processing of unstructured data for their enterprise big data applications. Real-time streaming implementations need to have the following characteristics:

- Minimize latency by relying on large in-memory capacity
- Event processors are atomic and independent of each other, and so are easily scalable
- Provide an API for parsing the real-time information
- Independently deployable scripts for any node, with no centralized master node implementation

The real-time streaming pattern suggests introducing an optimum number of event processing nodes to consume the different input data from the various data sources, and introducing listeners to process the events generated by those nodes in the event processing engine. Event processing engines (event processors) have a sizeable in-memory capacity, and each event processor is triggered by a specific event. The trigger or alert is responsible for publishing the results of the in-memory big data analytics to the enterprise business process engines, which in turn redirect them to various publishing channels (mobile, CIO dashboards, and so on).
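A minimal sketch of the event processor idea just described, assuming nothing beyond the Scala standard library: events accumulate in a bounded in-memory window, and a trigger publishes an alert when a condition is met. The Event type, window size, and threshold are invented for illustration.

```scala
import scala.collection.mutable

case class Event(source: String, value: Double)

class EventProcessor(threshold: Double, publish: String => Unit) {
  private val window = mutable.Queue.empty[Event] // in-memory working set

  def onEvent(e: Event): Unit = {
    window.enqueue(e)
    if (window.size > 100) window.dequeue() // keep the window bounded

    // In-memory analytics over the current window.
    val avg = window.iterator.map(_.value).sum / window.size

    // The trigger/alert publishes results to downstream channels.
    if (avg > threshold)
      publish(f"${e.source}: average ${avg}%.2f exceeded $threshold%.2f")
  }
}

object StreamingDemo extends App {
  val processor = new EventProcessor(10.0, alert => println(s"ALERT: $alert"))
  Seq(8.0, 11.0, 14.0).foreach(v => processor.onEvent(Event("sensor-1", v)))
}
```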
Big data workload patterns

Workload patterns help to address data workload challenges associated with different domains and business cases efficiently. The big data design pattern manifests itself in the solution construct, so workload challenges can be mapped to the right architectural constructs, which then service the workload. The following diagram depicts a snapshot of the most common workload patterns and their associated architectural constructs. Workload design patterns help to simplify and decompose business use cases into workloads, which can then be methodically mapped to the various building blocks of the big data solution architecture.

Data storage layer

The data storage layer is responsible for acquiring all the data gathered from the various data sources, and it is also responsible for converting (if needed) the collected data to a format that can be analyzed. The following sections discuss the data storage layer patterns.

ACID versus BASE versus CAP

Traditional RDBMSes follow atomicity, consistency, isolation, and durability (ACID) to provide reliability for any user of the database. However, searching high volumes of big data and retrieving data from those volumes consumes an enormous amount of time if the storage enforces ACID rules. So, big data stores instead follow the basically available, soft state, eventually consistent (BASE) approach when undertaking any search in the big data space. Database theory also suggests that a NoSQL big database can predominantly satisfy only two of the three properties of consistency, availability, and partition tolerance (CAP), relaxing standards on the third. With the ACID, BASE, and CAP paradigms, the big data storage design patterns have gained momentum and purpose. We will look at those patterns in some detail in this section. The patterns are:

- Façade pattern
- NoSQL pattern
- Polyglot pattern

Façade pattern

This pattern provides a way to use existing (traditional) data warehouses along with big data storage (such as Hadoop). It can act as a façade for enterprise data warehouses and business intelligence tools. In the façade pattern, the data from the different data sources is aggregated into HDFS before any transformation, or even before loading into the traditional existing data warehouses. The façade pattern allows structured data storage even after ingestion into HDFS, in the form of structured storage in an RDBMS, in NoSQL databases, or in a memory cache. The façade pattern ensures a reduced data size, as only the necessary data resides in the structured storage, as well as faster access from the storage.

NoSQL pattern

This pattern entails getting NoSQL alternatives in place of a traditional RDBMS to facilitate the rapid access and querying of big data. The NoSQL database stores data in a columnar, non-relational style. It can store data on local disks as well as in HDFS, as it is HDFS aware. Thus, data can be distributed across data nodes and fetched very quickly. Let's look at four types of NoSQL databases in brief:

- Column-oriented DBMS: Simply called a columnar store or big table data store, it has a massive number of columns for each tuple. Each column has a column key. Column family qualifiers represent related columns so that the columns and the qualifiers are retrievable, as each column has a column key as well. These data stores are suitable for fast writes.
- Key-value pair database: A key-value database is a data store that, when presented with a simple string (the key), returns an arbitrarily large piece of data (the value). The key remains bound to the value until a new value is assigned. A key-value data store does not need a query language; it provides a way to add and remove key-value pairs. A key-value store is a dictionary kind of data store, where it has a list of words and each word has one or more definitions.
- Graph database: This is a representation of a system that contains a sequence of nodes and relationships that, when combined, create a graph. A graph represents three data fields: nodes, relationships, and properties. Some types of graph store are referred to as triple stores because of their node-relationship-node structure. You may be familiar with applications that provide evaluations of similar or likely characteristics as part of a search (for example, "a user who bought this item also bought..." is a good illustration of a graph store implementation).
- Document database: We can represent a document data store as a tree structure. Document trees have a single root element, or sometimes multiple root elements. Beneath the root element there is a sequence of branches, sub-branches, and values. Each branch can have an expression or relative path that determines the traversal path from the origin node (root) to any given branch, sub-branch, or value. Each branch may have a value associated with it. Sometimes the existence of a branch has a specific meaning, and sometimes a branch must have a given value to be interpreted correctly.

The following list summarizes some of the NoSQL use cases, providers, tools, and scenarios that might call for the NoSQL pattern. Most of these pattern implementations already ship as part of various vendor products, out of the box and plug-and-play, so any enterprise can start leveraging them quickly.

- Columnar database. Scenario: applications that need to fetch an entire related column family for a given string, for example, search engines. Vendors/tools: SAP HANA, IBM DB2 BLU, ExtremeDB, EXASOL, IBM Informix, MS SQL Server, MonetDB.
- Key-value pair database. Scenario: needle-in-a-haystack applications (refer to the big data workload patterns given in this section). Vendors/tools: Redis, Oracle NoSQL DB, Linux DBM, Dynamo, Cassandra.
- Graph database. Scenario: recommendation engines, applications that provide evaluations of "similar to / liked", for example, "a user who bought this item also bought". Vendors/tools: ArangoDB, Cayley, DataStax, Neo4j, Oracle Spatial and Graph, Apache OrientDB, Teradata Aster.
- Document database. Scenario: applications that evaluate churn management of social media data or non-enterprise data. Vendors/tools: CouchDB, Apache Elasticsearch, Informix, Jackrabbit, MongoDB, Apache Solr.

Polyglot pattern

In the polyglot pattern, traditional storage (RDBMS) and multiple other storage types (files, CMS, and so on) coexist with big data stores (NoSQL/HDFS) to solve business problems. Most modern business cases need legacy databases to coexist with the latest big data techniques; replacing the entire system is neither viable nor practical. The polyglot pattern provides an efficient way to combine and use multiple types of storage mechanism, such as Hadoop, RDBMSes, and big data appliances, in one storage solution. The preceding diagram represents the polyglot way of storing data in different storage types, such as RDBMSes, key-value stores, NoSQL databases, CMS systems, and so on. Unlike the traditional way of storing all the information in one single data source, polyglot storage routes data coming from all applications across multiple sources (RDBMS, CMS, Hadoop, and so on) into different storage mechanisms, such as in-memory, RDBMS, HDFS, CMS, and so on.

Data access layer

Data access in traditional databases involves JDBC connections and HTTP access for documents. However, in big data, conventional data access methods take too much time to fetch the data, even with cache implementations, because the volume of data is so high. So we need a mechanism to fetch the data efficiently and quickly, with a reduced development life cycle, lower maintenance costs, and so on.
Data access patterns mainly focus on accessing big data resources of two primary types:

- End-to-end user-driven APIs (access through simple queries)
- Developer APIs (access provided through API methods)

In this section, we will discuss the following data access patterns, which help achieve efficient data access, improved performance, reduced development life cycles, and low maintenance costs for broader data access:

- Connector pattern
- Lightweight stateless pattern
- Service locator pattern
- Near real-time pattern
- Stage transform pattern

The preceding diagram represents the big data architecture layouts where the big data access patterns help with data access. We discuss the whole of that mechanism in detail in the following sections.

Connector pattern

The developer API approach entails fast data transfer and data access services through APIs. It creates optimized data sets for efficient loading and analysis. Some big data appliances abstract data in NoSQL DBs, even though the underlying data is in HDFS or a custom filesystem implementation, so that data access is very efficient and fast. The connector pattern entails providing a developer API and an SQL-like query language to access the data, which yields significantly reduced development time. As we saw in the earlier diagram, big data appliances come with connector pattern implementations. The big data appliance itself is a complete big data ecosystem that supports virtualization, redundancy, and replication using protocols (RAID), and some appliances host NoSQL databases as well. The preceding diagram shows a sample connector implementation for Oracle big data appliances. The data connector can connect to Hadoop and to the big data appliance as well. It is an example of the custom implementation that we described earlier, facilitating faster data access with less development time.

Lightweight stateless pattern

This pattern entails providing data access through web services, so it is independent of platform or language implementations. The data is fetched through RESTful HTTP calls, making this pattern the most sought after in cloud deployments. WebHDFS and HttpFS are examples of lightweight stateless pattern implementations for HDFS HTTP access. They use the HTTP REST protocol. The HDFS system exposes a REST API (web services) for consumers who analyze big data. This pattern reduces the cost of ownership (pay-as-you-go) for the enterprise, as the implementations can be part of an integration Platform as a Service (iPaaS). The preceding diagram depicts a sample implementation for HDFS storage that exposes HTTP access through the HTTP web interface.
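Because WebHDFS is plain HTTP, a client needs no Hadoop libraries at all. Here is a hedged sketch in Scala using only the standard library; the NameNode host, port, file path, and user name are made-up placeholders, while the /webhdfs/v1 prefix and op=OPEN parameter are the documented WebHDFS URL form.

```scala
import scala.io.Source

object WebHdfsRead extends App {
  // WebHDFS exposes HDFS files over REST: op=OPEN reads a file.
  // Host, port, path, and user below are placeholders for a real cluster.
  val url =
    "http://namenode.example.com:9870/webhdfs/v1/logs/app.log?op=OPEN&user.name=hdfs"

  val in = Source.fromURL(url)       // a plain HTTP GET; no Hadoop client needed
  try println(in.mkString.take(200)) // print the first 200 characters of the file
  finally in.close()
}
```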
Near real-time pattern

For any enterprise to implement real-time or near real-time data access, the key challenges to be addressed are:

- Rapid determination of data: ensure rapid determination of data and make swift decisions (within a few seconds, not minutes) before the data becomes meaningless
- Rapid analysis: the ability to analyze the data in real time, spot anomalies and relate them to business events, provide visualization, and generate alerts at the moment the data arrives

Some examples of systems that need real-time data analysis are:

- Radar systems
- Customer service applications
- ATMs
- Social media platforms
- Intrusion detection systems

Storm and in-memory applications such as Oracle Coherence, Hazelcast IMDG, SAP HANA, TIBCO, Software AG (Terracotta), VMware, and Pivotal GemFire XD are some of the in-memory computing vendor/technology platforms that can implement near real-time data access pattern applications. As shown in the preceding diagram, with a multi-cache implementation at the ingestion phase, and with filtered, sorted data in multiple storage destinations (here, one of the destinations is a cache), one can achieve near real-time access. The cache can be a NoSQL database, or it can be any in-memory implementation tool, as mentioned earlier. The preceding diagram depicts a typical implementation of a log search with Solr as a search engine.

Stage transform pattern

In the big data world, a massive volume of data can get into the data store. However, not all of the data is required or meaningful in every business case. The stage transform pattern provides a mechanism for reducing the data scanned, fetching only relevant data. HDFS holds the raw data, while business-specific data sits in a NoSQL database that can provide application-oriented structures and fetch only the relevant data in the required format. Combining the stage transform pattern and the NoSQL pattern is the recommended approach in cases where a reduced data scan is the primary requirement. The preceding diagram depicts one such case for a recommendation engine, where we need a significant reduction in the amount of data scanned to improve the customer experience. The virtualization of data from HDFS to a NoSQL database, integrated with a big data appliance, is a highly recommended mechanism for rapid or accelerated data fetches.

We discussed big data design patterns by layer: the data sources and ingestion layer, the data storage layer, and the data access layer. To know more about patterns associated with object-oriented, component-based, client-server, and cloud architectures, read our book Architectural Patterns.

Why we need Design Patterns?
Implementing 5 Common Design Patterns in JavaScript (ES8)
An Introduction to Node.js Design Patterns

What is the difference between declarative and imperative programming?

Antonio Cucciniello
10 Mar 2018
4 min read
Declarative programming and imperative programming are two different approaches that offer different ways of working on a given project or application. But what is the difference between declarative and imperative programming? And when should you use one over the other?

What is declarative programming?

Let us first start with declarative programming. This is the form or style of programming in which we are most concerned with what we want as the answer, or what would be returned. Here, we as developers are not concerned with how we get there, simply with the answer that is received.

What is imperative programming?

Next, let's take a look at imperative programming. This is the form and style of programming in which we care about how we get to an answer, step by step. We want the same result ultimately, but we are telling the compiler to do things a certain way in order to achieve the correct answer we are looking for.

An analogy

If you are still confused, hopefully this analogy will clear things up. The analogy compares learning a skill yourself with outsourcing the work to someone else. If you are outsourcing work, you might not care about how the work is completed; rather, you care just what the final product or result of the work looks like. This is likened to declarative programming. You know exactly what you want and you program a function to give you just that. Let us say you did not want to outsource work for your business or project. Instead, you tell yourself that you want to learn this skill for the long term. Now, when attempting to complete the task, you care about how it is actually done. You need to know the individual steps along the way in order to get it working properly. This is similar to imperative programming.

Why you should use declarative programming

Reusability

Since the way the result is achieved does not necessarily matter here, the functions you build can be more general and potentially usable for multiple purposes, not just one. Not rewriting code can speed up the program you are currently writing and any others that use the same functionality in the future.

Reducing errors

Given that in declarative programming you tend to write functions that do not change state, as you would in functional programming, the chances of errors arising are smaller, and your application becomes more stable. The removal of side effects from your functions lets you know exactly what comes in and what comes out, allowing for a more predictable program.

Potential drawbacks of declarative programming

Lack of control

In declarative programming, you may use functions that someone else created in order to achieve the desired results. But you may need specific things to be completed behind the scenes to make your result come out properly. You do not have this control in declarative programming as you would in imperative programming.

Inefficiency

When the implementation is controlled by something else, you may have problems making your code efficient. In applications where there may be a time constraint, you will need to program the individual steps yourself to make sure your program runs as efficiently as possible.
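To make the contrast concrete before weighing the trade-offs, here is a small sketch in Scala; the numbers are arbitrary.

```scala
object SumStyles extends App {
  val numbers = List(1, 2, 3, 4, 5)

  // Imperative: spell out, step by step, how the total is accumulated.
  var total = 0
  for (n <- numbers) total += n

  // Declarative: state what you want (the sum) and let the library decide how.
  val declarativeTotal = numbers.sum

  println(total)            // 15
  println(declarativeTotal) // 15, same answer, no explicit steps
}
```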
There are benefits and disadvantages to both forms. Overall, it is entirely up to you, the programmer, to decide which style you would like to follow in your code. If you are solely focused on the data, perhaps consider using the declarative programming style. If you care more about the implementation and how something works, maybe stick to an imperative programming approach. More importantly, you can mix both styles. It is extremely flexible for you. You are in charge here.

Should you use Bootstrap or Material Design for your next web or app development project?

Guest Contributor
08 Oct 2019
8 min read
Superior user experience is becoming increasingly important for businesses, as it helps them engage users and boost brand loyalty. Front-end website and app development frameworks, namely Bootstrap and Material Design, empower developers to create websites with a robust structure and advanced functionality, thereby delivering outstanding business solutions and an unbeatable user experience. Both Twitter's Bootstrap and Google's Material Design are used by developers to create functional and high-quality websites and apps. If you are an aspiring front-end developer, here's a direct comparison between the two, so you can choose the one that's better suited for your upcoming project.

Bootstrap

Bootstrap is an open-source, intuitive, and powerful framework used for responsive, mobile-first solutions on the web. For several years, Bootstrap has helped developers create splendid mobile-ready front-end websites. In fact, Bootstrap is the most popular CSS framework, as it's easy to learn and offers a consistent design built from reusable components. Let's dive deeper into the pros and cons of Bootstrap.

Pros

High speed of development

If you have limited time for website or app development, Bootstrap is an ideal choice. It offers ready-made blocks of code that can get you started in no time, so you don't have to start coding from scratch. Bootstrap also provides ready-made themes, templates, and other resources that can be downloaded and customized to suit your needs, allowing you to create a unique website as quickly as possible.

Bootstrap is mobile first

Since July 1, 2019, Google has used mobile-friendliness as a critical ranking factor for all websites, because users prefer sites that are compatible with the screen size of the device they are using. In other words, they prefer responsive sites. Bootstrap is an ideal choice for responsive sites, as its excellent fluid grid system and responsive utility classes make the task at hand quick and easy.

Enjoys strong community support

Bootstrap has a huge number of resources available on its official website and enjoys immense support from the developer community, which helps all developers fix issues promptly. At present, Bootstrap is being developed and maintained on GitHub by Mark Otto, currently Principal Design & Brand Architect at GitHub, with nearly 19 thousand commits and 1087 contributors. The team regularly releases updates to fix any new issues and improve the effectiveness of the framework. For instance, the Bootstrap team is currently working toward version 5, which will drop jQuery for regular JavaScript, primarily because jQuery adds 30KB to the webpage size and is tricky to configure with bundlers like Webpack. Similarly, Flexbox is a new feature added in the Bootstrap 4 framework. In fact, Bootstrap version 4 is rich with features, such as a Flexbox-based grid, responsive sizing and floats, auto margins, vertical centering, and new spacing utilities. Further, you will find plenty of websites offering Bootstrap tutorials, along with a wide collection of themes, templates, plugins, and user interface kits that can be used to match your taste and the nature of the project.

Cons

All Bootstrap sites look the same

The Twitter team introduced Bootstrap with the objective of helping developers use a standardized interface to create websites within a short time.
However, one of the major drawbacks of this framework is that all websites created with it are highly recognizable as Bootstrap sites. Open Airbnb, Twitter, Apple Music, or Lyft: they all look the same, with bold headlines, rounded sans-serif fonts, and lots of negative space.

Bootstrap sites can be heavy

Bootstrap is notorious for adding unnecessary bloat to websites, as the files it generates are huge. This leads to longer loading times and battery-draining issues. Further, if you delete the unused files manually, it defeats the purpose of using the framework. So, if you use this popular front-end UI library in your project, make sure you pay extra attention to page weight and page speed.

May not be suitable for simple websites

Bootstrap may not be the right front-end framework for all types of websites, especially ones that don't need a full-fledged framework. This is because Bootstrap's theme packages are incredibly heavy, with battery-draining scripts. Also, Bootstrap has CSS weighing in at 126KB and 29KB of JavaScript, which can increase the site's loading time. In such cases, Bootstrap alternatives such as Foundation, Skeleton, Pure, and Semantic UI are adaptable and lightweight frameworks that can meet your development needs and improve your site's user-friendliness.

Material Design

Compared to Bootstrap, Material Design is harder to customize and learn. However, this design language was introduced by Google in 2014 with the objective of enhancing the design and user interface of Android apps. The language is quite popular among developers, as it offers a quick and effective approach to web development. It includes responsive transitions and animations, lighting and shadow effects, and grid-based layouts. When developing a website or app with Material Design, designers should play to its strengths but be wary of its cons. Let's see why.

Pros

Offers numerous components

Material Design offers numerous components that provide a base design, guidelines, and templates. Developers can build on these to create a suitable website or application for the business. The Material Design documentation offers the necessary information on how to use each component. Moreover, Material Design Lite is quite popular for its customization; many designers create customized components to take their projects to the next level.

Is compatible across various browsers

Both Bootstrap and Material Design have sound browser compatibility, as they work across most browsers. Material Design supports Angular Material and React Material UI, and it uses the SASS preprocessor.

Doesn't require JavaScript frameworks

Bootstrap depends on jQuery for its JavaScript components. Material Design, however, doesn't need any JavaScript frameworks or libraries to design websites or apps. In fact, the platform provides a Material Design framework that allows developers to create innovative components such as cards and badges.

Cons

The animations and vibrant colors can be distracting

Material Design makes extensive use of animated transitions and vibrant colors and images that help bring the interface to life. However, these animations can adversely affect the human brain's ability to gather information.

It is affiliated with Google

Since Material Design is a Google-promoted framework, Android is its most prominent adopter. Consequently, developers looking to create apps with a platform-independent UX may find it tough to work with Material Design.
However, when Google introduced the language, it had a broad vision for Material Design that encompassed many platforms, including iOS. The tech giant offers several Google Material Design components for iOS that can be used to render interesting effects using a flexible header, standard material colors, typography, and sliding tabs.

Carries performance overhead

Material Design's extensive use of animations carries a lot of overhead. For instance, effects like drop shadows, color fills, and transform/translate transitions can be jerky and unpleasant for regular users.

Wrapping up: Should you use Bootstrap or Material Design for your next web or app development project?

Bootstrap is great for responsive, simple, and professional websites. It enjoys immense support and documentation, making it easy for developers to work with. So, if you are working on a project that needs to be completed within a short time, opt for Bootstrap. The framework is mainly focused on creating responsive, functional, and high-quality websites and apps that enhance the user experience. Notice how many websites use Bootstrap to build responsive, mobile-first sites (screenshot sources: cssreel and Awwwards). Material Design, on the other hand, is specific as a design language and great for building websites that focus on appearance, innovative designs, and beautiful animations. You can use Material Design for your portfolio sites, for instance. The framework is pretty detailed and straightforward to use, and helps you create websites with striking effects. Check out how websites and apps use the customized themes, popups, and buttons of Material Design (screenshot sources: Nimbus 9 and Digital Trends). What do you think? Which framework works better for you, Bootstrap or Material Design? Let us know in the comments section below.

Author Bio

Gaurav Belani is a Senior SEO and Content Marketing Analyst at The 20 Media, a content marketing agency that specializes in data-driven SEO. He has more than seven years of experience in digital marketing and loves to read and write about AI, machine learning, data science, and other emerging technologies. In his spare time, he enjoys watching movies and listening to music. Connect with him on Twitter and LinkedIn.

Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more
Warp: Rust's new web framework
Learn how to Bootstrap a Spring application [Tutorial]
Bootstrap 5 to replace jQuery with vanilla JavaScript
How to use Bootstrap grid system for responsive website design?

Meet the famous 'Gang of Four' design patterns

Sugandha Lahoti
10 Jul 2018
14 min read
A design pattern is a reusable solution to a recurring problem in software design. It is not a finished piece of code, but a template that helps to solve a particular problem or family of problems. In this article, we will talk about the Gang of Four design patterns. The authors Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides initiated the concept of design patterns in software development; collectively, they are known as the Gang of Four (GOF). We are going to focus on the design patterns from the Scala point of view. All the different design patterns can be grouped into the following types:

- Creational
- Structural
- Behavioral

These three groups contain the famous Gang of Four design patterns. In the next few subsections, we will explain the main characteristics of the listed groups and briefly present the actual design patterns that fall under them. This article is an excerpt from Scala Design Patterns - Second Edition by Ivan Nikolov. In this book, you will learn how to write efficient, clean, and reusable code with Scala.

Creational design patterns

The creational design patterns deal with object creation mechanisms. Their purpose is to create objects in a way that is suitable to the current situation; without them, object creation could add unnecessary complexity and require extra knowledge. The main ideas behind the creational design patterns are as follows:

- Encapsulating knowledge about the concrete classes
- Hiding details about how objects are actually created and combined

We will be focusing on the following creational design patterns in this article:

- The abstract factory design pattern
- The factory method design pattern
- The lazy initialization design pattern
- The singleton design pattern
- The object pool design pattern
- The builder design pattern
- The prototype design pattern

The following few sections give a brief definition of what these patterns are.

The abstract factory design pattern

This is used to encapsulate a group of individual factories that have a common theme. When used, the developer creates a specific implementation of the abstract factory and uses its methods in the same way as in the factory design pattern to create objects. It can be thought of as another layer of abstraction that helps to instantiate classes.

The factory method design pattern

This design pattern deals with the creation of objects without explicitly specifying the actual class that the instance will have; the class could be decided at runtime based on many factors, such as the operating system, different data types, or input parameters. It gives developers the peace of mind of just calling a method rather than invoking a concrete constructor.

The lazy initialization design pattern

This design pattern is an approach that delays the creation of an object or the evaluation of a value until the first time it is needed. It is much more simplified in Scala than in an object-oriented language such as Java.

The singleton design pattern

This design pattern restricts the creation of a specific class to just one object. If more than one class in the application tries to use such an instance, then this same instance is returned for everyone. This is another design pattern that can be easily achieved with the use of basic Scala features.
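As a quick illustration of the last two claims, here is a minimal sketch: object gives a language-level singleton, and lazy val delays evaluation until first use. The Registry name and its contents are invented for the example.

```scala
object Registry { // a singleton: the language guarantees exactly one instance
  lazy val config: Map[String, String] = { // lazy initialization: runs on first access only
    println("loading configuration...")    // illustrative side effect to show when it runs
    Map("env" -> "dev")
  }
}

object Main extends App {
  println(Registry.config("env")) // prints "loading configuration..." and then "dev"
  println(Registry.config("env")) // prints "dev" only; already initialized
}
```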
The object pool design pattern

This design pattern uses a pool of objects that are already instantiated and ready for use. Whenever someone requires an object from the pool, it is returned, and after the user is finished with it, it is put back into the pool manually or automatically. A common use for pools is database connections, which are generally expensive to create; hence, they are created once and then served to the application on request.

The builder design pattern

The builder design pattern is extremely useful for objects with many possible constructor parameters that would otherwise require developers to create many overloads for the different scenarios an object could be created in. This is different from the factory design pattern, which aims to enable polymorphism. Many modern libraries employ this design pattern. As we will see later, Scala can achieve this pattern really easily.

The prototype design pattern

This design pattern allows object creation using a clone() method on an already created instance. It can be used in cases where a specific resource is expensive to create or when the abstract factory pattern is not desired.

Structural design patterns

Structural design patterns exist to help establish the relationships between different entities in order to form larger structures. They define how each component should be structured so that it has very flexible interconnecting modules that can work together in a larger system. The main features of structural design patterns include the following:

- The use of composition to combine the implementations of multiple objects
- Helping to build a large system made of various components by maintaining a high level of flexibility

In this article, we will focus on the following structural design patterns:

- The adapter design pattern
- The decorator design pattern
- The bridge design pattern
- The composite design pattern
- The facade design pattern
- The flyweight design pattern
- The proxy design pattern

The next subsections shed some light on what these patterns are about.

The adapter design pattern

The adapter design pattern allows the interface of an existing class to be used from another interface. Imagine that there is a client who expects your class to expose a doWork() method. You might have the implementation ready in another class, but the method is called differently and is incompatible; it might require extra parameters too. It could also be a library that the developer doesn't have access to modify. This is where the adapter can help, by wrapping the functionality and exposing the required methods. The adapter is useful for integrating existing components. In Scala, the adapter design pattern can be easily achieved using implicit classes.

The decorator design pattern

Decorators are a flexible alternative to subclassing. They allow developers to extend the functionality of an object without affecting other instances of the same class. This is achieved by wrapping an object of the extended class into one that extends the same class and overrides the methods whose functionality is supposed to change. Decorators in Scala can be built much more easily using another design pattern called stackable traits.
The bridge design pattern

The purpose of the bridge design pattern is to decouple an abstraction from its implementation so that the two can vary independently. It is useful when a class and its functionality vary a lot. The bridge reminds us of the adapter pattern, but the difference is that the adapter pattern is used when something already exists and cannot be changed, while the bridge design pattern is used while things are being built. It helps us to avoid ending up with multiple concrete classes that would be exposed to the client. You will get a clearer understanding when we delve deeper into the topic, but for now, let's imagine that we want to have a FileReader class that supports multiple different platforms. The bridge will help us end up with a FileReader that uses a different implementation depending on the platform. In Scala, we can use self-types in order to implement a bridge design pattern.

The composite design pattern

The composite is a partitioning design pattern that represents a group of objects to be treated as a single object. It allows developers to treat individual objects and compositions uniformly, and to build complex hierarchies without complicating the source code. An example of a composite is a tree structure, where a node can contain other nodes, and so on.

The facade design pattern

The purpose of the facade design pattern is to hide the complexity of a system and its implementation details by providing the client with a simpler interface to use. This also helps to make the code more readable and to reduce the dependencies of the outside code. It works as a wrapper around the system being simplified and, of course, it can be used in conjunction with some of the other design patterns mentioned previously.

The flyweight design pattern

The flyweight design pattern provides an object that is used to minimize memory usage by being shared throughout the application. This object should carry as much of the shared data as possible. A common example is a word processor, where each character's graphical representation is shared with the other occurrences of the same character; only the position of each character is stored locally.

The proxy design pattern

The proxy design pattern allows developers to provide an interface to other objects by wrapping them. The proxy can also provide additional functionality, for example, security or thread safety. Proxies can be used together with the flyweight pattern, where the references to shared objects are wrapped inside proxy objects.

Behavioral design patterns

Behavioral design patterns increase communication flexibility between objects based on the specific ways they interact with each other. Where creational patterns mostly describe a moment in time during creation and structural patterns describe a more or less static structure, behavioral patterns describe a process or flow. They simplify this flow and make it more understandable.
The main features of behavioral design patterns are as follows:

- What is being described is a process or flow
- The flows are simplified and made understandable
- They accomplish tasks that would be difficult or impossible to achieve with objects alone

In this article, we will focus our attention on the following behavioral design patterns:

- The value object design pattern
- The null object design pattern
- The strategy design pattern
- The command design pattern
- The chain of responsibility design pattern
- The interpreter design pattern
- The iterator design pattern
- The mediator design pattern
- The memento design pattern
- The observer design pattern
- The state design pattern
- The template method design pattern
- The visitor design pattern

The following subsections give brief definitions of the aforementioned behavioral design patterns.

The value object design pattern

Value objects are immutable, and their equality is based not on their identity but on their fields being equal. They can be used as data transfer objects, and they can represent dates, colors, money amounts, numbers, and more. Their immutability makes them really useful in multithreaded programming. The Scala programming language promotes immutability, and value objects occur there naturally.

The null object design pattern

Null objects represent the absence of a value and define a neutral behavior. This approach removes the need to check for null references and makes the code much more concise. Scala adds the concept of optional values, which can replace this pattern completely.

The strategy design pattern

The strategy design pattern allows algorithms to be selected at runtime. It defines a family of interchangeable encapsulated algorithms and exposes a common interface to the client. Which algorithm is chosen can depend on various factors determined while the application runs. In Scala, we can simply pass a function as a parameter to a method, and depending on the function, a different action will be performed; see the sketch after this list of definitions.

The command design pattern

This design pattern represents an object that is used to store information about an action that needs to be triggered at a later time. The information includes the following:

- The method name
- The owner of the method
- Parameter values

The client then decides which commands need to be executed, and when, by the invoker. This design pattern can easily be implemented in Scala using the by-name parameters feature of the language.

The chain of responsibility design pattern

The chain of responsibility is a design pattern in which the sender of a request is decoupled from its receiver. This makes it possible for multiple objects to handle the request and keeps the logic nicely separated. The receivers form a chain along which they pass the request; if possible, a receiver processes it, and if not, it passes the request to the next receiver. There are variations in which a handler might dispatch the request to multiple other handlers at the same time. This somewhat reminds us of function composition, which in Scala can be achieved using the stackable traits design pattern.

The interpreter design pattern

The interpreter design pattern is based on the ability to characterize a well-known domain with a language with a strict grammar. It defines classes for each grammar rule in order to interpret sentences in the given language. These classes are likely to represent hierarchies, as grammar is usually hierarchical as well. Interpreters can be used in different parsers, for example, for SQL or other languages.
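Here is the promised sketch for the strategy pattern: a strategy can be nothing more than a function value passed in at runtime. The names are illustrative.

```scala
object StrategyDemo extends App {
  // Two interchangeable strategies with the same shape.
  val ascending: (Int, Int) => Boolean = _ < _
  val descending: (Int, Int) => Boolean = _ > _

  // The algorithm receives its strategy as an ordinary parameter.
  def sorted(xs: List[Int], strategy: (Int, Int) => Boolean): List[Int] =
    xs.sortWith(strategy)

  val data = List(3, 1, 2)
  println(sorted(data, ascending))  // List(1, 2, 3)
  println(sorted(data, descending)) // List(3, 2, 1)
}
```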
The iterator design pattern

The iterator design pattern is when an iterator is used to traverse a container and access its elements. It helps to decouple containers from the algorithms performed on them. An iterator should provide sequential access to the elements of an aggregate object without exposing the internal representation of the iterated collection.

The mediator design pattern

This pattern encapsulates the communication between different classes in an application. Instead of interacting directly with each other, objects communicate through the mediator, which reduces the dependencies between them, lowers the coupling, and makes the overall application easier to read and maintain.

The memento design pattern

This pattern provides the ability to roll back an object to its previous state. It is implemented with three objects: originator, caretaker, and memento. The originator is the object with the internal state; the caretaker modifies the originator; and the memento is an object that contains the state that the originator returns. The originator knows how to handle a memento in order to restore its previous state.

The observer design pattern

This design pattern allows the creation of publish/subscribe systems. There is a special object called the subject that automatically notifies all the observers when there are any changes in its state. This design pattern is popular in various GUI toolkits and generally wherever event handling is needed. It is also related to reactive programming, which is enabled by libraries such as Akka. We will see an example of this towards the end of this book.

The state design pattern

This design pattern is similar to the strategy design pattern, and it uses a state object to encapsulate different behavior for the same object. It improves the code's readability and maintainability by avoiding the use of large conditional statements.

The template method design pattern

This design pattern defines the skeleton of an algorithm in a method and then defers some of the actual steps to subclasses. It allows developers to alter some of the steps of an algorithm without having to modify its structure. An example of this could be a method in an abstract class that calls other abstract methods, which will be defined in the children.

The visitor design pattern

The visitor design pattern represents an operation to be performed on the elements of an object structure. It allows developers to define a new operation without changing the original classes. Scala can minimize the verbosity of this pattern, compared to the pure object-oriented way of implementing it, by passing functions to methods.

Choosing a design pattern

As we have already seen, there is a huge number of design patterns. In many cases, they are suitable to be used in combination as well. Unfortunately, there is no definite answer regarding how to choose the concept for designing our code. Many factors can affect the final decision, and you should ask yourself the following questions:

- Is this piece of code going to be fairly static, or will it change in the future?
- Do we have to dynamically decide which algorithms to use?
- Is our code going to be used by others? Do we have an agreed interface?
- Which libraries are we planning to use, if any?
- Are there any special performance requirements or limitations?

This is by no means an exhaustive list of questions. There is a huge number of factors that could dictate our decision on how we build our systems.
Whichever patterns we settle on, it is really important to have a clear specification, and if something seems to be missing, it should always be checked first. By now, we have a fair idea of what a design pattern is and how it can affect the way we write our code. We have iterated through the most famous Gang of Four design patterns out there, and we have outlined the main differences between them. To learn more about how to incorporate functional patterns effectively in real-life applications, read the book Scala Design Patterns - Second Edition.

Implementing 5 Common Design Patterns in JavaScript (ES8)
An Introduction to Node.js Design Patterns

What is the difference between functional and object oriented programming?

Antonio Cucciniello
17 Sep 2017
5 min read
There are two very popular programming paradigms in software development that developers design and program to. They are known as object oriented programming and functional programming. You've probably heard of these terms before, but what exactly are they, and what is the difference between functional and object oriented programming? Let's take a look.

What is object oriented programming?

Object oriented programming is a programming paradigm in which you program using objects to represent the things you are programming about (sometimes real world things). These objects could be data structures. The objects hold data about them in attributes. The attributes in the objects are manipulated through methods or functions that are given to the object. For instance, we might have a Person object that represents all of the data a person would have: weight, height, skin color, hair color, hair length, and so on. Those would be the attributes. Then the Person object would also have things that it can do, such as: pick a box up, put a box down, eat, sleep, and so on. These would be the functions that work with the data the object stores.

Engineers who program using object oriented design say that it is a style of programming that allows you to model real world scenarios much more simply. This allows for a good transition from requirements to code that works the way the customer or user wants it to. Some examples of object oriented languages include C++, Java, Python, C#, Objective-C, and Swift. Want to learn object oriented programming? We recommend you start with Learning Object Oriented Programming.

What is functional programming?

Functional programming is the form of programming that attempts to avoid changing state and mutable data. In a functional program, the output of a function should always be the same, given the same exact inputs to the function. This is because the output of a function in functional programming relies purely on the arguments of the function, and there is no magic happening behind the scenes. This is called eliminating side effects in your code. For example, if you call the function getSum(), it calculates the sum of two inputs and returns the sum. Given the same inputs for x and y, we will always get the same output for the sum. This makes the functions of a program extremely predictable. Each small function does its part and only its part. It allows for very modular and clean code that all works together in harmony. This also makes unit testing easier. Some examples of functional programming languages include Lisp, Clojure, and F#.

Problems with object oriented programming

There are a few problems with object oriented programming. Firstly, it is known to be less reusable. Because some of your functions depend on the class that is using them, it is hard to use some functions with another class. It is also known to be typically less efficient and more complex to deal with. Plenty of times, some object oriented designs are made to model large architectures and can be extremely complicated.

Problems with functional programming

Functional programming is not without its flaws either. It really takes a different mindset to approach your code from a functional standpoint. It's easy to think in object oriented terms, because it is similar to how the object being modeled happens in the real world. Functional programming is all about data manipulation. Converting a real world scenario to just data can take some extra thinking. Due to the difficulty of learning to program this way, there are fewer people who program using this style, which could make it hard to collaborate with someone else or learn from others, because there will naturally be less information on the topic.
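To make the "same inputs, same output" idea concrete, here is what the getSum() function mentioned above could look like as a pure function. The article ties it to no particular language, so this Scala sketch and its signature are assumptions made for illustration:

object PureFunctions {
  // A pure function: the result depends only on the arguments,
  // and calling it never reads or changes any outside state.
  def getSum(x: Int, y: Int): Int = x + y

  def main(args: Array[String]): Unit = {
    println(getSum(2, 3)) // 5
    println(getSum(2, 3)) // same inputs, same output, every time
  }
}

Because nothing outside the function is touched, getSum can be unit tested in isolation and called safely from anywhere, including from multiple threads.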
A comparison between functional and object oriented programming

Both programming concepts share the goal of creating easily understandable programs that are free of bugs and can be developed fast. Both concepts have different methods for storing data and for manipulating that data. In object oriented programming, you store the data in the attributes of objects and have functions that work for that object and do the manipulation. In functional programming, we view everything as a data transformation. Data is not stored in objects; it is transformed by creating new versions of it and manipulating it using one of the many functions.

I hope you now have a clearer picture of the difference between functional and object oriented programming. They can be used separately or mixed to some degree to suit your needs. Ultimately, you should take into consideration the advantages and disadvantages of both before making that decision.

Antonio Cucciniello is a Software Engineer with a background in C, C++, and JavaScript (Node.js) from New Jersey. His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here.

Common problems in Delphi parallel programming

Pavan Ramchandani
27 Jul 2018
12 min read
This tutorial explains how to find performance bottlenecks and apply the correct algorithm to fix them when working with Delphi. It also teaches you how to improve your algorithms before taking you through parallel programming. The article is an excerpt from a book written by Primož Gabrijelčič, titled Delphi High Performance.

Never access the UI from a background thread

Let's start with the biggest source of hidden problems: manipulating a user interface from a background thread. This is, surprisingly, quite a common problem, even more so as all Delphi resources on multithreaded programming will simply say never to do that. Still, it doesn't seem to reach some programmers, and they will always try to find an excuse to manipulate a user interface from a background thread. Indeed, there may be situations where VCL or FireMonkey can be manipulated from a background thread, but you'll be treading on thin ice if you do that. Even if your code works with the current Delphi, nobody can guarantee that changes in graphical libraries introduced in future Delphis won't break your code. It is always best to cleanly decouple background processing from the user interface.

Let's look at an example which nicely demonstrates the problem. The ParallelPaint demo has a simple form, with eight TPaintBox components and eight threads. Each thread runs the same drawing code and draws a pattern into its own TPaintBox. As every thread accesses only its own Canvas, and no other user interface components, a naive programmer would therefore assume that drawing into paint boxes directly from background threads would not cause problems. A naive programmer would be very much mistaken. If you run the program, you will notice that although the code paints constantly into some of the paint boxes, others stop being updated after some time. You may even get a Canvas does not allow drawing exception. It is impossible to tell in advance which threads will continue painting and which will not. The following image shows an example of an output. The first two paint boxes in the first row, and the last one in the last row, were no longer being updated when I grabbed the image.

The lines are drawn in the DrawLine method. It does nothing special; it just sets the color for that line and draws it. Still, that is enough to break the user interface when it is called from multiple threads at once, even though each thread uses its own Canvas:

procedure TfrmParallelPaint.DrawLine(canvas: TCanvas; p1, p2: TPoint; color: TColor);
begin
  Canvas.Pen.Color := color;
  Canvas.MoveTo(p1.X, p1.Y);
  Canvas.LineTo(p2.X, p2.Y);
end;

Is there a way around this problem? Indeed there is. Delphi's TThread class implements a method, Queue, which executes some code in the main thread. Queue takes a procedure or anonymous method as a parameter and sends it to the main thread. After some short time, the code is then executed in the main thread. It is impossible to tell how much time will pass before the code is executed, but that delay will typically be very short, in the order of milliseconds. As Queue accepts an anonymous method, we can use the magic of variable capturing and write the corrected code, as shown here:

procedure TfrmParallelPaint.QueueDrawLine(canvas: TCanvas; p1, p2: TPoint; color: TColor);
begin
  // The anonymous method captures canvas, p1, p2, and color,
  // and runs later in the main thread.
  TThread.Queue(nil,
    procedure
    begin
      Canvas.Pen.Color := color;
      Canvas.MoveTo(p1.X, p1.Y);
      Canvas.LineTo(p2.X, p2.Y);
    end);
end;

In older Delphis you don't have such a nice Queue method, but only a version of Synchronize that accepts a normal method.
If you have to use this method, you cannot count on anonymous method mechanisms to handle parameters. Rather, you have to copy them to fields and then Synchronize a parameterless method operating on those fields. The following code fragment shows how to do that:

procedure TfrmParallelPaint.SynchronizedDraw;
begin
  // Draws using the values previously copied into fields.
  FCanvas.Pen.Color := FColor;
  FCanvas.MoveTo(FP1.X, FP1.Y);
  FCanvas.LineTo(FP2.X, FP2.Y);
end;

procedure TfrmParallelPaint.SyncDrawLine(canvas: TCanvas; p1, p2: TPoint; color: TColor);
begin
  // Copy the parameters to fields, then synchronize.
  FCanvas := canvas;
  FP1 := p1;
  FP2 := p2;
  FColor := color;
  TThread.Synchronize(nil, SynchronizedDraw);
end;

If you run the corrected program, the final result should always be similar to the following image, with all eight TPaintBox components showing a nicely animated image.

Simultaneous reading and writing

The next situation that I regularly see while looking at badly written parallel code is simultaneous reading and writing from/to a shared data structure, such as a list. The SharedList program demonstrates how things can go wrong when you share a data structure between threads. Actually, scrap that; it shows how things will go wrong if you do that. This program creates a shared list, FList: TList<Integer>. Then it creates one background thread, which runs the method ListWriter, and multiple background threads, each running the ListReader method. Indeed, you can run the same code in multiple threads. This is perfectly normal behavior and is sometimes extremely useful.

The ListReader method is incredibly simple. It just reads all the elements in the list and does that over and over again. As I've mentioned before, the code in my examples makes sure that problems in multithreaded code really do occur but, because of that, my demo code most of the time also looks terribly stupid. In this case, the reader just reads and reads the data, because that's the best way to expose the problem:

procedure TfrmSharedList.ListReader;
var
  i, j, a: Integer;
begin
  for i := 1 to CNumReads do
    for j := 0 to FList.Count - 1 do
      a := FList[j];
end;

The ListWriter method is a bit different. It also loops around, but it also sleeps a little inside each loop iteration. After the Sleep, the code either adds to the list or deletes from it. Again, this is designed so that the problem is quick to appear:

procedure TfrmSharedList.ListWriter;
var
  i: Integer;
begin
  for i := 1 to CNumWrites do
  begin
    Sleep(1);
    if FList.Count > 10 then
      FList.Delete(Random(10))
    else
      FList.Add(Random(100));
  end;
end;

If you start the program in a debugger and click on the Shared lists button, you'll quickly get an EArgumentOutOfRangeException exception. A look at the stack trace will show that it appears in the line a := FList[j];. In retrospect, this is quite obvious. The code in ListReader starts the inner for loop and reads FList.Count. At that time, FList has 11 elements, so Count is 11. At the end of the loop, the code tries to read FList[10], but in the meantime, ListWriter has deleted one element, and the list now only has 10 elements. Accessing element [10] therefore raises an exception. We'll return to this topic later, in the section about locking. For now, you should just keep in mind that sharing data structures between threads causes problems.

Sharing a variable

OK, so rule number two is "shared structures bad". What about sharing a simple variable? Nothing can go wrong there, right? Wrong! There are actually multiple ways something can go wrong. The program IncDec demonstrates one of the bad things that can happen.
The code contains two methods, IncValue and DecValue. The former increments a shared FValue: integer; some number of times, and the latter decrements it the same number of times:

procedure TfrmIncDec.IncValue;
var
  i: integer;
  value: integer;
begin
  for i := 1 to CNumRepeat do
  begin
    // Read-modify-write in two steps; not atomic!
    value := FValue;
    FValue := value + 1;
  end;
end;

procedure TfrmIncDec.DecValue;
var
  i: integer;
  value: integer;
begin
  for i := 1 to CNumRepeat do
  begin
    value := FValue;
    FValue := value - 1;
  end;
end;

A click on the Inc/Dec button sets the shared value to 0, runs IncValue, then DecValue, and logs the result:

procedure TfrmIncDec.btnIncDec1Click(Sender: TObject);
begin
  FValue := 0;
  IncValue;
  DecValue;
  LogValue;
end;

I know you can all tell what FValue will hold at the end of this program. Zero, of course. But what will happen if we run IncValue and DecValue in parallel? That is, actually, hard to predict! A click on the Multithreaded button does almost the same, except that it runs IncValue and DecValue in parallel. How exactly that is done is not important at the moment (but feel free to peek into the code if you're interested):

procedure TfrmIncDec.btnIncDec2Click(Sender: TObject);
begin
  FValue := 0;
  RunInParallel(IncValue, DecValue);
  LogValue;
end;

Running this version of the code may still sometimes put zero in FValue, but that will be extremely rare. You most probably won't be able to see that result unless you are very lucky. Most of the time, you'll just get a seemingly random number from the range -10,000,000 to 10,000,000 (10,000,000 being the value of the CNumRepeat constant). In the following image, the first number is the result of the single-threaded code, while all the rest were calculated by the parallel version of the algorithm.

To understand what's going on, you should know that Windows (like all other operating systems) does many things at once. At any given time, there are hundreds of threads running in different programs, and they are all fighting for the limited number of CPU cores. As our program is the active one (it has focus), its threads will get most of the CPU time, but they'll still sometimes be paused for some amount of time so that other threads can run. Because of that, it can easily happen that IncValue reads the current value of FValue into value (let's say that the value is 100) and is then paused. DecValue reads the same value and then runs for some time, decrementing FValue. Let's say that it gets it down to -20,000. (That is just a number without any special meaning.) After that, the IncValue thread is awakened. It should increment the value to -19,999, but instead of that it adds 1 to 100 (stored in value), gets 101, and stores that into FValue. Ka-boom! In each repetition of the program, this will happen at different times and will cause a different result to be calculated.

You may complain that the problem is caused by the two-stage increment and decrement, but you'd be wrong. I dare you: go ahead, change the code so that it modifies FValue with Inc(FValue) and Dec(FValue), and it still won't work correctly. Well, I hear you say, so I shouldn't even modify one variable from two threads at the same time? I can live with that. But surely it is OK to write into a variable from one thread and read from another? The answer, as you can probably guess given the general tendency of this section, is again: no, you may not.
There are some situations where this is OK (for example, when a variable is only one byte long) but, in general, even simultaneous reading and writing can be a source of weird problems. The ReadWrite program demonstrates this problem. It has a shared buffer, FBuf: Int64, and a pointer variable used to read and modify the data, FPValue: PInt64. At the beginning, the buffer is initialized to an easily recognized number and the pointer variable is set to point to the buffer:

FPValue := @FBuf;
FPValue^ := $7777777700000000;

The program runs two threads. One just reads from the location and stores all the read values into a list. This list is created with its Sorted and Duplicates properties set in a way that prevents it from storing duplicate values:

procedure TfrmReadWrite.Reader;
var
  i: integer;
begin
  for i := 1 to CNumRepeat do
    FValueList.Add(FPValue^);
end;

The second thread repeatedly writes two values into the shared location:

procedure TfrmReadWrite.Writer;
var
  i: integer;
begin
  for i := 1 to CNumRepeat do
  begin
    FPValue^ := $7777777700000000;
    FPValue^ := $0000000077777777;
  end;
end;

At the end, the contents of the FValueList list are logged on the screen. We would expect to see only two values: $7777777700000000 and $0000000077777777. In reality, we see four, as the following screenshot demonstrates.

The reason for that strange result is that Intel processors in 32-bit mode can't write a 64-bit number (such as an Int64) in one step. In other words, reading and writing 64-bit numbers in 32-bit code is not atomic. When multithreading programmers talk about something being atomic, they mean that an operation will execute in one indivisible step. Any other thread will either see the state before the operation or the state after the operation, but never some undefined intermediate state.

How do the values $7777777777777777 and $0000000000000000 appear in the test application? Let's say that FPValue^ contains $7777777700000000. The code then starts writing $0000000077777777 into FPValue^ by first storing $77777777 into the bottom four bytes. After that, it starts writing $00000000 into the upper four bytes of FPValue^, but in the meantime the Reader reads the value and gets $7777777777777777. In a similar way, the Reader will sometimes see $0000000000000000 in FPValue^.

We'll look into a way to solve this situation immediately but, in the meantime, you may wonder: when is it okay to read/write from/to a variable at the same time? Sadly, the answer is: it depends. Not just on the CPU family (Intel and ARM processors behave completely differently), but also on the specific architecture used in a processor. For example, older and newer Intel processors may not behave the same in that respect. You can always depend on access to byte-sized data being atomic, but that is all. Access (reads and writes) to larger quantities of data (words, integers) is atomic only if the data is correctly aligned. You can access word-sized data atomically if it is word aligned, and integer data if it is double-word aligned. If the code was compiled in 64-bit mode, you can also atomically access Int64 data if it is quad-word aligned. When you are not using data packing (such as packed records), the compiler will take care of alignment, and data access should automatically be atomic. You should, however, still check the alignment in code, if nothing else to prevent stupid programming errors.
If you want to write and read larger amounts of data, modify the data, or work on shared data structures, correct alignment will not be enough. You will need to introduce synchronization into your program. If you found this post useful, do check out the book Delphi High Performance to learn more about the intricacies of high-performance programming with Delphi.

Delphi: memory management techniques for parallel programming
Parallel Programming Patterns
Concurrency programming 101: Why do programmers hang by a thread?

Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter

Guest Contributor
14 Jun 2019
5 min read
Apple recently announced a new declarative UI framework for its operating systems, SwiftUI, at its annual developer conference, WWDC 2019. SwiftUI will power all of Apple's devices (MacBooks, watches, TVs, iPads, and smartphones). You can integrate SwiftUI views with objects from the UIKit, AppKit, and WatchKit frameworks to take further advantage of platform-specific functionality. It is said to make developers more productive and to save effort while writing code. The SwiftUI documentation states: "Declare the content and layout for any state of your view. SwiftUI knows when that state changes, and updates your view's rendering to match."

This means that developers simply have to describe the current UI state in response to events and leave the in-between transitions to the framework. The UI updates automatically as the state changes.

Benefits of a declarative UI language

A declarative UI language expresses the logic of a computation without describing its control flow. You describe what elements you need and how they should look, without having to worry about their exact position and visual style. Some of the benefits of a declarative UI language are:

Increased speed of development.
Seamless integration between designers and coders.
Forces separation between logic and presentation.
Changes in the UI don't require recompilation.

SwiftUI's declarative syntax is quite similar to Google's Flutter, which also uses declarative UI programming. Flutter contains beautiful widgets with captivating logos, fonts, and expressive style. The use of Flutter has increased significantly in 2019, and it is among the fastest-growing skills in the developer community. Similar to Flutter, SwiftUI provides layout structure, controls, and views for the application's user interface.

This is Apple's first step into declarative UI programming, and it has described SwiftUI as a modern way to declare user interfaces. With the imperative method, developers had to manually construct a fully functional UI entity and later change it using methods and setters. In SwiftUI, the application layout just needs to be described once, vastly reducing the code complexity. Apart from the declarative UI, SwiftUI also comes with Xcode, Apple's integrated development environment, which contains the software development tools for the OS. If any code modifications are made inside Xcode, developers can now preview the code in real time and tweak parameters. SwiftUI also features dark mode, drag-and-drop building tools in Xcode, and interface layout tools. Languages such as Hebrew and Arabic are also incorporated. However, one of the drawbacks of SwiftUI is that it only supports apps that move forward with iOS 13. It is a somewhat limited tool in this sense, and production would take at least a year or two if an older iOS version is to be supported.

SwiftUI vs Flutter development

Apple's answer to Google is simple here. Flutter is compatible with both Android and iOS, whereas SwiftUI is a new member of Apple's ecosystem. Developers use Flutter for cross-platform apps with a single codebase. This highlights that Flutter is pushing other languages to adopt its simple way of developing UIs. Now, with the introduction of SwiftUI, which works on the same mechanism as Flutter, Apple has announced itself to the world of declarative UI programming. What does it mean for developers who build exclusively for iOS? Well, now they can make native apps for clients who do not prefer the Flutter way.
SwiftUI will probably reduce the incentive for Apple-only developers to adopt Flutter. Many have pointed out that Apple has just introduced a new framework for essentially the same UI experience. We will have to wait and see what SwiftUI has in store for the longer run. Developers in communities like Reddit and others are actively sharing their thoughts on the recent arrival of SwiftUI. Many agree on the fact that "SwiftUI is flutter with no Android support". Developers who target the Apple-only platform through SwiftUI will eventually return to Flutter to target all other platforms, which means Flutter could benefit from SwiftUI, and not the other way round.

The popularity of React Native is a no-brainer. Native mobile app development for iOS and Android is always high on cost, and companies usually work with two different teams. Cross-platform solutions drastically bridge the gaps in terms of development costs. One could think of Flutter as React Native with full support for native features (you don't have to depend on native platforms for solutions, and Flutter delivers performance similar to native). Like React Native, Flutter uses reactive-style views. However, while React Native transpiles to native widgets, Flutter compiles all the way to native code.

Conclusion

SwiftUI is about making development interactive, faster, and easier. The latest inbuilt graphical UI design tool allows designers to assemble a user interface without having to write any code. Once the code is modified, it instantly appears in the visual design tool. Code can be assembled, redefined, and tested in real time, with previews that can run on a range of Apple's devices. However, SwiftUI is still under development and will take time to mature. On the other hand, Flutter app development services continue to deliver scalable solutions for startups and enterprises. Building native apps is not cheap, and Flutter, with the same native feel, provides cost-effective services. It remains a competitive cross-platform framework, with or without SwiftUI's presence.

Author Bio

Keval Padia is the CEO of Nimblechapps, a prominent mobile app development company based in India. He has a good knowledge of mobile app design and user experience design. He follows different tech blogs, and current updates in the field lure him to express his views and thoughts on certain topics.

Developer's guide to Software architecture patterns

Sugandha Lahoti
06 Aug 2018
11 min read
As we all know, patterns are a kind of simplified and smarter solution for a repetitive concern or recurring challenge in any field of importance. In the field of software engineering, there are primarily design, integration, and architecture patterns. In this article, we will cover the need for software patterns and describe the most prominent and dominant software architecture patterns. This article is an excerpt from Architectural Patterns by Pethuru Raj, Anupama Raman, and Harihara Subramanian.

Why software patterns?

There is a bevy of noteworthy transformations happening in the IT space, especially in software engineering. The complexity of recent software solutions is continuously going up due to the continued evolution of business expectations. With complex software, not only does the software development activity become very difficult, but the software maintenance and enhancement tasks also become tedious and time-consuming. Software patterns come as a soothing factor for software architects, developers, and operators.

Types of software patterns

Several newer types of patterns are emerging in order to cater to different demands. This section throws some light on these.

An architecture pattern expresses a fundamental structural organization or schema for complex systems. It provides a set of predefined subsystems, specifies their unique responsibilities, and includes the decision-enabling rules and guidelines for organizing the relationships between them. The architecture pattern for a software system illustrates the macro-level structure for the whole software solution.

A design pattern provides a scheme for refining the subsystems or components of a system, or the relationships between them. It describes a commonly recurring structure of communicating components that solves a general design problem within a particular context. The design pattern for a software system prescribes the ways and means of building the software components.

There are other patterns, too. The dawn of the big data era mandates distributed computing. The monolithic and massive nature of enterprise-scale applications demands microservices-centric applications. Here, application services need to be found and integrated in order to give an integrated result and view. Thus, there are integration-enabled patterns. Similarly, there are patterns for simplifying software deployment and delivery. Other complex actions are being addressed through the smart leverage of simple as well as composite patterns.

Software architecture patterns

Let's look at some of the prominent and dominant software architecture patterns.

Object-oriented architecture (OOA)

Objects are the fundamental and foundational building blocks for all kinds of software applications. Therefore, the object-oriented architectural style has become the dominant one for producing object-oriented software applications. Ultimately, a software system is viewed as a dynamic collection of cooperating objects instead of a set of routines or procedural instructions. We know that there are proven object-oriented programming methods and enabling languages, such as C++, Java, and so on. The properties of inheritance, polymorphism, encapsulation, and composition provided by OOA come in handy in producing highly modular (highly cohesive and loosely coupled), usable, and reusable software applications. The object-oriented style is suitable if we want to encapsulate logic and data together in reusable components. Also, complex business logic that requires abstraction and dynamic behavior can effectively use this OOA.
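As a quick illustration of the encapsulation that the OOA style relies on, here is a minimal Scala sketch; the Account class is invented for this example, not taken from the book:

// Encapsulation: the state is private and is manipulated only through methods.
class Account(private var balance: Double) {
  def deposit(amount: Double): Unit =
    if (amount > 0) balance += amount

  def withdraw(amount: Double): Boolean =
    if (amount > 0 && amount <= balance) { balance -= amount; true }
    else false

  def currentBalance: Double = balance
}

object OoaDemo {
  def main(args: Array[String]): Unit = {
    val account = new Account(100.0)
    account.deposit(50.0)
    account.withdraw(30.0)
    println(account.currentBalance) // 120.0
  }
}

Callers can never put the balance into an invalid state directly; every change goes through the object's own methods, which is exactly the loose coupling and high cohesion the pattern aims for.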
Component-based assembly (CBD) architecture

Monolithic and massive applications can be partitioned into multiple interactive and smaller components. When components are found, bound, and composed, we get full-fledged software applications. CBA does not focus on issues such as communication protocols and shared states. Components are reusable, replaceable, substitutable, extensible, independent, and so on. Design patterns such as the dependency injection (DI) pattern or the service locator pattern can be used to manage dependencies between components and promote loose coupling and reuse. Such patterns are often used to build composite applications that combine and reuse components across multiple applications.

Aspect-oriented programming (AOP) aspects are another popular application building block. By deft maneuvering of this unit of development, different applications can be built and deployed. The AOP style aims to increase modularity by allowing the separation of cross-cutting concerns. AOP includes programming methods and tools that support the modularization of concerns at the level of the source code.

Agent-oriented software engineering (AOSE) is a programming paradigm where the construction of the software is centered on the concept of software agents. In contrast to the proven object-oriented programming, which has objects (providing methods with variable parameters) at its core, agent-oriented programming has externally specified agents with interfaces and messaging capabilities at its core. They can be thought of as abstractions of objects. Exchanged messages are interpreted by receiving agents in a way specific to their class of agents.

Domain-driven design (DDD) architecture

Domain-driven design is an object-oriented approach to designing software based on the business domain, its elements and behaviors, and the relationships between them. It aims to enable software systems that are a correct realization of the underlying business domain by defining a domain model expressed in the language of business domain experts. The domain model can be viewed as a framework from which solutions can then be readied and rationalized. DDD is good if we have a complex domain and we wish to improve communication and understanding within the development team. DDD can also be an ideal approach if we have large and complex enterprise data scenarios that are difficult to manage using existing techniques.

Client/server architecture

This pattern segregates the system into two main applications, where the client makes requests to the server. In many cases, the server is a database with application logic represented as stored procedures. This pattern helps to design distributed systems that involve a client system, a server system, and a connecting network. The main benefits of the client/server architecture pattern are:

Higher security: All data gets stored on the server, which generally offers greater control of security than client machines.
Centralized data access: Because data is stored only on the server, access and updates to the data are far easier to administer than in other architectural styles.
Ease of maintenance: The server system can be a single machine or a cluster of multiple machines. The server application and the database can be made to run on a single machine or replicated across multiple machines to ensure easy scalability and high availability.
However, the traditional two-tier client/server architecture pattern has numerous disadvantages. Firstly, the tendency to keep both the application and the data on a server can negatively impact system extensibility and scalability. The server can be a single point of failure. Reliability is the main worry here. To address these issues, the client/server architecture has evolved into the more general three-tier (or N-tier) architecture. This multi-tier architecture not only surmounts the issues just mentioned but also brings forth a set of new benefits.

Multi-tier distributed computing architecture

The two-tier architecture is neither flexible nor extensible. Hence, the multi-tier distributed computing architecture has attracted a lot of attention. The application components can be deployed on multiple machines (these can be co-located or geographically distributed). Application components can be integrated through messages or remote procedure calls (RPCs), remote method invocations (RMIs), the common object request broker architecture (CORBA), enterprise Java beans (EJBs), and so on. The distributed deployment of application services ensures high availability, scalability, manageability, and so on. Web, cloud, mobile, and other customer-facing applications are deployed using this architecture. Thus, based on the business requirements and the application complexity, IT teams can choose the simple two-tier client/server architecture or the advanced N-tier distributed architecture to deploy their applications. These patterns simplify the deployment and delivery of software applications to their subscribers and users.

Layered/tiered architecture

This pattern is an improvement over the client/server architecture pattern. It is the most commonly used architectural pattern. Typically, an enterprise software application comprises three or more layers: the presentation/user interface layer, the business logic layer, and the data persistence layer. The presentation layer is primarily used for user interface applications (thick clients) or web browsers (thin clients). With the fast proliferation of mobile devices, mobile browsers are also attached to the presentation layer. Such tiered segregation comes in handy in managing and maintaining each layer accordingly. The power of plug and play gets realized with this approach. Additional layers can be fitted in as needed. There are model view controller (MVC) pattern-compliant frameworks that hugely simplify enterprise-grade and web-scale applications. MVC is a web application architecture pattern. The main advantage of the layered architecture is the separation of concerns; that is, each layer can focus solely on its own role and responsibility. The layered and tiered pattern makes the application:

Maintainable
Testable
Easy to assign specific and separate roles
Easy to update and enhance layers separately

This architecture pattern is good for developing web-scale, production-grade, and cloud-hosted applications quickly and in a risk-free fashion. When there are business and technology changes, this layered architecture comes in handy in embedding newer things in order to meet varying business requirements. A sketch of this layer separation appears below.
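Here is that sketch: a minimal Scala illustration of three layers, with each layer depending only on the one below it. All of the names are invented for the example, not taken from the book:

// Data persistence layer: storage details hidden behind a small interface.
trait UserRepository {
  def findName(id: Int): Option[String]
}

class InMemoryUserRepository extends UserRepository {
  private val users = Map(1 -> "Ada", 2 -> "Alan")
  def findName(id: Int): Option[String] = users.get(id)
}

// Business logic layer: depends only on the repository interface.
class UserService(repo: UserRepository) {
  def greeting(id: Int): String =
    repo.findName(id).map(name => s"Hello, $name!").getOrElse("Unknown user")
}

// Presentation layer: renders whatever the service returns.
object LayeredDemo {
  def main(args: Array[String]): Unit = {
    val service = new UserService(new InMemoryUserRepository)
    println(service.greeting(1)) // Hello, Ada!
  }
}

Because each layer depends only on the interface beneath it, the persistence layer can be replaced (with a real database, say) without touching the presentation code.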
Event-driven architecture (EDA)

The world is eventually becoming event-driven. That is, applications have to be sensitive and responsive proactively, pre-emptively, and precisely. Whenever an event happens, applications have to receive the event information and plunge into the necessary activities immediately. The request-and-reply notion paves the way for the fire-and-forget tenet. The communication becomes asynchronous. There is no need for the participating applications to be available online all the time.

EDA is typically based on an asynchronous message-driven communication model to propagate information throughout an enterprise. It supports a more natural alignment with an organization's operational model by describing business activities as series of events. EDA does not bind functionally disparate systems and teams into the same centralized management model. EDA ultimately leads to highly decoupled systems. The common issues introduced by system dependencies are eliminated through the adoption of the proven and potential EDA.

We have seen various forms of events used in different areas. There are business and technical events. Systems update their status and condition by emitting events to be captured and subjected to a variety of investigations in order to precisely understand the prevailing situations. The submission of web forms and clicks on hypertext generate events to be captured. Incremental database synchronization mechanisms, RFID readings, email messages, short message service (SMS), instant messaging, and so on are events not to be taken lightly. There are event processing engines and message-oriented middleware (MoM) solutions, such as message queues and brokers, to collect and stock event data and messages. Millions of events can be collected, parsed, and delivered through multiple topics via these MoM solutions. As event sources/producers publish notifications, event receivers can choose to listen to or filter out specific events and make proactive decisions in real time about what to do next.

The EDA style is built on the fundamental aspects of event notifications to facilitate immediate information dissemination and reactive business process execution. In an EDA environment, information can be propagated to all the services and applications in real time. The EDA pattern enables highly reactive enterprise applications. Real-time analytics is the new normal with the surging popularity of the EDA pattern.

Service-oriented architecture (SOA)

With the arrival of service paradigms, software packages and libraries are being developed as collections of services. Services are capable of running independently of the underlying technology. Also, services can be implemented using any programming and scripting languages. Services are self-defined, autonomous, interoperable, publicly discoverable, assessable, accessible, reusable, and composable. Services interact with one another through messaging. There are service providers/developers and consumers/clients. Every service has two parts: the interface and the implementation. The interface is the single point of contact for requesting services. Interfaces give the required separation between services. All kinds of deficiencies and differences in service implementation get hidden by the service interface.

Precisely speaking, SOA enables application functionality to be provided as a set of services, and the creation of personal as well as professional applications that make use of software services. In short, SOA is for service enablement and service-based integration of monolithic and massive applications. The complexity of enterprise process/application integration gets moderated through the smart leverage of the service paradigm. The interface/implementation split at the heart of SOA is sketched below.
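This is a minimal, hedged Scala sketch of that split; the QuoteService and its canned data are invented purely for illustration:

// The interface is the single point of contact for requesting the service.
trait QuoteService {
  def quote(symbol: String): BigDecimal
}

// The implementation hides all of its details behind the interface.
class FixedQuoteService extends QuoteService {
  private val quotes = Map("ABC" -> BigDecimal(10.5), "XYZ" -> BigDecimal(42.0))
  def quote(symbol: String): BigDecimal =
    quotes.getOrElse(symbol, BigDecimal(0))
}

object SoaDemo {
  def main(args: Array[String]): Unit = {
    // Clients program against the interface only.
    val service: QuoteService = new FixedQuoteService
    println(service.quote("ABC")) // 10.5
  }
}

Because consumers depend on QuoteService alone, the implementation can be replaced, redeployed, or moved behind a network boundary without the clients noticing.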
To summarize, we detailed the prominent and dominant software architecture patterns and how they are used for producing and running any kind of enterprise-class and production-grade software application. To know more about patterns associated with object-oriented, component-based, client/server, and cloud architectures, grab the book Architectural Patterns.

Why we need Design Patterns?
Implementing 5 Common Design Patterns in JavaScript (ES8)
An Introduction to Node.js Design Patterns

A Brief History of Python

Sam Wood
14 Oct 2015
4 min read
From data to web development, Python has come to stand as one of the most important and most popular open source programming languages in use today. But whilst some see it as almost a new kid on the block, Python is actually older than Java, R, and JavaScript. So what are the origins of our favorite open source language?

In the beginning...

Python's origins lie way back in distant December 1989, making it the same age as Taylor Swift. It was created by Guido van Rossum (the Python community's Benevolent Dictator for Life) as a hobby project to work on during the week around Christmas. Python is famously named not after the constrictor snake but rather the British comedy troupe Monty Python's Flying Circus. (We're quite thankful for this at Packt - we have no idea what we'd put on the cover if we had to pick for 'Monty' programming books!) Python was born out of the ABC language, a terminated project of the Dutch CWI research institute that van Rossum worked for, and the Amoeba distributed operating system. When Amoeba needed a scripting language, van Rossum created Python. One of the principal strengths of this new language was how easy it was to extend, along with its support for multiple platforms - a vital innovation in the days of the first personal computers. Capable of communicating with libraries and differing file formats, Python quickly took off.

Computer Programming for Everybody

Python grew throughout the early nineties, acquiring lambda, reduce(), filter(), and map() functional programming tools (supposedly courtesy of a Lisp hacker who missed them and thus submitted working patches), keyword arguments, and built-in support for complex numbers. During this period, Python also served a central role in van Rossum's Computer Programming for Everybody initiative. CP4E's goal was to make programming more accessible to the 'layman' and to encourage a basic level of coding literacy as essential knowledge alongside English literacy and math skills. Because of Python's focus on clean syntax and accessibility, it played a key part in this. Although CP4E is now inactive, learning Python remains easy, and Python is one of the most common languages that new would-be programmers are pointed at to learn.

Going Open with 2.0

As Python grew in the nineties, one of the key issues in uptake was its continued dependence on van Rossum. 'What if Guido was hit by a bus?' Python users lamented, 'or if he dropped dead of exhaustion or if he is rubbed out by a member of a rival language following?' In 2000, Python 2.0 was released by the BeOpen Python Labs team. The ethos of 2.0 was much more open and community-oriented in its development process, with much greater transparency. Python moved its repository to SourceForge, granting write access to its CVS tree to more people and providing an easy way to report bugs and submit patches. As the release notes stated, 'the most important change in Python 2.0 may not be to the code at all, but to how Python is developed'. Python 2.7 is still used today - and will be supported until 2020. But the word from development is clear - there will be no 2.8. Instead, support remains focused on 2.7's usurping younger brother - Python 3.

The Rise of Python 3

In 2008, Python 3 was released on an almost unthinkable premise - a complete overhaul of the language, with no backwards compatibility. The decision was controversial, and born in part of the desire to clean house on Python.
There was a great emphasis on removing duplicative constructs and modules, to ensure that in Python 3 there was one - and only one - obvious way of doing things. Despite the introduction of tools such as '2to3', which could quickly identify what would need to be changed in Python 2 code to make it work in Python 3, many users stuck with their classic codebases. Even today, there is no assumption that Python programmers will be working with Python 3. Despite flame wars raging across the Python community, Python 3's future ascendancy was something of an inevitability. Python 2 remains a supported language (for now). But as much as it may still be the default choice of Python, Python 3 is the language's future.

The Future

Python's userbase is vast and growing - it's not going away any time soon. Utilized by the likes of Nokia, Google, and even NASA for its easy syntax, it looks to have a bright future ahead of it, supported by a huge community of open source developers. Its support for multiple programming paradigms, including object-oriented Python programming, functional Python programming, and parallel programming models, makes it a highly adaptive choice - and its uptake keeps growing.

What is the history behind C Programming and Unix?

Packt Editorial Staff
17 Oct 2019
9 min read
If you think C programming and Unix are unrelated, then you are making a big mistake. Back in the 1970s and 1980s, if the Unix engineers at Bell Labs had decided to use another programming language instead of C to develop a new version of Unix, then we would be talking about that language today. The relationship between the two is simple: Unix is the first operating system implemented with the high-level C programming language, and C got its fame and power from Unix. Of course, our statement about C being a high-level programming language is not true in today's world.

This article is an excerpt from the book Extreme C by Kamran Amini. Kamran teaches you to use C's power. Apply object-oriented design principles to your procedural C code. You will gain new insight into algorithm design, functions, and structures. You'll also understand how C works with Unix, how to implement OO principles in C, and what multiprocessing is. In this article, we are going to look at the history of C programming and Unix.

Multics OS and Unix

Even before having Unix, we had the Multics OS, a joint project launched in 1964, led by MIT, General Electric, and Bell Labs. Multics OS was a huge success because it introduced the world to a real working and secure operating system. Multics was installed everywhere, from universities to government sites. Fast-forward to 2019, and every operating system today indirectly borrows some ideas from Multics through Unix.

In 1969, because of various reasons that we will talk about shortly, some people at Bell Labs, especially the pioneers of Unix, such as Ken Thompson and Dennis Ritchie, gave up on Multics and, subsequently, Bell Labs quit the Multics project. But this was not the end for Bell Labs; they designed their own simpler and more efficient operating system, which was called Unix.

It is worthwhile to compare the Multics and Unix operating systems. In the following list, you will see similarities and differences found while comparing Multics and Unix:

Both follow the onion architecture as their internal structure. We mean that they both have the same rings in their onion architecture, especially the kernel and shell rings. Therefore, programmers could write their own programs on top of the shell ring. Also, Unix and Multics expose a list of utility programs, and there are lots of utility programs, such as ls and pwd. In the following sections, we will explain the various rings found in the Unix architecture.
Multics needed expensive resources and machines to be able to work. It was not possible to install it on ordinary commodity machines, and that was one of the main drawbacks that let Unix thrive and finally made Multics obsolete after about 30 years.
Multics was complex by design. This was the reason behind the frustration of Bell Labs employees and, as we said earlier, the reason why they left the project. But Unix tried to remain simple. In the first version, it was not even multitasking or multi-user!

You can read more about Unix and Multics online, and follow the events that happened in that era. Both were successful projects, but Unix has been able to thrive and survive to this day. It is worth sharing that Bell Labs has been working on a new distributed operating system called Plan 9, which is based on the Unix project.
Figure 1-1: Plan 9 from Bell Labs

Suffice to say that Unix was a simplification of the ideas and innovations that Multics presented; it was not something new, and so I can stop talking about Unix and Multics history at this point. So far, there are no traces of C in this history because it had not been invented yet. The first versions of Unix were written purely in assembly language. Only in 1973 was Unix version 4 written using C. Now we are getting close to discussing C itself but, before that, we must talk about BCPL and B, because they were the gateway to C.

About BCPL and B

BCPL was created by Martin Richards as a programming language invented for the purpose of writing compilers. The people from Bell Labs were introduced to the language when they were working as part of the Multics project. After quitting the Multics project, Bell Labs first started to write Unix using assembly language. That's because, back then, it was an anti-pattern to develop an operating system using any programming language other than assembly. For instance, it was strange that the people on the Multics project were using PL/1 to develop Multics but, by doing that, they showed that operating systems could be successfully written using a higher-level programming language other than assembly. As a result, Multics became the main inspiration for using another language to develop Unix.

The attempt to write operating system modules using a programming language other than assembly remained with Ken Thompson and Dennis Ritchie at Bell Labs. They tried to use BCPL, but it turned out that they needed to apply some modifications to the language to be able to use it on minicomputers such as the DEC PDP-7. These changes led to the B programming language. While we won't go too deep into the properties of the B language here, you can read more about it and the way it was developed at the following links:

The B Programming Language
The Development of the C Language

Dennis Ritchie authored the latter article himself, and it is a good way to explain the development of the C programming language while still sharing valuable information about B and its characteristics. B also had its shortcomings as a system programming language. B was typeless, which meant that it was only possible to work with a word (not a byte) in each operation. This made it hard to use the language on machines with different word lengths. Therefore, over time, further modifications were made to the language, until they led to the NB (New B) language, which in turn derived its structures from the B language. These structures were typeless in B, but they became typed in C. Finally, in 1973, the fourth version of Unix could be developed using C, which still contained a lot of assembly code. In the next section, we talk about the differences between B and C, and why C is a top-notch modern system programming language for writing an operating system.

The way to C programming and Unix

I do not think we can find anyone better than Dennis Ritchie himself to explain why C was invented after the difficulties met with B. In this section, we're going to list the causes that prompted Dennis Ritchie, Ken Thompson, and others to create a new programming language instead of using B for writing Unix.

Limitations of the B programming language:

B could only work with words in memory: Every single operation had to be performed in terms of words. Back then, having a programming language that was able to work with bytes was a dream. This was because of the available hardware at the time, which addressed memory in a word-based scheme.
B was typeless: More accurately, B was a single-type language. All variables were of the same type: a word. So, if you had a string of 20 characters (21 bytes including the null character at the end), you had to divide it up into words and store it in more than one variable. For example, if a word was 4 bytes, you would need 6 variables to store the 21 characters of the string.
Being typeless meant that multi-byte algorithms, such as string manipulation algorithms, could not be written efficiently in B: This was because B used memory words, not bytes, and words could not be used efficiently to manage multi-byte data types such as integers and characters.
B didn't support floating-point operations: At the time, these operations were becoming increasingly available on new hardware, but there was no support for them in the B language.
With the availability of machines such as the PDP-11, which could address memory on a byte basis, B showed that it could be inefficient in addressing bytes of memory: This became even clearer with B pointers, which could only address words in memory, not bytes. In other words, for a program wanting to access a specific byte or a byte range in memory, more computations had to be done to calculate the corresponding word index.

The difficulties with B, particularly its slow development and execution on the machines that were available at the time, forced Dennis Ritchie to develop a new language. This new language was called NB, or New B, at first, but it eventually turned out to be C. This newly developed language, C, tried to cover the difficulties and flaws of B and became the de facto programming language for system development, instead of assembly language. In less than 10 years, newer versions of Unix were completely written in C, and all newer operating systems that were based on Unix got tied to C and its crucial presence in the system.

As you can see, C was not born as an ordinary programming language; instead, it was designed with a complete set of requirements in mind. You may consider languages such as Java, Python, and Ruby to be higher-level languages, but they cannot be considered direct competitors, as they are different and serve different purposes. For instance, you cannot write a device driver or a kernel module with Java or Python, and they themselves have been built on top of a layer written in C. Unlike some programming languages, C is standardized by ISO, and if a certain feature is required in the future, the standard can be modified to support it.

To summarize

In this article, we began with the relationship between Unix and C. Even in non-Unix operating systems, you see traces of a design similar to Unix systems. We also looked at the history of C and explained how Unix emerged from Multics OS and how C was derived from the B programming language. The book Extreme C, written by Kamran Amini, will help you make the most of C's low-level control, flexibility, and high performance.

Is Dark an AWS Lambda challenger?
Microsoft mulls replacing C and C++ code with Rust calling it a "modern safer system programming language" with great memory safety features
Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says "yes and no"

Top 7 Python programming books you need to read

Aaron Lazar
22 Jun 2018
9 min read
Python needs no introduction. It's one of the top rated and fastest growing programming languages, mainly because of its simplicity and wide applicability to a range of problems. Developers, beginners and experts alike, are looking to skill up with Python. So I thought I would put together a list of the Python programming books I think are the best for learning Python - whether you're a beginner or an experienced Python developer.

Books for beginners learning Python

Learning Python, by Fabrizio Romano

What the book is about
This book explores the essentials of programming, covering data structures while showing you how to manipulate them. It talks about control flow in a program and teaches you how to write clean and reusable code. It reveals different programming paradigms and shows you how to optimize performance as well as debug your code effectively. Close to 450 pages long, the content spans twelve well-thought-out chapters. You'll find interesting content on functions, memory management and GUI app development with PyQt.

Why Learn from Fabrizio
Fabrizio has been creating software for over a decade. He has a master's degree in computer science engineering from the University of Padova and is also a certified Scrum master. He has delivered talks at the last two editions of EuroPython and at Skillsmatter in London.

The Approach Taken
The book is very easy to follow and takes an example-driven approach. By the end of the book, you will be able to build a website in Python. Whether you're new to Python or to programming as a whole, you'll have no trouble following the examples. Download Learning Python FOR FREE.

Learning Python, by Mark Lutz

What the book is about
This is one of the top books on Python. A true bestseller, the book is a perfect fit both for beginners to programming and for developers who already have experience with another language. Over 1,500 pages long and covering 41 chapters, the book is a true shelf-breaker! Although this might be a concern to some, the content is clear and easy to read, providing great examples wherever necessary. You'll find in-depth content ranging from Python syntax to functions, modules, OOP and more.

Why Learn from Mark
Mark is the author of several Python books and has been using Python since 1992. He is a world-renowned Python trainer and has taught close to 260 virtual and on-site Python classes to roughly 4,000 students.

The Approach Taken
The book is a great read, complete with helpful illustrations, quizzes and exercises. It's filled with examples and also covers some advanced language features that have recently become more common in modern Python. You can find the book here, on Amazon.

Intermediate Python books

Modern Python Cookbook, by Steven Lott

What the book is about
Modern Python Cookbook is a great book for those already well versed in Python programming. The book aims to help developers solve the most common problems they're faced with during app development. Spanning 824 pages, the book is divided into 13 chapters that cover solutions to problems related to data structures, OOP, functional programming, and statistical programming.

Why Learn from Steven
Steven has over 4 decades of programming experience, over a decade of which has been with Python. He has written several books on Python and has created some tutorial videos as well.
Steven's writing style is one to envy, as he manages to grab the attention of readers while also imparting vast knowledge through his books. He's also a very enthusiastic speaker, especially when it comes to sharing his knowledge.

The Approach Taken
The book takes a recipe-based approach, presenting some of the most common, as well as uncommon, problems Python developers face, and following them up with a quick and helpful solution. The book describes not just the how and the what, but the why of things. It will leave you able to create applications with flexible logging, powerful configuration, command-line options, automated unit tests, and good documentation. Find Modern Python Cookbook on the Packt store.

Python Crash Course, by Eric Matthes

What the book is about
This one is a fast-paced introduction to Python that assumes you have knowledge of some other programming language. It actually sits somewhere between beginner and intermediate, but I've placed it under intermediate because of its fast-paced, no-fluff-just-stuff approach. It will be difficult to follow if you're completely new to programming. The book is 560 pages long, covering 20 chapters. It covers topics ranging from Python libraries like NumPy and matplotlib to building 2D games and even working with data and visualisations. All in all, it's a complete package!

Why Learn from Eric
Eric is a high school math and science teacher. He has over a decade's worth of programming experience and is a teaching enthusiast, always willing to share his knowledge. He also teaches an 'Introduction to Programming' class every fall.

The Approach Taken
The book has a great selection of projects that cater to a wide range of readers planning to use Python to solve their programming problems. It thoughtfully covers both Python 2 and 3. You can find the book here on Amazon.

Fluent Python, by Luciano Ramalho

What the book is about
The book is an intermediate guide that assumes you have already dipped your feet into the snake pit. It takes you through Python's core language features and libraries, showing you how to make your code shorter, faster, and more readable at the same time. The book flows over almost 800 pages, with 21 chapters. You'll indulge yourself in topics like functions, objects, and metaprogramming.

Why Learn from Luciano
Luciano Ramalho is a member of the Python Software Foundation and co-founder of Garoa Hacker Clube, the first hackerspace in Brazil. He has been working with Python since 1998. He has taught Python web development in the Brazilian media, banking and government sectors and also speaks at PyCon US, OSCON, PythonBrazil and FISL.

The Approach Taken
The book is mainly based on the language features that are either unique to Python or not found in many other popular languages. It covers the core language and some of its libraries. It has a very comprehensive approach and touches on nearly every point of the language that is pythonic, describing not just the how and the what, but the why. You can find the book here, on Amazon.

Advanced Python books

The Hitchhiker's Guide to Python, by Kenneth Reitz & Tanya Schlusser

What the book is about
This isn't a book that teaches Python. Rather, it's a book that shows experienced developers where, when and how to use Python to solve problems. The book contains a list of best practices and how to apply them in real-world Python projects, focusing on great advice about writing good Python code.
It is spread over 11 chapters and 338 pages. You'll find interesting topics like choosing an IDE, how to manage code, and so on.

Why Learn from Kenneth and Tanya
Kenneth Reitz is a member of the Python Software Foundation. Until recently, he was the product owner of Python at Heroku. He is a known speaker at several conferences. Tanya is an independent consultant with over two decades of experience in half a dozen languages. She is an active member of the Chicago Python User's Group and Chicago's PyLadies, and has also delivered data science training to students and industry analysts.

The Approach Taken
The book is highly opinionated and talks about what the best tools and techniques are to build Python apps. It is a book about best practices, covering how to write and ship high-quality code, and is very insightful. The book also covers Python libraries/frameworks focused on capabilities such as data persistence, data manipulation, web, CLI, and performance. You can get the book here on Amazon.

Secret Recipes of the Python Ninja, by Cody Jackson

What the book is about
Now this is a one-of-a-kind book. Again, this one is not going to teach you Python programming; rather, it will show you tips and tricks that you might not have known you could do with Python. In close to 400 pages, the book unearths secrets related to the implementation of the standard library by looking at how modules actually work. You'll find interesting topics on the likes of the CPython interpreter, which is a treasure trove of secret hacks that not many programmers are aware of, and the PyPy project, and you'll explore the PEPs of the latest versions to discover some interesting hacks.

Why Learn from Cody
Cody Jackson is a military veteran and the founder of Socius Consulting, an IT and business management consulting company. He has been involved in the tech industry since 1994. He is a self-taught Python programmer and also the author of the book series Learning to Program Using Python. He's always bubbling with ideas about improving the way he codes and has brilliantly delivered content through this book.

The Approach Taken
Now this one is highly opinionated too - the idea is to learn the skills from a Python ninja. The book takes a recipe-based approach, putting a problem before you and then showing you how you can wield Python to solve it. Whether you're new to Python or an expert, you're sure to find something interesting in the book. The recipes are easy to follow and waste no time on lengthy explanations. You can find the book here on Amazon and here on the Packt website.

So there you have it. Those were my top 7 books on Python programming. There are loads of books available on Amazon, and quite a few from Packt that you can check out, but the above are a must-have for anyone who's developing in Python.

Read Next
What are data professionals planning to learn this year? Python, deep learning, yes. But also…
Python web development: Django vs Flask in 2018
Why functional programming in Python matters: Interview with best selling author, Steven Lott
What the Python Software Foundation & Jetbrains 2017 Python Developer Survey had to reveal
How do you become a developer advocate?

Packt Editorial Staff
11 Oct 2019
8 min read
Developer advocates are people with a strong technical background whose job is to help developers be successful with a platform or technology. They act as a bridge between the engineering team and the developer community. A developer advocate not only fills the gap between developers and the platform but also looks after the growth of developers in terms of traction and progress on their projects.

Developer advocacy is broadly referred to as "developer relations". Those who practice developer advocacy have fallen into this profession in one way or another. As the processes and theories in the world of programming have evolved over the years, so has the idea of developer advocacy. This is the result of developer advocates working in the wild on their own initiative.

This article is an excerpt from the book Developer, Advocate! by Geertjan Wielenga. The book serves as a rallying cry to inspire and motivate tech enthusiasts and burgeoning developer advocates to take their first steps within the tech community.

The question then arises: how does one become a developer advocate? Here are some experiences shared by well-known developer advocates on how they started the journey that landed them in this role.

Is developer advocacy taught in universities?

Bruno Borges, Principal Product Manager at Microsoft, says that for most developer advocates or developer relations personnel, it was something that just happened. Developer advocacy is not a discipline that is taught in universities; there's no training specifically for this. Most often, somebody will come to realize that what they already do is developer relations. This is a discipline that is a conjunction of several other roles: software engineering, product management, and marketing.

I started as a software engineer and then I became a product manager. As a product manager, I was engaged with marketing divisions and sales divisions directly on a weekly basis. Maybe in some companies, sales, marketing, and product management are pillars that are not needed. I think it might vary. But in my opinion, those pillars are essential for doing a proper developer relations job. Trying to aim for those pillars is a great foundation. Just as in computer science when we go to college for four years, sometimes we don't use some of that background, but it gives us a good foundation. From outsourcing companies that just built business software for other companies, I then went to vendor companies. That's where I landed as a person helping users to take full advantage of the software that they needed to build their own solutions. That process is, ideally, what I see happening to others.

The journey from regular tech enthusiast to developer advocate

Ivar Grimstad, a developer advocate at the Eclipse Foundation, speaks about his journey from being a regular tech enthusiast attending conferences to speaking at conferences as an advocate for his company.

Ivar Grimstad says: I have attended many different conferences in my professional life and I always really enjoyed going to them. After some years of regularly attending conferences, I came to the point of thinking, "That guy isn't saying anything that I couldn't say. Why am I not up there?" I just wanted to try speaking, so I started submitting abstracts. I already gave talks at meetups locally, but I began feeling comfortable enough to approach conferences. I continued submitting abstracts until I got accepted.
As it turned out, while I was becoming interested in speaking, my company was struggling to raise its profile. Nobody, even in Sweden, knew what we did. So, my company was super happy for any publicity it could get. I could provide that by just going out and talking about tech. It didn't have to be related to anything we did; I just had to be there with the company name on the slides. That was good enough in the eyes of my company. After a while, about 50% of my time became dedicated to activities such as speaking at conferences and contributing to open source projects.

The tables turned: from engineer to developer advocate

Mark Heckler, a Spring developer and advocate at Pivotal, narrates how the tables turned for him on the way to becoming Pivotal Principal Technologist & Developer Advocate. He says: Initially, I was doing full-time engineering work and then presenting on the side, occasionally taking a few days here and there to travel and present at events and conferences. I think many people realized that I had this public-facing level of activity. I was out there enough that they felt I was either doing this full-time or maybe should be. A good friend of mine reached out and said, "I know you're doing this anyway, so how would you like to make this your official role?" That sounded pretty great, so I interviewed, and I was offered a full-time gig doing, essentially, what I was already doing in my spare time.

A hobby that turned into a profession

Matt Raible, a developer advocate at Okta, worked as an independent consultant for 20 years, doing advocacy as a side hobby. He talks about his experience as a consultant and walks us through his progress and development.

I started a blog in 2002 and wrote about Java a lot. This was before Stack Overflow, so I used Struts and Java EE. I posted my questions, which you would now post on Stack Overflow, on that blog with stack traces, and people would find them and help. It was a collaborative community. I've always done speaking at conferences on the side. I started working for Stormpath two years ago, as a contractor part-time, while working at Computer Associates at the same time. I was doing Java in the morning at Stormpath and JavaScript in the afternoon at Computer Associates. I really liked the people I was working with at Stormpath and they tried to hire me full-time. I told them to make me an offer that I couldn't refuse, and they said, "We don't know what that is!" I wanted to be able to blog and speak at conferences, so I spent a month coming up with my dream job. Stormpath wanted me to be its Java lead. The problem was that I like Java, but it's not my favorite thing. I tend to do more UI work. The opportunity went away for a month and then I said, "There's a way to make this work! Can I do Java and JavaScript?" Stormpath agreed that instead of being more of a technical leader and owning the Java SDK, I could be one of its advocates. There were a few other people on board in the advocacy team. Six months later, Stormpath got bought out by Okta. As an independent consultant, I was used to switching jobs every six months, but I didn't expect that to happen once I went full-time. That's how I ended up at Okta!
Developer advocacy means weighing the highs and lows of the tech world

Scott Davis, a Principal Engineer at Thoughtworks, was a classroom instructor, teaching software classes to business professionals, before becoming a developer advocate. According to him, tech really is a world of strengths and weaknesses. Advocacy, I think, is where you honestly say, "If we balance out the pluses and the minuses, I'm going to send you down the path where there are more strengths than weaknesses. But I also want to make sure that you are aware of the sharp, pointy edges that might nick you along the way." I spent eight years in the classroom as a software instructor and that has really informed my entire career. It's one thing to sit down and kind of understand how something works when you're cowboy coding on your own. It's another thing altogether when you're standing up in front of an audience of tens, or hundreds, or thousands of people.

Discover how developer advocates are putting developer interests at the heart of the software industry in companies including Microsoft and Google with Developer, Advocate! by Geertjan Wielenga. This book is a collection of in-depth conversations with leading developer advocates that reveal the world of developer relations today.

6 reasons why employers should pay for their developers' training and learning resources
"Developers need to say no" - Elliot Alderson on the FaceApp controversy in a BONUS podcast episode [Podcast]
GitHub has blocked an Iranian software developer's account
How do AWS developers manage Web apps?
Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider
The oldest programming languages in use today

Antonio Cucciniello
11 Jul 2017
5 min read
Today, we are going to be discussing some of the oldest, most established programming languages that are still in use. Some developers may be surprised to learn that many of these languages surpass them in age, in a world where technology, especially in development, is advancing at such a rapid rate. But then, old is gold, after all. So, in age order, let's present the oldest programming languages in use today:

C
The C language was created in 1972 (it's not that old, okay). C is a lower-level language that was based on an earlier language called B (do you see a trend here?). It is a general-purpose language, and a parent language from which many later programming languages derive, such as C#, Java, JavaScript, Perl, PHP and Python. It is used in many applications that must interface with hardware or work directly with memory.

C++
Pronounced see-plus-plus, C++ was developed 11 years later, in 1983. It is very similar to C; in fact, it is often considered an extension of C. It added various concepts such as classes, virtual functions, and templates. It is more of an intermediate-level language that can be used at a lower or higher level, depending on the application. It is also known for its use in low-latency applications.

Objective-C
Around the same time as C++ was being released to the public, Objective-C was created. If you took an educated guess from the name and said that it would be another extension of C, then you'd be right. This version was meant to be an object-oriented version of C (there's a lot in a name, clearly). It is used, probably most famously, by Apple. If you are a Mac or iOS user, then your iPhone or Mac applications were most likely developed with Objective-C (until Apple recently moved over to Swift).

Python
We are going to take a quick jump ahead in time to the '90s for this one. In 1991, the Python programming language was released, though it had been in development since the late '80s. It is a dynamically typed, object-oriented language that is often used for scripting and web applications. It is usually used with some of its frameworks, like Django or Flask, on the backend. It is one of the most popular programming languages in use today.

Ruby
In 1993, Ruby was released. Today, you have probably heard of Ruby on Rails, which is primarily used to create the backend of web applications using Ruby. Unlike the many languages derived from C, this language was influenced by older languages such as Perl and Lisp. Ruby was designed for productive and fun programming, by making the language closer to human needs rather than machine needs.

Java
Two years later, in 1995, Java was developed. This is a high-level language derived from C. It is famously known for its use in web applications and as the language for developing Android applications and the Android OS. It used to be the most popular language a few years ago, but its popularity and usage have definitely decreased.

PHP
In the same year as Java was developed, PHP was born. It is an open source programming language developed for creating dynamic websites. It is also used for server-side web development. Its usage is definitely declining, but it is still in use today.

JavaScript
That same year (yup, '95 was a good year for programming, not so much for fans of Full House), JavaScript was brought to the world. Its purpose was to be a high-level language that helped with the functionality of a web page.
Today, it is sometimes used as a scripting language, as well as on the backend of applications since the release of Node.js. It is one of the most popular and widely used programming languages today.

Conclusion
That was our brief history lesson on some of the oldest programming languages still in use. Even though some of them are 20, 30, even over 40 years old, they are being used by thousands of developers daily. They have a variety of uses, from lower level to higher level, from web applications to mobile applications. Do you feel there is a need for newer languages, or are you happy with what we have? If you have any favorites, let us know which one and why!

About the author
Antonio Cucciniello is a Software Engineer from New Jersey with a background in C, C++ and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello
Systems programming with Go in UNIX and Linux

Mihalis Tsoukalos
24 Jan 2018
17 min read
This is a guest post by Mihalis Tsoukalos. Mihalis is a Unix administrator, programmer, and mathematician who enjoys writing. He is the author of Go Systems Programming, from which this Go programming tutorial is taken.

What is Go?

Back when UNIX was first introduced, the only way to write systems software was by using C; nowadays you can program systems software using programming languages including Go. Apart from Go, other preferred languages for developing system utilities are Python, Perl, Rust and Ruby.

Go is a modern general-purpose open-source programming language that was officially announced at the end of 2009. It began as an internal Google project and has been inspired by many other programming languages including C, Pascal, Alef and Oberon. Its spiritual fathers are Robert Griesemer, Ken Thompson and Rob Pike, who designed Go as a language for professional programmers who want to build reliable and robust software. Apart from its syntax and standard functions, Go comes with a pretty rich and convenient standard library.

What is systems programming?

Systems programming is a special area of programming on UNIX machines (please note that systems programming is not limited to UNIX machines). Most commands that have to do with system administration tasks, such as disk formatting, network interface configuration, module loading, and kernel performance tracking, are implemented using the techniques of systems programming. Additionally, the /etc directory, which can be found on all UNIX systems, contains plain text files that deal with the configuration of a UNIX machine and its services, and these are also manipulated using systems software.

You can group the various areas of systems software and related system calls into the following sets:

File I/O: This area deals with file reading and writing operations, which is the most important task of an operating system. File input and output must be fast and efficient and, above all, reliable.

Advanced File I/O: Apart from the basic input and output system calls, there are also more advanced ways to read or write to a file, including asynchronous I/O and non-blocking I/O.

System files and configuration: This group of systems software includes functions that allow you to handle system files such as /etc/passwd and get system-specific information such as the system time and DNS configuration.

Files and directories: This cluster includes functions and system calls that allow the programmer to create and delete directories and get information such as the owner and the permissions of a file or a directory.

Process control: This group of software allows you to create and interact with UNIX processes.

Threads: When a process has multiple threads, it can perform multiple tasks. However, threads must be created, terminated and synchronized, which is the purpose of this collection of functions and system calls.

Server processes: This set includes techniques that allow you to develop server processes, which are processes that get executed in the background without the need for an active terminal. Go is not that good at writing server processes in the traditional UNIX way - but let me explain this a little more. UNIX servers like Apache use fork(2) to create one or more child processes; this process is called forking and refers to cloning the parent process into a child process that continues executing the same executable from the same point and, most importantly, shares memory.
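Before continuing down the list, it is worth making this contrast concrete. As the next paragraph notes, Go covers most uses of fork(2) with goroutines instead; here is a minimal, hypothetical sketch of that pattern (not from the book), using only the standard library:

package main

import (
    "fmt"
    "sync"
)

// handle stands in for the work a forked child process would do.
func handle(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("handling request", id)
}

func main() {
    var wg sync.WaitGroup
    // Instead of forking a child per request, start a goroutine per request.
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go handle(i, &wg)
    }
    wg.Wait() // wait for all goroutines to finish
}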
Although Go does not offer an equivalent to the fork(2) function, this is not an issue, because you can use goroutines to cover most of the uses of fork(2).

Interprocess communication: This set of functions allows processes that run on the same UNIX machine to communicate with each other using features such as pipes, FIFOs, message queues, semaphores and shared memory.

Signal processing: Signals offer processes a way of handling asynchronous events, which can be very handy. Almost all server processes have extra code that allows them to handle UNIX signals using the system calls of this group.

Network programming: This is the art of developing applications that work over computer networks with the help of TCP/IP, and it is not systems programming per se. However, most TCP/IP servers and clients deal with system resources, users, files and directories, so most of the time you cannot create network applications without doing some kind of systems programming.

The challenging thing with systems programming is that you cannot afford to have an incomplete program; you can either have a fully working, secure program that can be used on a production system, or nothing at all. This mainly happens because you cannot trust end users and hackers! The key difficulty in systems programming is the fact that an erroneous system call can make your UNIX machine misbehave or, even worse, crash it! Most security issues on UNIX systems usually come from wrongly implemented systems software, because bugs in systems software can compromise the security of an entire system. The worst part is that this can happen many years after a certain piece of software was first used!

Systems programming examples with Go

Printing the permissions of a file or a directory

With the help of the ls(1) command, you can find out the permissions of a file:

$ ls -l /bin/ls
-rwxr-xr-x 1 root wheel 38624 Mar 23 01:57 /bin/ls

The presented Go program, which is named permissions.go, will teach you how to print the permissions of a file or a directory using Go, and will be presented in two parts. The first part is the next:

package main

import (
    "fmt"
    "os"
)

func main() {
    arguments := os.Args
    if len(arguments) == 1 {
        fmt.Println("Please provide an argument!")
        os.Exit(1)
    }
    file := arguments[1]

The second part contains the important Go code:

    info, err := os.Stat(file)
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(1)
    }
    mode := info.Mode()
    fmt.Print(file, ": ", mode, "\n")
}

Once again, most of the Go code is for dealing with the command line argument and making sure that you have one! The Go code that does the actual job is mainly the call to the os.Stat() function, which returns a FileInfo structure that describes the file or directory examined by os.Stat(). From the FileInfo structure you can discover the permissions of a file by calling the Mode() function.

Executing permissions.go creates the following kind of output:

$ go run permissions.go /bin/ls
/bin/ls: -rwxr-xr-x
$ go run permissions.go /usr
/usr: drwxr-xr-x
$ go run permissions.go /us
Error: stat /us: no such file or directory
exit status 1

How to write to files using fmt.Fprintf()

The use of the fmt.Fprintf() function allows you to write formatted text to files in a way that is similar to the way the fmt.Printf() function works. The Go code that illustrates the use of fmt.Fprintf() will be named fmtF.go and is going to be presented in three parts.
The first part is the expected preamble of the program:

package main

import (
    "fmt"
    "os"
)

The second part has the next Go code:

func main() {
    if len(os.Args) != 2 {
        fmt.Println("Please provide a filename")
        os.Exit(1)
    }
    filename := os.Args[1]
    destination, err := os.Create(filename)
    if err != nil {
        fmt.Println("os.Create:", err)
        os.Exit(1)
    }
    defer destination.Close()

First, you make sure that you have one command line argument before continuing. Then, you read that command line argument and give it to os.Create() in order to create the file! Please note that the os.Create() function will truncate the file if it already exists.

The last part is the following:

    fmt.Fprintf(destination, "[%s]: ", filename)
    fmt.Fprintf(destination, "Using fmt.Fprintf in %s\n", filename)
}

Here, you write the desired text data to the file identified by the destination variable, using fmt.Fprintf() as if you were using the fmt.Printf() method.

Executing fmtF.go will generate the following output:

$ go run fmtF.go test
$ cat test
[test]: Using fmt.Fprintf in test

In other words, you can create plain text files using fmt.Fprintf().

Developing wc(1) in Go

The principal idea behind the code of the wc.go program is that you read a text file line by line until there is nothing left to read. For each line you read, you find out the number of characters and the number of words it has. As you need to read your input line by line, the use of bufio is preferred over plain io because it simplifies the code. However, trying to implement wc.go on your own using io would be a very educational exercise.

But first you will see the kind of output the wc(1) utility generates:

$ wc wc.go cp.go
      68     160    1231 wc.go
      45     112     755 cp.go
     113     272    1986 total

So, if wc(1) has to process more than one file, it automatically generates summary information.

Counting words

The trickiest part of the implementation is word counting, which is implemented using Go regular expressions:

r := regexp.MustCompile("[^\\s]+")
for range r.FindAllString(line, -1) {
    numberOfWords++
}

What the provided regular expression does is separate the words of a line based on whitespace characters, in order to count them afterwards!

The code!

After this little introduction, it is time to see the Go code of wc.go, which will be presented in five parts. The first part is the expected preamble:

package main

import (
    "bufio"
    "flag"
    "fmt"
    "io"
    "os"
    "regexp"
)

The second part is the implementation of the count() function, which includes the core functionality of the program:

func count(filename string) (int, int, int) {
    var err error
    var numberOfLines int
    var numberOfCharacters int
    var numberOfWords int
    numberOfLines = 0
    numberOfCharacters = 0
    numberOfWords = 0

    f, err := os.Open(filename)
    if err != nil {
        fmt.Printf("error opening file %s", err)
        os.Exit(1)
    }
    defer f.Close()

    r := bufio.NewReader(f)
    for {
        line, err := r.ReadString('\n')
        if err == io.EOF {
            break
        } else if err != nil {
            fmt.Printf("error reading file %s", err)
        }
        numberOfLines++
        r := regexp.MustCompile("[^\\s]+")
        for range r.FindAllString(line, -1) {
            numberOfWords++
        }
        numberOfCharacters += len(line)
    }
    return numberOfLines, numberOfWords, numberOfCharacters
}

There are a lot of interesting things here. First of all, you can see the Go code presented in the previous section for counting the words of each line. Counting lines is easy, because each time the bufio reader reads a new line, the value of the numberOfLines variable is increased by one.
The ReadString() function tells the program to read until the first occurrence of '\n' in the input - multiple calls to ReadString() mean that you are reading a file line by line. Next, you can see that the count() function returns three integer values. Last, counting characters is implemented with the help of the len() function, which returns the number of characters in a given string - in this case, the line that was read. The for loop terminates when you get the io.EOF error message, which signifies that there is nothing left to read from the input file.

The third part of wc.go starts with the beginning of the implementation of the main() function, which also includes the configuration of the flag package:

func main() {
    minusC := flag.Bool("c", false, "Characters")
    minusW := flag.Bool("w", false, "Words")
    minusL := flag.Bool("l", false, "Lines")

    flag.Parse()
    flags := flag.Args()

    if len(flags) == 0 {
        fmt.Printf("usage: wc <file1> [<file2> [... <fileN]]\n")
        os.Exit(1)
    }

    totalLines := 0
    totalWords := 0
    totalCharacters := 0
    printAll := false

    for _, filename := range flag.Args() {

The last for statement is for processing all input files given to the program. The wc.go program supports three flags: the -c flag is for printing the character count, the -w flag is for printing the word count and the -l flag is for printing the line count.

The fourth part is the next:

        numberOfLines, numberOfWords, numberOfCharacters := count(filename)
        totalLines = totalLines + numberOfLines
        totalWords = totalWords + numberOfWords
        totalCharacters = totalCharacters + numberOfCharacters

        if (*minusC && *minusW && *minusL) || (!*minusC && !*minusW && !*minusL) {
            fmt.Printf("%d", numberOfLines)
            fmt.Printf("\t%d", numberOfWords)
            fmt.Printf("\t%d", numberOfCharacters)
            fmt.Printf("\t%s\n", filename)
            printAll = true
            continue
        }

        if *minusL {
            fmt.Printf("%d", numberOfLines)
        }
        if *minusW {
            fmt.Printf("\t%d", numberOfWords)
        }
        if *minusC {
            fmt.Printf("\t%d", numberOfCharacters)
        }
        fmt.Printf("\t%s\n", filename)
    }

This part deals with printing the information on a per-file basis, depending on the command line flags. As you can see, most of the Go code here is for handling the output according to the command line flags.

The last part is the following:

    if (len(flags) != 1) && printAll {
        fmt.Printf("%d", totalLines)
        fmt.Printf("\t%d", totalWords)
        fmt.Printf("\t%d", totalCharacters)
        fmt.Println("\ttotal")
        return
    }

    if (len(flags) != 1) && *minusL {
        fmt.Printf("%d", totalLines)
    }
    if (len(flags) != 1) && *minusW {
        fmt.Printf("\t%d", totalWords)
    }
    if (len(flags) != 1) && *minusC {
        fmt.Printf("\t%d", totalCharacters)
    }
    if len(flags) != 1 {
        fmt.Printf("\ttotal\n")
    }
}

This is where you print the total number of lines, words and characters read, according to the flags of the program. Once again, most of the Go code here is for modifying the output according to the command line flags.
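As an aside, the word counting at the heart of count() does not strictly need regular expressions: the standard library's strings.Fields() also splits a string around runs of whitespace. Here is a minimal, self-contained sketch of that alternative - the sample input is made up, and this variant is not part of the original article:

package main

import (
    "fmt"
    "strings"
)

func main() {
    line := "one two   three\n" // hypothetical sample line
    // strings.Fields splits around any run of whitespace,
    // much like the [^\s]+ regular expression used in wc.go.
    numberOfWords := len(strings.Fields(line))
    fmt.Println(numberOfWords) // prints 3
}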
Executing wc.go will generate the following kind of output:

$ go build wc.go
$ ls -l wc
-rwxr-xr-x 1 mtsouk staff 2264384 Apr 29 21:10 wc
$ ./wc wc.go sparse.go notGoodCP.go
     120     280    2319 wc.go
      44      98     697 sparse.go
      27      61     418 notGoodCP.go
     191     439    3434 total
$ ./wc -l wc.go sparse.go
     120 wc.go
      44 sparse.go
     164 total
$ ./wc -w -l wc.go sparse.go
     120     280 wc.go
      44      98 sparse.go
     164     378 total

If you do not execute go build wc.go in order to create an executable file, then executing go run wc.go using Go source files as arguments will fail, because the compiler will try to compile the Go source files instead of treating them as command line arguments to the go run wc.go command:

$ go run wc.go sparse.go
# command-line-arguments
./sparse.go:11: main redeclared in this block
previous declaration at ./wc.go:49
$ go run wc.go wc.go
package main: case-insensitive file name collision: "wc.go" and "wc.go"
$ go run wc.go cp.go sparse.go
# command-line-arguments
./cp.go:35: main redeclared in this block
previous declaration at ./wc.go:49
./sparse.go:11: main redeclared in this block
previous declaration at ./cp.go:35

Additionally, trying to execute wc.go on a Linux system with Go version 1.3.3 will fail, because it uses features of Go that can be found in newer versions - if you use the latest Go version you will have no problem running wc.go. The error message you will get will be the following:

$ go version
go version go1.3.3 linux/amd64
$ go run wc.go
# command-line-arguments
./wc.go:40: syntax error: unexpected range, expecting {
./wc.go:46: non-declaration statement outside function body
./wc.go:47: syntax error: unexpected }

Reading a text file character by character

Although reading a text file character by character is not needed for the development of the wc(1) utility, it would be good to know how to implement it in Go. The name of the file will be charByChar.go and it will be presented in four parts.

The first part comes with the following Go code:

package main

import (
    "bufio"
    "fmt"
    "io/ioutil"
    "os"
    "strings"
)

Although charByChar.go does not have many lines of Go code, it needs lots of Go standard packages, which is a naïve indication that the task it implements is not trivial.

The second part is:

func main() {
    arguments := os.Args
    if len(arguments) == 1 {
        fmt.Println("Not enough arguments!")
        os.Exit(1)
    }
    input := arguments[1]

The third part is the following:

    buf, err := ioutil.ReadFile(input)
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }

The last part has the next Go code:

    in := string(buf)
    s := bufio.NewScanner(strings.NewReader(in))
    s.Split(bufio.ScanRunes)

    for s.Scan() {
        fmt.Print(s.Text())
    }
}

ScanRunes is a split function that returns each character (rune) as a token. Then the call to Scan() allows us to process each character one by one. There also exist ScanWords and ScanLines for getting words and lines scanned, respectively. If you use fmt.Println(s.Text()) as the last statement of the program instead of fmt.Print(s.Text()), then each character will be printed on its own line and the task of the program will be more obvious.
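Since ScanWords was just mentioned, here is a minimal sketch of counting the words of standard input with a Scanner instead of regular expressions - a hypothetical variant, not part of the original article:

package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    s := bufio.NewScanner(os.Stdin)
    // ScanWords returns each whitespace-separated word as a token.
    s.Split(bufio.ScanWords)
    words := 0
    for s.Scan() {
        words++
    }
    fmt.Println(words)
}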
Executing charByChar.go generates the following kind of output:

$ go run charByChar.go test
package main
…

The wc(1) command can verify the correctness of the Go code of charByChar.go by comparing the input file with the output generated by charByChar.go:

$ go run charByChar.go test | wc
      32      54     439
$ wc test
      32      54     439 test

How to create sparse files in Go

Big files that are created with the os.Seek() function may have holes in them and occupy fewer disk blocks than files with the same size but without holes in them; such files are called sparse files. This section will develop a program that creates sparse files. The Go code of sparse.go will be presented in three parts.

The first part is:

package main

import (
    "fmt"
    "log"
    "os"
    "path/filepath"
    "strconv"
)

The second part of sparse.go has the following Go code:

func main() {
    if len(os.Args) != 3 {
        fmt.Printf("usage: %s SIZE filename\n", filepath.Base(os.Args[0]))
        os.Exit(1)
    }

    SIZE, _ := strconv.ParseInt(os.Args[1], 10, 64)
    filename := os.Args[2]

    _, err := os.Stat(filename)
    if err == nil {
        fmt.Printf("File %s already exists.\n", filename)
        os.Exit(1)
    }

The strconv.ParseInt() function is used for converting the command line argument that defines the size of the sparse file from its string value to its integer value. Additionally, the os.Stat() call makes sure that you will not accidentally overwrite an existing file.

The last part is where the action takes place:

    fd, err := os.Create(filename)
    if err != nil {
        log.Fatal("Failed to create output")
    }

    _, err = fd.Seek(SIZE-1, 0)
    if err != nil {
        fmt.Println(err)
        log.Fatal("Failed to seek")
    }

    _, err = fd.Write([]byte{0})
    if err != nil {
        fmt.Println(err)
        log.Fatal("Write operation failed")
    }

    err = fd.Close()
    if err != nil {
        fmt.Println(err)
        log.Fatal("Failed to close file")
    }
}

First, you try to create the desired sparse file using os.Create(). Then, you call fd.Seek() in order to make the file bigger without adding actual data. Last, you write a byte to it using fd.Write(). As you do not have anything more to do with the file, you call fd.Close() and you are done.

Executing sparse.go generates the following output:

$ go run sparse.go 1000 test
$ go run sparse.go 1000 test
File test already exists.
exit status 1

How can you tell whether a file is a sparse file or not? You will learn in a while, but first let us create some files:

$ go run sparse.go 100000 testSparse
$ dd if=/dev/urandom bs=1 count=100000 of=noSparseDD
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.152511 s, 656 kB/s
$ dd if=/dev/urandom seek=100000 bs=1 count=0 of=sparseDD
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000159399 s, 0.0 kB/s
$ ls -l noSparseDD sparseDD testSparse
-rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 noSparseDD
-rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 sparseDD
-rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:40 testSparse

So, how can you tell if any of the three files is a sparse file or not? The -s flag of the ls(1) utility shows the number of file system blocks actually used by a file. So, the output of the ls -ls command allows you to detect if you are dealing with a sparse file or not:

$ ls -ls noSparseDD sparseDD testSparse
104 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 noSparseDD
  0 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 sparseDD
  8 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:40 testSparse

Now look at the first column of the output. The noSparseDD file, which was generated using the dd(1) utility, is not a sparse file.
The sparseDD file is a sparse file generated using the dd(1) utility. Last, the testSparse file is also a sparse file, created using sparse.go.

Mihalis Tsoukalos is a Unix administrator, programmer, DBA and mathematician who enjoys writing. He is currently writing Mastering Go. His research interests include programming languages, databases and operating systems. He holds a B.Sc in Mathematics from the University of Patras and an M.Sc in IT from University College London (UK). He has written various technical articles for Sys Admin, MacTech, C/C++ Users Journal, Linux Journal, Linux User and Developer, Linux Format and Linux Voice.