Tech Guides

Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?

Bhagyashree R
19 Oct 2018
3 min read
In the process of rebuilding its Big Data platform, Uber created an open-source Spark library named Hudi (Hadoop Upserts anD Incrementals). This library lets users perform operations such as update, insert, and delete on existing Parquet data in Hadoop. It also allows data users to incrementally pull only the changed data, which significantly improves query efficiency. It is horizontally scalable, can be used from any Spark job, and, best of all, relies only on HDFS to operate.

Why was Hudi introduced?

Uber studied its current data content, data access patterns, and user-specific requirements to identify problem areas. This research revealed the following four limitations:

Scalability limitation in HDFS

Many companies that use HDFS to scale their Big Data infrastructure face this issue. Storing large numbers of small files can affect performance significantly, as HDFS is bottlenecked by its NameNode capacity. This becomes a major issue when the data size grows beyond 50-100 petabytes.

Need for faster data delivery in Hadoop

Since Uber operates in real time, it needed to provide its services with the latest data. It was important to make data delivery much faster, as the 24-hour data latency was far too slow for many of its use cases.

No direct support for updates and deletes of existing data

Uber used snapshot-based ingestion of data, meaning a fresh copy of the source data was ingested every 24 hours. As Uber requires the latest data for its business, it needed a solution that supports update and delete operations on existing data. However, since its Big Data is stored in HDFS and Parquet, direct support for update operations on existing data was not available.

Faster ETL and modeling

ETL and modeling jobs were also snapshot-based, requiring the platform to rebuild derived tables on every run. ETL jobs also needed to become incremental to reduce data latency.

How does Hudi solve these limitations?

[Figure: Uber's Big Data platform after the incorporation of Hudi. Source: Uber]

Regardless of whether the data updates are new records added to recent date partitions or updates to older data, Hudi allows users to pass their latest checkpoint timestamp and retrieve all the records that have been updated since. This data retrieval happens without running an expensive query that scans the entire source table. Using this library, Uber has moved from snapshot-based ingestion to an incremental ingestion model. As a result, data latency was reduced from 24 hours to less than one hour. To learn about Hudi in detail, check out Uber's official announcement.
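For a sense of what this incremental pull looks like in code, here is a minimal PySpark sketch. The format name, option keys, and table path follow later open-source Hudi releases and are assumptions here; they are not Uber's 2018-era internal setup:

```python
# Hypothetical incremental pull: read only records committed after a checkpoint.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-incremental-pull").getOrCreate()

last_checkpoint = "20181019000000"  # commit time saved from the previous run

updates = (
    spark.read.format("hudi")  # short name registered by recent Hudi releases
    .option("hoodie.datasource.query.type", "incremental")
    .option("hoodie.datasource.read.begin.instanttime", last_checkpoint)
    .load("/data/trips")  # hypothetical path to the Hudi table
)
updates.show()  # only the rows changed since the checkpoint; no full-table scan
```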
Related reading:
- How can Artificial Intelligence support your Big Data architecture?
- Big data as a service (BDaaS) solutions: comparing IaaS, PaaS and SaaS
- Uber's Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop


What is the history behind C Programming and Unix?

Packt Editorial Staff
17 Oct 2019
9 min read
If you think C programming and Unix are unrelated, you are making a big mistake. Back in the 1970s and 1980s, if the Unix engineers at Bell Labs had decided to use another programming language instead of C to develop a new version of Unix, we would be talking about that language today. The relationship between the two is simple: Unix is the first operating system implemented in a high-level programming language, C, and C got its fame and power from Unix. Of course, our statement about C being a high-level programming language is not true in today's world.

This article is an excerpt from the book Extreme C by Kamran Amini. Kamran teaches you to use C's power and to apply object-oriented design principles to your procedural C code. You will gain new insight into algorithm design, functions, and structures. You'll also understand how C works with Unix, how to implement OO principles in C, and what multiprocessing is. In this article, we are going to look at the history of C programming and Unix.

Multics OS and Unix

Even before Unix, we had the Multics OS, a cooperative project launched in 1964 and led by MIT, General Electric, and Bell Labs. Multics OS was a huge success because it introduced the world to a real working and secure operating system. Multics was installed everywhere, from universities to government sites. Fast-forward to 2019, and every operating system today indirectly borrows some ideas from Multics through Unix.

In 1969, for various reasons that we will talk about shortly, some people at Bell Labs, especially the pioneers of Unix such as Ken Thompson and Dennis Ritchie, gave up on Multics, and Bell Labs subsequently quit the Multics project. But this was not the end for Bell Labs; they went on to design a simpler and more efficient operating system, which was called Unix.

It is worthwhile to compare the Multics and Unix operating systems. In the following list, you will see the similarities and differences found when comparing the two:

- Both follow the onion architecture as their internal structure. They have the same rings in their onion architecture, especially the kernel and shell rings, so programmers could write their own programs on top of the shell ring. Also, both Unix and Multics expose a list of utility programs, such as ls and pwd. (In the following sections, we will explain the various rings found in the Unix architecture.)
- Multics needed expensive resources and machines to be able to work. It was not possible to install it on ordinary commodity machines, and that was one of the main drawbacks that let Unix thrive and finally made Multics obsolete after about 30 years.
- Multics was complex by design. This was the reason behind the frustration of Bell Labs employees and, as we said earlier, the reason why they left the project. Unix, by contrast, tried to remain simple; in its first version, it was not even multitasking or multi-user!

You can read more about Unix and Multics online and follow the events that happened in that era. Both were successful projects, but Unix has been able to thrive and survive to this day. It is worth sharing that Bell Labs has been working on a new distributed operating system called Plan 9, which is based on the Unix project.
Figure 1-1: Plan 9 from Bell Labs

Suffice to say that Unix was a simplification of the ideas and innovations that Multics presented; it was not something new, and so I can stop discussing the history of Unix and Multics at this point. So far, there are no traces of C in this history, because it had not been invented yet. The first versions of Unix were written purely in assembly language; only in 1973 was Unix version 4 written using C. Now we are getting close to discussing C itself, but before that, we must talk about BCPL and B, because they were the gateway to C.

About BCPL and B

BCPL was created by Martin Richards as a programming language for writing compilers. The people at Bell Labs were introduced to the language while working on the Multics project. After quitting Multics, Bell Labs first started to write Unix in assembly language, because, back then, it was an anti-pattern to develop an operating system in a programming language other than assembly. For instance, it was considered strange that the Multics project used PL/I to develop Multics, but by doing so, the project showed that operating systems could be written successfully in a higher-level language. As a result, Multics became the main inspiration for using a language other than assembly to develop Unix.

The idea of writing operating system modules in a programming language other than assembly stayed with Ken Thompson and Dennis Ritchie at Bell Labs. They tried to use BCPL, but it turned out that they needed to modify the language to be able to use it on minicomputers such as the DEC PDP-7. These changes led to the B programming language. While we won't go too deep into the properties of B here, you can read more about it and the way it was developed at the following links:

- The B Programming Language
- The Development of the C Language

Dennis Ritchie authored the latter article himself, and it is a good explanation of the development of the C programming language that also shares valuable information about B and its characteristics.

B had its shortcomings as a system programming language. B was typeless, which meant that it was only possible to work with a word (not a byte) in each operation. This made it hard to use the language on machines with different word lengths. Over time, further modifications were made to the language, leading to the NB (New B) language, whose structures were derived from B. These structures were typeless in B, but they became typed in C. Finally, in 1973, the fourth version of Unix could be developed using this language, though it still contained a lot of assembly code. In the next section, we talk about the differences between B and C, and why C is a top-notch modern system programming language for writing an operating system.

The way to C programming and Unix

I do not think we can find anyone better than Dennis Ritchie himself to explain why C was invented after the difficulties encountered with B. In this section, we're going to list the causes that prompted Dennis Ritchie, Ken Thompson, and others to create a new programming language instead of using B for writing Unix. These were the limitations of the B programming language:

- B could only work with words in memory: every single operation had to be performed in terms of words. Back then, having a programming language that could work with bytes was a dream.
  This was because of the available hardware at the time, which addressed memory in a word-based scheme.
- B was typeless: more accurately, B was a single-type language. All variables were of the same type, the word. So, if you had a string of 20 characters (21 bytes including the null character at the end), you had to divide it up into words and store it in more than one variable. For example, if a word was 4 bytes, you would need 6 variables to store the 21 characters of the string.
- Being typeless meant that byte-oriented algorithms, such as string manipulation algorithms, could not be written efficiently in B: this was because B used memory words, not bytes, and words could not be used efficiently to manage multi-byte data types such as integers and characters.
- B didn't support floating-point operations: at the time, these operations were becoming increasingly available on new hardware, but B had no support for them.
- With the availability of machines such as the PDP-11, which could address memory on a byte basis, B's inefficiency at addressing bytes of memory became apparent: this became even clearer with B pointers, which could only address words in memory, not bytes. In other words, for a program wanting to access a specific byte or byte range in memory, extra computation had to be done to calculate the corresponding word index.

The difficulties with B, particularly its slow development and execution on the machines available at the time, drove Dennis Ritchie to develop a new language. This new language was called NB, or New B, at first, but it eventually turned into C. This newly developed language addressed the difficulties and flaws of B and became the de facto programming language for system development, instead of assembly. In less than 10 years, newer versions of Unix were written completely in C, and all newer operating systems based on Unix became tied to C and its crucial presence in the system.

As you can see, C was not born as an ordinary programming language; instead, it was designed with a complete set of requirements in mind. You may consider languages such as Java, Python, and Ruby to be higher-level languages, but they cannot be considered direct competitors, as they are different and serve different purposes. For instance, you cannot write a device driver or a kernel module with Java or Python, and those languages themselves are built on top of a layer written in C. Unlike some programming languages, C is standardized by ISO, and if a certain feature is required in the future, the standard can be modified to support it.

To summarize

In this article, we began with the relationship between Unix and C. Even in non-Unix operating systems, you see traces of a design similar to Unix systems. We also looked at the history of C and explained how Unix emerged from the Multics OS and how C was derived from the B programming language. The book Extreme C, written by Kamran Amini, will help you make the most of C's low-level control, flexibility, and high performance.

Related reading:
- Is Dark an AWS Lambda challenger?
- Microsoft mulls replacing C and C++ code with Rust, calling it a "modern safer system programming language" with great memory safety features
- Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says "yes and no"


Top 5 cybersecurity assessment tools for networking professionals

Savia Lobo
07 Jun 2018
6 min read
Security is one of the major concerns when setting up data centers in the cloud. Although most organizations deploy firewalls and managed networking components for their data centers, they still fear being attacked by intruders. As such, organizations constantly seek tools that can assist them in gauging how vulnerable their network is and how they can secure their applications.

Many confuse security assessment with penetration testing and use the terms interchangeably. However, there is a notable difference between the two. A security assessment is a process of finding the different vulnerabilities within a system and prioritizing them based on severity and business criticality. Penetration testing, on the other hand, simulates a real-life attack and maps out the paths a real attacker would take to carry it out. You can check out our article, Top 5 penetration testing tools for ethical hackers, to learn about some of the pentesting tools.

A plethora of tools exists on the market, and every tool claims to be the best. Here is our top 5 list of tools to secure your organization over the network.

Wireshark

Wireshark is one of the most popular tools for packet analysis. It is open source under the GNU General Public License. Wireshark has a user-friendly GUI and also supports a command-line interface (CLI). It is a great debugging tool for developers who wish to build network applications. It runs on multiple platforms, including Windows, Linux, Solaris, and NetBSD. The Wireshark community also hosts SharkFest, launched in 2008, for Wireshark developers and the user communities. The main aim of this conference is to support Wireshark development and to educate current and future generations of computer science and IT professionals on how to use this tool to manage, troubleshoot, diagnose, and secure traditional and modern networks. Some benefits of using this tool include:

- Live, real-time traffic analysis, with support for offline analysis as well.
- Depending on the platform, live data can be read from Ethernet, PPP/HDLC, USB, IEEE 802.11, Token Ring, and many others.
- Decryption support for several protocols, such as IPsec, ISAKMP, Kerberos, SNMPv3, SSL/TLS, WEP, and WPA/WPA2.
- Captured network traffic can be browsed via a GUI or via the TTY-mode TShark utility.
- Display filters that are among the most powerful in the industry.
- TShark, a terminal-based network protocol analyzer, can analyze packets from hosts without a UI.

Nmap

Network Mapper, popularly known as Nmap, is an open source tool for network discovery and security auditing. It is also used for tasks such as network inventory management, monitoring host or service uptime, and much more. Nmap works by using raw IP packets to find the available hosts on a network, the services they offer, the operating systems they run, the firewalls they use, and much more. Nmap is essential for quickly scanning large networks and can also be used to scan single hosts. It runs on all major operating systems and provides official binary packages for Windows, Linux, and Mac OS X. It also includes:

- Zenmap: an advanced security scanner GUI and results viewer.
- Ncat: a tool for data transfer, redirection, and debugging.
- Ndiff: a utility for comparing scan results.
- Nping: a packet generation and response analysis tool.

Nmap is traditionally a command-line tool run from a Unix shell or the Windows Command Prompt. This makes Nmap easy to script and allows easy sharing of useful commands within the user community, so experts do not have to move through different configuration panels and scattered option fields.
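Because Nmap is so scriptable, it can also be driven from other languages. Here is a small sketch using the third-party python-nmap wrapper (an assumption on my part; the article itself does not cover it), which requires both the wrapper and the nmap binary to be installed:

```python
# Hypothetical example: driving Nmap from Python via the python-nmap wrapper
# (pip install python-nmap; nmap itself must be on the PATH).
import nmap

scanner = nmap.PortScanner()

# Scan an example host's ports 22-443 with service/version detection (-sV).
scanner.scan(hosts="127.0.0.1", ports="22-443", arguments="-sV")

for host in scanner.all_hosts():
    print(host, scanner[host].state())
    for proto in scanner[host].all_protocols():
        for port, info in sorted(scanner[host][proto].items()):
            print(f"  {proto}/{port}: {info['state']} {info.get('name', '')}")
```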
Nessus

Nessus, a product of Tenable, is one of the most popular vulnerability scanners, particularly for UNIX systems. The tool is constantly updated and has more than 70,000 plugins. Nessus is available in both free and paid versions. The paid version costs around $2,190 per year, whereas the free version, Nessus Home, offers limited usage and is licensed only for home network use. Customers choose Nessus because:

- It includes simple steps for policy creation and needs just a few clicks to scan an entire corporate network.
- It offers vulnerability scanning at a low total cost of ownership (TCO).
- It enables quick and accurate scanning with a low rate of false positives.
- It has an embedded scripting language that lets users write their own plugins and understand existing ones.

QualysGuard

QualysGuard is a well-known SaaS (Software-as-a-Service) vulnerability management tool. It has a comprehensive vulnerability knowledge base that enables it to provide continuous protection against the latest worms and security threats. It proactively monitors all network access points, so security managers can spend less time researching, scanning, and fixing network vulnerabilities. This helps organizations address network vulnerabilities before they can be exploited.

QualysGuard provides a detailed technical analysis of threats via powerful, easy-to-read reports. A detailed report includes the security threat, the consequences if the vulnerability is exploited, and a recommended solution for fixing it. One can get a summary of overall security from QualysGuard's executive dashboard, which displays the number of new, active, and re-opened vulnerabilities, along with a graph of vulnerabilities by severity level. Get to know more about QualysGuard on its official website.

Core Impact

Core Impact is widely used as a comprehensive tool to assess and test security vulnerabilities within any organization. It includes a large, regularly updated database of professional exploits. It assists in cleanly exploiting one machine and then creating an encrypted tunnel through it to exploit other machines. Core Impact provides a controlled environment for mimicking real attacks, which helps you secure your network before an actual attack occurs. One interesting feature of Core Impact is that you can fully test your network, whatever its size, quickly and efficiently.

These are five popular tools that network security professionals use for assessing their networks. However, there are many others, such as Netsparker, OpenVAS, and Nikto. Every security assessment tool is unique in its own way; ultimately, it all boils down to one's own expertise and experience, and the kind of project environment the tool is used in.

Related reading:
- Top 5 penetration testing tools for ethical hackers
- Intel's Spectre variant 4 patch impacts CPU performance
- Pentest tool in focus: Metasploit


Admiring the many faces of Facial Recognition with Deep Learning

Sugandha Lahoti
07 Dec 2017
7 min read
Facial recognition technology is not new; in fact, it has been around for more than a decade. However, with the recent rise of artificial intelligence and deep learning, facial technology has reached new heights. In addition to facial detection, modern facial recognition technology can recognize faces with high accuracy and in unfavorable conditions. It can also recognize expressions and analyze faces to generate insights about an individual. Deep learning has enabled a power-packed face recognition system, all geared up to achieve widespread adoption.

How has deep learning modernised facial recognition?

Traditional facial recognition algorithms recognize images and people using distinct facial features (placement of the eyes, eye color, nose shape, and so on). However, they fail to identify people correctly under different lighting or after a slight change in appearance (beard growth, aging, or a change of pose). In order to develop facial recognition techniques for a dynamic and ever-changing face, deep learning is proving to be a game changer. Deep neural nets go beyond manual feature extraction: these AI-based neural networks rely on image pixels to analyze the features of a particular face, so they can scan faces irrespective of lighting, aging, pose, or emotion. Deep learning algorithms also learn each time they recognize or fail to recognize a face, avoiding repeat mistakes and getting better with each attempt. Deep learning algorithms can even help convert 2D images to 3D.

Facial recognition in practice: facial recognition technology in multimedia

Deep learning-enabled facial recognition technologies can be used to track audience reactions and measure different levels of emotion. Essentially, they can predict how a member of the audience will react to the rest of a film, and even determine what percentage of users will be interested in a particular movie genre. For example, Microsoft's Azure Emotion, an emotion API, detects emotions by analyzing facial expressions in image or video content over time. Caltech and Disney have collaborated to develop a neural network that can track facial expressions. Their deep learning-based Factorised Variational Autoencoders (FVAEs) analyze the facial expressions of an audience for about 10 minutes and then predict how the audience will react to the rest of the film. These techniques help in estimating whether viewers are giving the expected reactions in the right places; for example, a viewer is not expected to yawn during a comical scene. With this, Disney can also predict the earning potential of a particular movie and generate insights that may help producers create compelling movie trailers to maximize footfall.

Smart TVs are also equipped with sophisticated cameras and deep learning algorithms for facial recognition. They can recognize the face of the person watching and automatically show the channels and web applications programmed as that person's favorites. The British Broadcasting Corporation uses facial recognition technology built by CrowdEmotion. By tracking the faces of almost 4,500 audience members watching show trailers, it gauges exact customer emotions about a particular programme. This, in turn, helps it generate insights for producing successful commercials.

Biometrics in smartphones

A large number of smartphones nowadays are equipped with biometric capabilities.
Facial recognition in smartphones is used not only as a means of unlocking and authorizing, but also for making secure transactions and payments. There has also been a rise in chips with built-in deep learning capabilities, which are embedded into smartphones. With a neural net embedded inside the device, crucial facial biometric data never leaves the device or gets sent to the cloud, which improves privacy and reduces latency. Some real-world examples include Intel's Nervana Neural Network Processor, Google's TPU, Microsoft's FPGA, and Nvidia's Tesla V100. Deep learning models embedded in a smartphone can construct a mathematical model of the face, which is then stored in a database. Using this mathematical face model, smartphones can easily recognize users even as their faces age or when they are obstructed by wearable accessories.

Apple recently launched the iPhone X facial recognition system, termed FaceID. It maps thousands of points on a user's face using a projector and an infrared camera (which can operate under varied lighting conditions). This map is then passed to a bionic chip embedded in the smartphone. The chip hosts a neural network that constructs a mathematical model of the user's face, used for biometric face verification and recognition. Windows Hello is another facial recognition technology, used to unlock Windows smart devices equipped with infrared cameras. Qualcomm, a mobile technology organization, is working on a new depth-perception technology that will include an image signal processor and high-resolution 3D depth-sensing cameras for facial recognition.

Face recognition for travel

Facial recognition technologies can smooth the departure process for customers by eliminating the need for a boarding pass. A traveller is scanned by cameras installed at various checkpoints, so they don't have to produce a boarding pass at every step. Emirates is collaborating with Dubai Customs, Police, and Airports on a facial recognition solution integrated with the UAE Wallet app. Known as the Together Initiative, the project allows travellers to register and store their biometric facial data at several kiosks placed in the check-in area. This facility saves passengers from presenting their physical documents at every touchpoint.

Face recognition can also be used to detect illegal immigration: the technology compares photos of passengers taken immediately before boarding with the photos provided in their visa applications. Biometric Exit is an initiative by the US government that uses facial recognition to identify individuals leaving the country. Facial recognition technology can also be used at train stations to reduce the waiting time for buying a train ticket or passing through other security barriers. Bristol Robotics Laboratory has developed software that uses infrared cameras to identify passengers as they walk onto the train platform, so they do not need to carry tickets.

Retail and shopping

In retail, smart facial recognition technologies can enable fast checkout by keeping track of each customer as they shop across a store. This smart technology can also use machine learning and analytics to find trends in a shopper's purchasing behavior over time and devise personalized recommendations. Facial video analytics and deep learning algorithms can also identify loyal and VIP shoppers in a moving crowd, giving them a privileged VIP experience.
This gives such shoppers more reasons to come back and make repeat purchases. Facial biometrics can also accumulate rich statistics about the demographics (age, gender, shopping history) of an individual. Analyzing these statistics can generate insights that help organizations develop their products and marketing strategies. FindFace is one such platform that uses sophisticated deep learning technologies to generate meaningful data about shoppers. Its facial recognition system can verify faces with almost 99% accuracy. It can also route shopper data to a salesperson for personalized assistance. Facial recognition technology can also be used to make payment transactions secure simply by analyzing a person's face: Alibaba has set up a Smile to Pay face recognition system in KFC restaurants, which allows customers to make secure payments by merely scanning their faces.

Facial recognition has emerged as a hot topic of interest and is poised to grow. On the flip side, organizations deploying such technology should incorporate privacy policies as a standard measure, since data collected by facial recognition software can be misused for targeting customers with ads or for other illegal purposes. They should implement a methodical and systematic approach to using facial recognition for the benefit of their customers. This will not only help businesses generate a new source of revenue, but will also usher in a new era of judicial automation.


IoT Forensics: Security in an always connected world where things talk

Vijin Boricha
01 May 2018
3 min read
Connected physical devices, home automation appliances, and wearable devices are all part of the Internet of Things (IoT). All of these have two major things in common: seamless connectivity and massive data transfer. With these come plenty of opportunities for massive data breaches and related cyber security threats.

The motive of digital forensics is to identify, collect, analyse, and present digital evidence collected from various mediums in a cybercrime incident. The multiplication of IoT devices and the increased number of cyber security incidents have given birth to IoT forensics, a branch of digital forensics that deals with IoT-related cybercrimes and includes the investigation of connected devices, sensors, and the data stored on all possible platforms. If you look at the bigger picture, IoT forensics is far more complex, multifaceted, and multidisciplinary in its approach than traditional forensics.

Given the versatility of IoT devices, there is no single method of IoT forensics that can be applied broadly, so identifying valuable sources of evidence is a major challenge. The entire investigation depends on the nature of the connected or smart device in place. For example, evidence could be collected from fixed home automation sensors, moving automobile sensors, wearable devices, or data stored in the cloud.

Compared to standard digital forensic techniques, IoT forensics presents multiple challenges, depending on the versatility and complexity of the devices involved. The following are some challenges one may face in an investigation:

- Variance among IoT devices
- Proprietary hardware and software
- Data spread across multiple devices and platforms
- Data that can be updated, modified, or lost
- Differing jurisdictions when data is stored in the cloud or in a different geography

As such, IoT forensics requires a multifaceted approach in which evidence can be collected from various sources. We can categorize sources of evidence into three broad groups:

- Smart devices and sensors: gadgets present at the crime scene (smartwatches, home automation appliances, weather control devices, and more)
- Hardware and software: the communication link between smart devices and the external world (computers, mobile devices, IPS, and firewalls)
- External resources: areas outside the network under investigation (the cloud, social networks, ISPs, and mobile network providers)

Once evidence is successfully collected from an IoT device, no matter the file system, operating system, or platform it is based on, it should be logged and monitored. The main reason for this is that IoT device data is largely stored in the cloud, owing to the cloud's scalability and accessibility, and there is a high possibility that data in the cloud can be altered, which would cause an investigation to fail. Cloud forensics can certainly play an important role here, but strengthening cyber security best practices should be the primary motive.

With ever-evolving IoT devices, there will always be a need for unique methods and techniques to break through an investigation. Cybercrime keeps evolving and getting bolder by the day, and forensics experts will have to develop the skill sets needed to deal with the variety and complexity of IoT devices to keep up with this evolution. No matter the challenges, there is always a solution to complex problems: there will always be a need for unique, intelligent, and adaptable techniques to investigate IoT-related crimes, and an even greater need for people with these capabilities.
To learn more about IoT security, you can get your hands on a few of our books: IoT Penetration Testing Cookbook and Practical Internet of Things Security.

Related reading:
- Why Metadata is so important for IoT
- Why the Industrial Internet of Things (IIoT) needs Architects
- 5 reasons to choose AWS IoT Core for your next IoT project


Here's how you can handle the bias-variance trade-off in your ML models

Savia Lobo
22 Jan 2018
8 min read
Many organizations rely on machine learning techniques in their day-to-day workflows to cut down the time required to do a job. These techniques are robust because they undergo various tests to ensure they make correct predictions on any data fed into them. During this phase, certain errors are also generated, which can lead to an inconsistent ML model. The two common errors we are going to look at in this article are bias and variance, and we will see how a trade-off can be achieved between the two in order to build a successful ML model.

Let's first have a look at what creates these kinds of errors. Machine learning techniques, or more precisely supervised learning techniques, involve training, often the most important stage in the ML workflow. The machine learning model is trained using training data. How is this training data prepared? By using a dataset for which the output of the algorithm is known. During the training stage, the algorithm analyzes the training data and captures the patterns it finds in an inferred function. This inferred function, derived from analysis of the training dataset, is the model that will be used to map new examples.

An ideal model generated from this training data should generalize well: it should learn from the training data and correctly predict or classify data in any new problem instance. In general, the more complex the model, the better it classifies the training data. However, if the model is too complex, it will pick up random features, i.e. noise, in the training data; this is the case of overfitting, and the model is said to overfit. On the other hand, if the model is not complex enough and misses important dynamics present in the data, it is a case of underfitting. Both overfitting and underfitting are, fundamentally, errors in ML models or algorithms. Moreover, it is generally impossible to minimize both of these errors at the same time, which leads to the condition known as the bias-variance trade-off. Before getting into how to achieve the trade-off, let's first understand how bias and variance errors occur.

The bias and variance errors

Let's understand each error with the help of an example. Suppose you have three training datasets, say T1, T2, and T3, and you pass these datasets through a supervised learning algorithm. The algorithm generates a different model from each training dataset, say M1, M2, and M3. Now say you have a new input A, and you apply each model to this new input. Here, two types of error can occur:

- If the output generated by each model on input A is different (B1, B2, B3), the algorithm is said to have a high variance error.
- If the output from all three models is the same (B) but incorrect, the algorithm is said to have a high bias error.

High variance means that the algorithm produces a model that is too specific to the training data, a typical case of overfitting. High bias means that the algorithm has not picked up the defining patterns in the dataset, a case of underfitting. Some examples of high-bias ML algorithms are linear regression, linear discriminant analysis, and logistic regression. Examples of high-variance ML algorithms are decision trees, k-nearest neighbors, and support vector machines.

How to achieve a bias-variance trade-off?
For any supervised algorithm, having a high bias error usually means having a low variance error, and vice versa. More specifically, parametric or linear ML algorithms often have high bias but low variance, whereas non-parametric or non-linear algorithms tend toward the reverse. The goal of any ML model is to obtain both low variance and low bias, which is often difficult due to the parametrization of machine learning algorithms. So how can we achieve a trade-off between the two? The following are some ways to achieve the bias-variance trade-off:

Minimizing the total error: The optimum complexity for any model is the level at which the increase in bias is equivalent to the reduction in variance. Practically, there is no analytical method to find this optimal level. One should use an accurate measure of prediction error, explore different levels of model complexity, and then choose the complexity level that minimizes the overall error. Generally, resampling-based measures such as cross-validation should be preferred over theoretical measures such as Akaike's Information Criterion. (Source: http://scott.fortmann-roe.com/docs/BiasVariance.html. Note that the irreducible error is noise that cannot be reduced by algorithms, though it can be reduced with better data cleaning.)

Using bagging and resampling techniques: These can be used to reduce the variance in model predictions. In bagging (bootstrap aggregating), several replicas of the original dataset are created using random selection with replacement. One modeling algorithm that makes use of bagging is Random Forests. In the Random Forest algorithm, the bias of the full model is equivalent to the bias of a single decision tree, which itself has high variance. By creating many such trees, in effect a "forest", and then averaging them, the variance of the final model can be greatly reduced relative to that of a single tree.

Adjusting tuning values in algorithms: Both the k-nearest neighbors and support vector machine (SVM) algorithms have low bias and high variance, but the trade-off in both cases can be changed. In the k-nearest neighbors algorithm, the value of k can be increased, which increases the number of neighbors that contribute to the prediction and, in turn, increases the bias of the model. In the SVM algorithm, the trade-off can be changed by increasing the C parameter, which influences the number of violations of the margin allowed in the training data; this increases the bias but decreases the variance.

Using a proper machine learning workflow: This means ensuring proper training by:

- Maintaining separate training and test sets: split the dataset into training (50%), testing (25%), and validation (25%) sets. The training set is used to build the model, the test set to check the model's accuracy, and the validation set to evaluate the performance of the model's hyperparameters.
- Optimizing your model with systematic cross-validation: a cross-validation technique is a must for fine-tuning model parameters, especially for unknown instances. In supervised machine learning, validation or cross-validation is used to find the predictive accuracy of various models of varying complexity, in order to find the best one. For instance, in the k-fold cross-validation method, the dataset is divided into k folds; for each fold, the algorithm is trained on the other k-1 folds, using the remaining fold (also called the holdout fold) as the test set.
  This process is repeated until each fold has acted as the test set. The average of the k recorded errors is called the cross-validation error, and it can serve as the performance metric for the model.
- Trying out appropriate algorithms: before relying on any model, we need to ensure that the model suits our assumptions. The No Free Lunch theorem is relevant here: it states that no single model works best for every problem. For instance, under the No Free Lunch theorem, a random search performs, on average, as well as any heuristic optimization algorithm.
- Tuning the hyperparameters that can give an impactful performance: any machine learning model requires hyperparameters such as constraints, weights, or learning rates to generalize to different data patterns. Tuning these hyperparameters is necessary so that the model can optimally solve machine learning problems. Grid search and randomized search are two methods commonly used for hyperparameter tuning.

So, we have listed some of the ways to achieve a trade-off between the two errors. Bias and variance are related to each other: if you increase one, the other decreases, and vice versa. With a trade-off, there is an optimal balance of bias and variance that gives us a model that is neither underfit nor overfit. And finally, the ultimate goal of any supervised machine learning algorithm is to isolate the signal from the dataset while eliminating the noise.
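To make the k-nearest neighbors point above concrete, here is a minimal scikit-learn sketch (the dataset and the values of k are arbitrary choices for illustration, not from the original article). It shows how cross-validated accuracy shifts as k, and with it the bias-variance balance, changes:

```python
# A minimal sketch: probing the bias-variance trade-off by varying k in k-NN.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

# Small k: low bias, high variance. Large k: higher bias, lower variance.
for k in (1, 5, 15, 51):
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=10)
    print(f"k={k:>2}  mean CV accuracy={scores.mean():.3f}  std={scores.std():.3f}")
```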

The best backend tools in web development

Sugandha Lahoti
06 Jun 2018
5 min read
If you're a backend developer, it's easy to feel overwhelmed by the range of backend development tools available. It goes without saying that you should use what works for you, but sometimes working that out is not so easy. With this in mind, this year's Skill Up report offers a useful insight into some of the most popular backend tools being used today. Let's take a look at the tools that came out on top; that should help you decide what to use, or perhaps what to learn next. Read the Skill Up report in full: sign up to our weekly newsletter and download the PDF for free.

Node.js

More than 50% of respondents said they prefer Node.js, the popular server-side JavaScript framework. Node.js is a JavaScript runtime built on the V8 JavaScript engine. It extends JavaScript, a front-end language, to do more than just create interactive websites, and it uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. The latest stable release, Node 10, will be the next candidate in line for Long Term Support (LTS) in October 2018. Node.js 10.0 comes with plenty of new features, such as the OpenSSL 1.1.0 security toolkit, an upgraded npm, N-API, and much more. Get started with learning Node.js with the following books:

- Learning Node.js Development
- Learn Node.js by Building 6 Projects
- RESTful Web API Design with Node.js 10 - Third Edition

ASP.NET Core

The next most popular choice was ASP.NET Core, with over 25% of developers approving it as their backend framework. ASP.NET Core is an open-source, cross-platform framework for building backends, web apps and services, and IoT apps. According to the Skill Up survey, it was also one of the most popular frameworks used by developers. It provides a cloud-ready, environment-based configuration system, and it integrates seamlessly with popular client-side frameworks and libraries, including Angular, React, and Bootstrap. Get started with ASP.NET Core by reading:

- Learning ASP.NET Core 2.0
- Mastering ASP.NET Core 2.0
- ASP.NET Core 2 High Performance - Second Edition

Express.js

Developers and tech pros also like to work with Express.js, which ranked No. 3 on our list. Express.js is a pre-built Node.js framework that helps developers build faster and smarter websites and web apps; it essentially extends Node.js to build complete web apps. It is the perfect framework to learn for developers who are fluent in Node.js but want to move beyond server-side technologies to creating full apps. Express is lightweight and comes with extra built-in web application features and the Express API to support the already robust, feature-packed Node.js platform. Express is not limited to Node.js, either: it works seamlessly with other modules and offers HTTP utilities and middleware for creating APIs. It can help developers master single-page and multi-page websites, as well as complex web apps. You can go through Projects in ExpressJS [Video], a complete course to learn professional web development using Express.js.

Laravel

Next was Laravel, a prominent member of a new generation of web frameworks. It is one of the most popular PHP frameworks, and it is free and open source.
It features:

- A simple, fast routing engine
- A powerful dependency injection container
- Multiple back-ends for session and cache storage
- Database-agnostic schema migrations
- Robust background job processing
- Real-time event broadcasting

The latest stable release, Laravel 5, is a substantial upgrade with a lot of new toys, while retaining the features that made Laravel wildly successful. It comes with plenty of architectural as well as design-based changes. Start building with Laravel with these videos:

- Beginning Laravel [Video]
- Laravel Foundations: Basics to Every App [Video]

Java EE

The fifth most popular choice of backend tool is Java EE. The Enterprise Java standard, or Java EE, is a collection of technologies and APIs for the Java platform designed to support enterprise applications: applications classified as large-scale, distributed, transactional, and highly available, designed to support mission-critical business requirements. Applications written to comply with the Java EE specification do not tie developers to a specific vendor; instead, they can be deployed to any Java EE-compliant application server. Such a server implements the Java EE platform APIs and provides the standard Java EE services. The latest stable release, Java EE 8, brings with it a load of features, mainly targeting newer architectures such as microservices, modernized security APIs, and cloud deployments. Our best picks for learning Java EE:

- Java EE 8 Application Development
- Architecting Modern Java EE Applications
- Java EE 8 High Performance

The other backend tools that were among the top picks by developers included:

- Spring: a programming and configuration model for building modern Java-based enterprise applications on any kind of deployment platform.
- Django: a powerful Python web framework for creating RESTful web services. It reduces the amount of trivial code, which simplifies the creation of web applications and results in faster development.
- Flask: a framework for building web servers in Python. It is a micro framework, meaning it is not a full-stack web application development framework; it just gives developers the very basics to get a web server running (a minimal example follows this list).
- Firebase: Google's mobile platform that helps developers run mobile backend code without managing servers and develop high-quality apps.
- Ruby on Rails: one of the oldest backend technologies. A certain percentage of people still prefer Ruby on Rails for their backend code. Rails is a flexible and IDE-friendly framework with easy functions and manipulations, backed by the powerful Ruby language.
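As a concrete illustration of the "very basics to get a web server running" that Flask provides, here is a minimal sketch (the route and port are arbitrary choices, not from the survey):

```python
# A minimal Flask server: one route returning JSON.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # A trivial endpoint; a real app would register more routes.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=5000)  # development server only
```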
The entire Skill Up survey report can be read on the Packt website; it details what developers think about the changing tech landscape and the parameters driving that change. This survey report is launched at the start of the Skill Up campaign, during which every eBook and video will be available for $10. Go grab your free content now!


Top 7 Python programming books you need to read

Aaron Lazar
22 Jun 2018
9 min read
Python needs no introduction. It's one of the top-rated and fastest-growing programming languages, mainly because of its simplicity and its wide applicability to a range of problems. Developers like yourself, beginners and experts alike, are looking to skill up with Python. So I thought I would put together a list of the Python programming books I think are the best for learning Python, whether you're a beginner or an experienced Python developer.

Books for beginning to learn Python

Learning Python, by Fabrizio Romano

What the book is about

This book explores the essentials of programming, covering data structures and showing you how to manipulate them. It talks about control flow in a program and teaches you how to write clean and reusable code. It reveals different programming paradigms and shows you how to optimize performance as well as debug your code effectively. Close to 450 pages long, the content spans twelve well-thought-out chapters. You'll find interesting content on functions, memory management, and GUI app development with PyQt.

Why Learn from Fabrizio

Fabrizio has been creating software for over a decade. He has a master's degree in computer science engineering from the University of Padova and is also a certified Scrum master. He has delivered talks at the last two editions of EuroPython and at Skillsmatter in London.

The Approach Taken

The book is very easy to follow and takes an example-driven approach. By the end of the book, you will be able to build a website in Python. Whether you're new to Python or to programming as a whole, you'll have no trouble following the examples. Download Learning Python FOR FREE.

Learning Python, by Mark Lutz

What the book is about

This is one of the top books on Python. A true bestseller, it is a perfect fit both for beginners to programming and for developers who already have experience with another language. Over 1,500 pages long, covering 41 chapters, the book is a true shelf-breaker! Although the length might concern some, the content is clear and easy to read, with great examples wherever necessary. You'll find in-depth content ranging from Python syntax to functions, modules, OOP, and more.

Why Learn from Mark

Mark is the author of several Python books and has been using Python since 1992. He is a world-renowned Python trainer and has taught close to 260 virtual and on-site Python classes to roughly 4,000 students.

The Approach Taken

The book is a great read, complete with helpful illustrations, quizzes, and exercises. It's filled with examples and also covers some advanced language features that have recently become more common in modern Python. You can find the book here, on Amazon.

Intermediate Python books

Modern Python Cookbook, by Steven Lott

What the book is about

Modern Python Cookbook is a great book for those already well versed in Python programming. The book aims to help developers solve the most common problems they face during app development. Spanning 824 pages, it is divided into 13 chapters that cover solutions to problems related to data structures, OOP, functional programming, and statistical programming.

Why Learn from Steven

Steven has over four decades of programming experience, more than a decade of it with Python. He has written several books on Python and has created some tutorial videos as well.
Steven's writing style is one to envy: he manages to grab the reader's attention while imparting vast knowledge through his books. He's also a very enthusiastic speaker, especially when it comes to sharing his knowledge.

The Approach Taken

The book takes a recipe-based approach, presenting some of the most common, as well as uncommon, problems Python developers face and following them up with a quick and helpful solution. The book describes not just the how and the what, but the why of things. It will leave you able to create applications with flexible logging, powerful configuration, command-line options, automated unit tests, and good documentation. Find Modern Python Cookbook on the Packt store.

Python Crash Course, by Eric Matthes

What the book is about

This one is a quick-paced introduction to Python that assumes you have knowledge of some other programming language. It actually sits somewhere between beginner and intermediate, but I've placed it under intermediate because of its fast-paced, no-fluff-just-stuff approach; it will be difficult to follow if you're completely new to programming. The book is 560 pages long, spread across 20 chapters. It covers topics ranging from Python libraries like NumPy and matplotlib to building 2D games and even working with data and visualisations. All in all, it's a complete package!

Why Learn from Eric

Eric is a high school math and science teacher. He has over a decade's worth of programming experience and is a teaching enthusiast, always willing to share his knowledge. He also teaches an 'Introduction to Programming' class every fall.

The Approach Taken

The book has a great selection of projects that cater to a wide range of readers planning to use Python to solve their programming problems. It thoughtfully covers both Python 2 and 3. You can find the book here on Amazon.

Fluent Python, by Luciano Ramalho

What the book is about

The book is an intermediate guide that assumes you have already dipped your feet into the snake pit. It takes you through Python's core language features and libraries, showing you how to make your code shorter, faster, and more readable at the same time. The book flows over almost 800 pages and 21 chapters. You'll indulge in topics such as functions, objects, metaprogramming, and more.

Why Learn from Luciano

Luciano Ramalho is a member of the Python Software Foundation and co-founder of Garoa Hacker Clube, the first hackerspace in Brazil. He has been working with Python since 1998. He has taught Python web development in the Brazilian media, banking, and government sectors, and speaks at PyCon US, OSCON, PythonBrazil, and FISL.

The Approach Taken

The book is mainly based on the language features that are either unique to Python or not found in many other popular languages. It covers the core language and some of its libraries. It has a very comprehensive approach and touches on nearly every point of the language that is Pythonic, describing not just the how and the what, but the why. You can find the book here, on Amazon.

Advanced Python books

The Hitchhiker's Guide to Python, by Kenneth Reitz & Tanya Schlusser

What the book is about

This isn't a book that teaches Python. Rather, it's a book that shows experienced developers where, when, and how to use Python to solve problems. The book contains a list of best practices and shows how to apply them in real-world Python projects, focusing on sound advice for writing good Python code.
It is spread over 11 chapters and 338 pages. You'll find interesting topics like choosing an IDE, how to manage code, and so on.

Why Learn from Kenneth and Tanya

Kenneth Reitz is a member of the Python Software Foundation. Until recently, he was the product owner of Python at Heroku. He is a known speaker at several conferences. Tanya is an independent consultant with over two decades of experience in half a dozen languages. She is an active member of the Chicago Python User's Group and Chicago's PyLadies, and has also delivered data science training to students and industry analysts.

The Approach Taken

The book is highly opinionated and talks about the best tools and techniques for building Python apps. It is a book about best practices, covering how to write and ship high-quality code, and it is very insightful. The book also covers Python libraries and frameworks focused on capabilities such as data persistence, data manipulation, web, CLI, and performance. You can get the book here on Amazon.

Secret Recipes of the Python Ninja, by Cody Jackson

What the book is about

Now this is a one-of-a-kind book. Again, this one is not going to teach you Python programming; rather, it shows you tips and tricks that you might not have known you could do with Python. In close to 400 pages, the book unearths secrets related to the implementation of the standard library by looking at how modules actually work. You'll find interesting topics on the likes of the CPython interpreter, which is a treasure trove of secret hacks that not many programmers are aware of, the PyPy project, and the PEPs of the latest versions, where you can discover some interesting hacks.

Why Learn from Cody

Cody Jackson is a military veteran and the founder of Socius Consulting, an IT and business management consulting company. He has been involved in the tech industry since 1994. He is a self-taught Python programmer and the author of the book series Learning to Program Using Python. He is always bubbling with ideas about improving the way he codes, and he has delivered that brilliantly through this book.

The Approach Taken

Now this one is highly opinionated too: the idea is to learn skills from a Python ninja. The book takes a recipe-based approach, putting a problem before you and then showing you how to wield Python to solve it. Whether you're new to Python or an expert, you're sure to find something interesting in the book. The recipes are easy to follow and waste no time on lengthy explanations. You can find the book here on Amazon and here on the Packt website.

So there you have it: my top 7 books on Python programming. There are loads of books available on Amazon, and quite a few from Packt that you can check out, but the above is a list of must-haves for anyone developing in Python.

Read Next
- What are data professionals planning to learn this year? Python, deep learning, yes. But also...
- Python web development: Django vs Flask in 2018
- Why functional programming in Python matters: Interview with best selling author, Steven Lott
- What the Python Software Foundation & Jetbrains 2017 Python Developer Survey had to reveal

What is Seaborn and why should you use it for data visualization?

Erik Kappelman
30 Jan 2018
6 min read
Seaborn is a Python library created for enhanced data visualization. It's a very timely and relevant tool for data professionals working today, precisely because effective data visualization, and communication in general, is an essential skill. Being able to bridge the gap between data and insight is hugely valuable, and Seaborn fits comfortably in the toolchain of anyone interested in doing just that. There is, of course, a huge range of data visualization libraries out there, but if you're wondering why you should use Seaborn, put simply, it brings some serious power to the table that other tools can't quite match. Follow this Seaborn tutorial and you'll find out what makes Seaborn such a good data visualization library.

How to get started with Seaborn

To get started, I recommend becoming familiar with Anaconda, if you are not already. I find that using Anaconda and its various tools makes coding in Python, especially package and library management, a whole lot easier. So, let's load the packages we are going to need. (I am assuming you have already downloaded and set up Seaborn.)

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

Now that we have our packages on board, let's make a basic plot. The function below creates a series of sine functions and then graphs all of them; take a look:

np.random.seed(sum(map(ord, "aesthetics")))

def sinplot(flip=1):
    x = np.linspace(0, 14, 100)
    for i in range(1, 7):
        plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)

sinplot()
plt.savefig("sin.png")

It's a pretty basic set of sine curves, and while it looks professional and clean, it doesn't really tell us much about what makes Seaborn unique.

So what makes Seaborn different? What are the benefits of Seaborn?

Let's take a look at what Seaborn refers to as 'joint plots'. These plots pair a scatter plot with the distribution of each variable on the margins of the axes. The original article never shows how the DataFrame df used below is created, so we build one here, a sample of two correlated variables, to make the snippet runnable:

# Sample data: two correlated, somewhat non-normal variables named x and y.
mean, cov = [0, 1], [(1, .5), (.5, 1)]
df = pd.DataFrame(np.random.multivariate_normal(mean, cov, 500),
                  columns=["x", "y"])

join1 = sns.jointplot(x="x", y="y", data=df)
join1.savefig("join1.png")

join2 = sns.jointplot(x="x", y="y", data=df, kind="kde")
join2.savefig("join2.png")
plt.clf()

The first plot isn't unique to Seaborn; I've created very similar plots in R. Here, however, it took one single line of code, while in R you're looking at five or six lines at the very least, and you're going to have to use the default plotting package, because I've never been able to figure out marginal plots in ggplot2. Graphs like this show us a lot about the data we are examining. We can simultaneously see that the two sets of data are correlated and that they are both somewhat skewed and non-normal, although the y variable could probably pass as normal. If marginal plots were this easy in R, I would use them a whole lot more, because they are informative.

The second plot, however, is different. In fact, I hadn't really seen anything like it before I learned about Seaborn. It uses a kernel density plot instead of a scatter plot, and the distributions are estimated smoothly instead of using histograms. This can be a helpful graph if you are specifically interested in densities and correlations as well as the distributions of the data, which is beneficial in various spatial analysis applications as well as traditional statistical fields.
The third joint plot includes a regression line in the scatter plot as well as an assessment of the fit of the linear model used. The code to produce it is below:

tips = sns.load_dataset('tips')
sns.jointplot(x="total_bill", y="tip", data=tips, kind="reg")
plt.savefig('join3.png')

The error field around the line helps you better visualize the accuracy of the linear regression, and the distribution of each variable is available in the margins. Normally, it would take three separate graphs to convey all of this information; with a single line of code, Seaborn creates a graph that covers everything relevant to this linear regression.

Another somewhat novel graph type available in Seaborn is the violin plot. Again, we can create this complex graph with the simple code shown below:

iris = sns.load_dataset("iris")
sns.violinplot(x=iris.species, y=iris.sepal_length, data=iris)
plt.savefig("violin.png")

This is data from the famous Iris data set. The violin plot is essentially an amalgamation of a box plot and a kernel density estimate of a distribution. Both box plots and graphs of univariate distributions are very helpful when first analyzing a dataset, and again, Seaborn takes a lot of the work out of this process by making it easy to produce single graphs that would normally take multiple graphs in other analysis tools.

The final chart is really useful: it summarizes the results of a univariate logistic regression graphically. This is a tough thing to display, and until I came across Seaborn I had never seen an example I would consider good. The chart is created with the code below:

tips['big_tip'] = tips['tip'] / tips['total_bill'] >= 0.2
sns.lmplot(x="total_bill", y="big_tip", data=tips, logistic=True, y_jitter=.03)
plt.savefig("tiplogit.png")

The chart displays the results of regressing a binary indicator, whether a tip was 'big' (20 percent of the bill or more), against the total cost of the meal:

The chart illustrates very clearly that people are not tipping as much when their meals are more expensive, at least in terms of proportions. Summarizing the results of logistic regressions is always challenging, but as you can see, thanks to Seaborn, you can do a pretty good job with just one line of code.

Seaborn is simply a really great library that's worth your time exploring; I hope this post has convinced you and inspired you to go and try it for yourself if you haven't already. There is always room for improvement when it comes to data visualization, and Seaborn might be the improvement you need. I know I'll be using it.

What is the Reactive Manifesto?

Packt Editorial Staff
17 Apr 2018
3 min read
The Reactive Manifesto is a document that defines the core principles of reactive programming. It was first released in 2013 by a group of developers led by Jonas Boner (you can find him on Twitter: @jboner). Jonas explained the reasons behind the manifesto in a blog post:

"Application requirements have changed dramatically in recent years. Both from a runtime environment perspective, with multicore and cloud computing architectures nowadays being the norm, as well as from a user requirements perspective, with tighter SLAs in terms of lower latency, higher throughput, availability and close to linear scalability. This all demands writing applications in a fundamentally different way than what most programmers are used to."

A number of high-profile programmers signed the Reactive Manifesto, including Erik Meijer, Martin Odersky, Greg Young, Martin Thompson, and Roland Kuhn. A second, updated version was released in 2014; to date, more than 22,000 people have signed it.

The Reactive Manifesto underpins the principles of reactive programming

You can think of it as the map to the treasure of reactive programming, or the bible for programmers of the reactive programming religion. Everyone starting with reactive programming should read the manifesto to understand what reactive programming is all about and what its principles are.

The 4 principles of the Reactive Manifesto

Reactive systems must be responsive. The system should respond in a timely manner. Responsive systems focus on providing rapid and consistent response times, so they deliver a consistent quality of service.

Reactive systems must be resilient. Should the system face any failure, it should stay responsive. Resilience is achieved by replication, containment, isolation, and delegation. Failures are contained within each component, isolating components from each other, so that when a failure occurs in one component, it will not affect the other components or the system as a whole.

Reactive systems must be elastic. Reactive systems can react to changes and stay responsive under varying workload, achieving elasticity in a cost-effective way on commodity hardware and software platforms.

Reactive systems must be message driven. To uphold the resilience principle, reactive systems establish boundaries between components by relying on asynchronous message passing.

Those are the core principles behind reactive programming put forward by the manifesto. But there's something else that supports the thinking behind reactive programming: the standard specification on Reactive Streams.

Reactive Streams standard specifications

Everything in the reactive world is accomplished with the help of Reactive Streams. In 2013, Netflix, Pivotal, and Lightbend (previously known as Typesafe) felt the need for a standard specification for Reactive Streams, as reactive programming was beginning to spread and more frameworks for reactive programming were starting to emerge. The initiative they started resulted in the Reactive Streams standard specification, which is now being implemented across various frameworks and platforms. You can take a look at the Reactive Streams standard specification here.

This post has been adapted from Reactive Programming in Kotlin. Find it on the Packt store here.

Why is data science important?

Richard Gall
24 Apr 2018
3 min read
Is data science important? It's a term that's talked about a lot but often misunderstood. Because it's a buzzword, it's easy to dismiss; but data science is important. Behind the term lies a very specific set of activities and skills that businesses can leverage to their advantage. Data science allows businesses to use the data at their disposal, whether that's customer data, financial data, or otherwise, in an intelligent manner. Its results should be a key driver of growth.

However, although it's not wrong to see data science as a real game changer for business, that doesn't mean it's easy to do well. In fact, it's pretty easy to do data science badly. A number of reports suggest that a large proportion of analytics projects fail to deliver results, which means a huge number of organizations are doing data science wrong. Key to these failures is a misunderstanding of how to properly utilize data science. You see it so many times: buzzwords like data science are often like hammers that make all your problems look like nails. Not properly understanding the business problems you're trying to solve is where things go wrong.

What is data science?

But what is data science exactly? Quite simply, it's about using data to solve problems, and the scope of these problems is huge. Here are a few ways data science can be used:

Improving customer retention by finding out what the triggers of churn might be
Improving internal product development processes by looking at points where faults are most likely to happen
Targeting customers with the right sales messages at the right time
Informing product development by looking at how people use your products
Analyzing customer sentiment on social media
Financial modeling

As you can see, data science is a field that can impact every department, from marketing to product management to finance. Data science isn't just a buzzword; it's a shift in mindset about how we work.

Data science is about solving business problems

To anyone still asking whether data science is important, the answer is quite straightforward: it's important because it solves business problems. Once you, and management, recognise that fact, you're on the right track. Too often businesses want machine learning and big data projects without thinking about what they're really trying to do. If you want your data scientists to be successful, present them with the problems and let them create the solutions. They won't want to be told to simply build a machine learning project; it's crucial to know what the end goal is. Peter Drucker once said "in God we trust… everyone else must bring data". But data science didn't really exist then; if it had, he could have put it much more simply: trust your data scientists.

What is coding as a service?

Antonio Cucciniello
02 Oct 2017
4 min read
What is coding as a service? To answer that, you have to start with artificial intelligence. Put simply, coding as a service (CaaS) is using AI to build websites: using machines to write code so you don't have to.

The challenges facing engineers and programmers today

In order to understand coding as a service, you must first understand where we are today. Typically, we have programs made by software developers or engineers, usually created to automate a task or make tasks easier; think of things that speed up processing or automate a repetitive job. This is, and has been, extremely beneficial. The productivity gained from automated applications and tasks allows us, as humans and workers, to spend more time on creating important things and coming up with more groundbreaking ideas. This is where artificial intelligence and machine learning come into the picture.

Artificial intelligence and coding as a service

Recently, with the gains in computing power that have come with time and breakthroughs, computers have become more and more powerful, allowing AI applications to arise in common practice. Today, there are applications that can detect objects in images and videos in real time, translate speech to text, and even determine the emotions in text sent by someone else. For an example of an AI application in use today, consider the Amazon Alexa or Echo devices. You talk to one, it understands your speech, and it then completes a task based on what you said. Understanding speech was previously a capability only expected of humans; now, given that it is "trained" to understand you, Alexa is capable of it too.

How coding as a service will automate boring tasks

Today, we have programmers who write applications for many uses and make things such as websites for businesses. As development becomes more and more automated, programmers' efficiency will increase and the need for additional manpower will shrink. Coding as a service will result in even fewer programmers being needed: it mixes the efficiencies we already have with AI to perform programming tasks for a user. Using natural language processing to understand exactly what the user or customer means, it will be able to make edits to websites and applications on the fly. Not only that: combined with machine learning, a CaaS system can draw on past data to come up with recommendations and make edits on its own. Efficiency-wise, it is cheaper to run a computer than to pay a human, especially when a computer will work around the clock for you and never get tired. Imagine paying an extremely low price, lower than you might already pay to get a website made, to have your website built or your small application created.

Conclusion

Every new technology comes with pros and cons. Overall, the number of software developers may decrease; or, as a developer, this may free up your time from menial tasks and enable you to further specialize and broaden your horizons. AI programs such as coding as a service could do plenty of the underlying work, leaving the heavier lifting to human programmers. You just need to use the positives to your advantage!

What is a micro frontend?

Amit Kothari
08 Oct 2017
6 min read
The microservice architecture enables us to write scalable and agile backend systems. Writing independent, self-contained services gives us the flexibility to quickly add a new feature or easily change an existing one without affecting the whole system, and independently deployable services allow us to scale our services as demand requires. In this post, we will show you how a similar approach can be used for frontend applications. You will learn about micro frontend architecture, its benefits, and a strategy for breaking down a monolith web app into micro frontends.

What is micro frontend architecture?

Micro frontend architecture is an approach to developing a web application as a composition of small frontend apps. Instead of writing a large monolith frontend application, the application is broken down into domain-specific micro frontends, which are self-contained and can be developed and deployed independently.

Advantages of using micro frontends

Micro frontends bring the concept and benefits of microservices to frontend applications. Each micro frontend is self-contained, which allows faster delivery, as multiple teams can work on different parts of the application without affecting each other. This also gives each team the freedom to choose different technology as required. And since micro frontends are highly decoupled, a change to one has a lower impact on the rest of the application, so each can be enhanced and deployed independently.

Design considerations

Let's say we want to build an online shopping website using micro frontend architecture. Instead of developing the site as one large application, we can split the website into micro frontends. For example, the pages that display lists of products and product details can be one micro frontend, and the pages that show a user's order history can be another. The user interface is made up of multiple micro frontends, but we do not want our users to feel that different pages are part of different apps. Here are some of the practices we can use to decompose a frontend application into smaller micro frontends without compromising user experience.

Single responsibility

The first thing to consider is how to split an application into smaller apps so that each app can be developed and deployed independently. When teams are working on different micro frontends, we want the apps to be highly decoupled, so that a change in one app does not affect the others. This can be achieved by building domain-specific micro frontends, each with a single responsibility and a well-defined bounded context. Just like our code, we want our micro frontends to have high cohesion and low coupling: all related code close together and less dependent on other modules. Taking the example of our online shopping site again, we want all the product-related UI components in the product micro frontend and all the order-related functionality in the order micro frontend.

Suppose we have a user dashboard screen where users can see information from different domains: their pending orders and also products on special. Instead of creating a dashboard micro frontend, it is recommended to make the pending-orders UI component part of the order micro frontend and the product-related components part of the product micro frontend. This allows us to split our system vertically, with domain-specific frontend and backend services.

Common interface for communication and data exchange

For micro frontends to work harmoniously as a single web application, they need a common and consistent way to communicate with each other. Even if they are highly independent, they still need to talk to each other. One common approach is to have an application that works as an integration layer: a container app that renders the different micro frontends and also facilitates communication between them. For example, in our online shopping website, once a user submits an order through the shopping cart micro frontend, we want to take the user to their order list screen. Since the order and shopping cart micro frontends are highly decoupled and do not know about each other, we can use the container app as the orchestration layer: on receiving an order-submission event from the shopping cart micro frontend, the container app navigates the user to the order micro frontend. The container app can also be used to handle cross-cutting concerns like user session management, analytics, and so on.

This approach works well with existing monolith frontends, too: the existing monolith application can work as the container, and any new feature can be independently developed as a micro frontend and integrated into the existing app. Existing functionality can also be extracted and rewritten as micro frontends as required.

Consistent look and feel

Although our user interface is divided into multiple micro frontends, we still want our users to feel as if they are interacting with a single application. We want our apps to have a consistent look and feel, and also the ability to make UI changes easily across multiple apps; for example, we should be able to change the font or the primary colors across multiple micro frontends. This can be done by sharing CSS and assets like images, fonts, and icons. We also want the apps to use the same UI components; for example, if we have a date picker on multiple screens, we want all the date pickers to look the same. This can be achieved by creating a common library of UI components that the micro frontends share. Using shared assets and a UI component library allows us to make changes in one place instead of having to update multiple micro frontends.

In this post, we discussed micro frontends, their benefits, and things to consider before migrating to micro frontend architecture. To deliver faster, we want the ability to build, test, and deploy features independently, and this can be achieved by using micro frontends and microservices. Implementing micro frontends may present its own challenges, and there will be technical hurdles to overcome, but the benefits outweigh the complexity. If you are using micro frontend architecture, please share your experience with us.

About the author

Amit Kothari is a full stack software developer based in Melbourne, Australia. He has 10+ years of experience in designing and implementing software, mainly in Java/JEE. His recent experience is in building web applications using JavaScript frameworks like React and AngularJS, and backend microservices/REST APIs in Java. He is passionate about lean software development and continuous delivery.

Simple Player Health

Gareth Fouche
22 Dec 2016
8 min read
In this post, we'll create a simple script to manage player health, then use that script and Unity triggers to create health pickups and environmental danger (lava) in a level.

Before we get started on our health scripts, let's create a prototype 3D environment to test them in. Create a new project with a new scene and save it as "LavaWorld". Begin by adding two textures to the project: a tileable rock texture and a tileable lava texture. If you don't have those assets already, there are many sources of free textures online.

Create two new Materials named "LavaMaterial" and "RockMaterial" to match the new textures by right-clicking in the Project pane and selecting Create > Material. Drag the rock texture into the Albedo slot of RockMaterial. Drag the lava texture into the Emission slot of LavaMaterial to create a glowing lava effect. Now our materials are ready to use.

In the Hierarchy view, use Create > 3D Object > Cube to create a 3D cube in the scene. Drag RockMaterial into the Materials > Element 0 slot of the cube's Mesh Renderer to change the cube from the default blue material to your rock texture. Use the scale controls to stretch and flatten the cube; we now have a simple rock platform. Copy and paste the platform a few times, moving the new copies away to form small islands. Create a few more copies of the platform, scale them so that they're long and thin, and position them as bridges between the islands. For example:

Now, create a new cube named "LavaVolume" and assign it LavaMaterial. Scale it so that it is large enough to encompass all the islands but shallow (scale the y-axis height down), and move it so that it sits lower than the islands, so they appear to float in a lava field. In order to make it possible for a player to fall into the lava, check the Box Collider's "Is Trigger" property on LavaVolume. The Box Collider will now act as a trigger volume: instead of physically blocking objects that come into contact with it, it notifies our scripts when an object moves through the collider volume.

This presents a problem, as objects will now fall through the lava into infinite space! To deal with this, make another copy of the rock platform and scale/position it so that it's similar in dimension to the lava, wide but flat, positioned just below the lava volume, so it forms a rock floor under the lava. To make your scene a little nicer, repeat the process to create rock walls around the lava, hiding where the lava volume ends. A few point lights (Create > Light > Point Light) scattered around the islands will also add interesting visual variety.

Now it's time to add a player! First, import the "Standard Assets" package from the Unity Asset Store (if you don't know how to do this, google the Unity Asset Store to learn about it). In the newly imported Standard Assets project folder, go to Characters > FirstPersonCharacter > Prefabs. There you will find the FPSController prefab. Drag it into your scene, rename it to "Player" and position it on one of the islands, like so:

Delete the old main camera that you had in your scene; the FPSController has its own camera. If you run the project, you should be able to walk around your scene, from island to island. You can also walk in the lava, but it doesn't harm you, yet. To make the lava an actual threat, we start by giving our player the ability to track its health. In the Project pane, right-click and select Create > C# Script.
Name the script "Player" and drag it onto the Player object in the Hierarchy view. Open the script in Visual Studio and add the health-tracking code (the original article showed this script as a screenshot; a reconstructed sketch of it, together with the Lava script, appears at the end of this step).

The script exposes a variable, maxHealth, which determines how much health the Player starts with and the maximum health they can ever have. It exposes a function to alter the Player's current health, and it uses a reference to a Text object to display the Player's current health on screen.

Back in Unity, you can now see the Max Health property exposed in the Inspector; set Max Health to 100. There is also a field for Current Health Label, but we don't currently have a GUI. To remedy this, in the Hierarchy view, select Create > UI > Canvas and then Create > UI > Text. This will create the UI root and a text label on it. Change the label's text to "Health:", the font size to 20, and the colour to white. Drag it to the bottom left corner of the screen (and make sure the Rect Transform anchor is set to bottom left). Duplicate that text label, offset it right a little from the previous label, and change the text to "0". Rename this new label "CurrentHealthLabel". The GUI should now look like this:

In the Hierarchy view, drag CurrentHealthLabel into your Player script's Current Health Label property. If we run now, we'll have a display in the bottom corner of the screen showing our Player's health of 100. By itself, this isn't particularly exciting. Time to add lava!

Create a new C# script as before and call it "Lava". Add the Lava script to the LavaVolume scene object, open it in Visual Studio, and add the lava damage code (again, reconstructed below).

Note the OnTriggerEnter and OnTriggerExit functions. Because LavaVolume, the object we've added this script to, has a collider with Is Trigger checked, whenever another object enters LavaVolume's box collider, OnTriggerEnter will be called with the colliding object's Collider passed as a parameter. Similarly, when an object leaves LavaVolume's collider volume, OnTriggerExit will be called. Taking advantage of this functionality, we keep a list of all players who enter the lava. Then, during the Update call, if any players are in the lava, we apply damage to them periodically. damageTickTime determines the interval between each time we apply damage (a "tick"), and damagePerTick determines how much damage we apply per tick. Both properties are exposed in the Inspector by the script, so they're customizable. Set the values to Damage Per Tick = 5 and Damage Tick Time = 0.1.

Now, if we run the game, stepping in the lava hurts! But it's a bit of an anti-climax, since nothing actually happens when our health gets down to 0. Let's make things a little more fatal. First, use a paint program to create a "You Died!" screen at 1920 x 1080 resolution and add that image to the project. Under the Import Settings, set the Texture Type to Sprite (2D and UI). Then, from the Hierarchy, select Create > UI > Image, make the size 1920 x 1080, and set the Source Image property to your new player-died sprite image.
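The original tutorial presented the Player and Lava scripts as screenshots, which have not survived in this version of the article. The sketches below are reconstructions based on the surrounding description, not the author's exact code: only maxHealth, currentHealthLabel, damagePerTick, and damageTickTime are named in the prose, so member names like ChangeHealth and UpdateLabel are assumptions.

A minimal Player.cs sketch:

using UnityEngine;
using UnityEngine.UI;

public class Player : MonoBehaviour
{
    public int maxHealth = 100;        // starting and maximum health, exposed in the Inspector
    public Text currentHealthLabel;    // drag CurrentHealthLabel here

    private int currentHealth;

    void Start()
    {
        currentHealth = maxHealth;
        UpdateLabel();
    }

    // Alter the Player's health: positive amounts heal, negative amounts damage.
    public void ChangeHealth(int amount)
    {
        currentHealth = Mathf.Clamp(currentHealth + amount, 0, maxHealth);
        UpdateLabel();
    }

    void UpdateLabel()
    {
        currentHealthLabel.text = currentHealth.ToString();
    }
}

A minimal Lava.cs sketch:

using System.Collections.Generic;
using UnityEngine;

public class Lava : MonoBehaviour
{
    public int damagePerTick = 5;       // damage applied on each tick
    public float damageTickTime = 0.1f; // seconds between ticks

    private List<Player> playersInLava = new List<Player>();
    private float tickTimer;

    // Track players entering and leaving the trigger volume.
    void OnTriggerEnter(Collider other)
    {
        Player player = other.GetComponent<Player>();
        if (player != null && !playersInLava.Contains(player))
            playersInLava.Add(player);
    }

    void OnTriggerExit(Collider other)
    {
        Player player = other.GetComponent<Player>();
        if (player != null)
            playersInLava.Remove(player);
    }

    void Update()
    {
        if (playersInLava.Count == 0)
            return;

        // Apply damage once per tick to every player standing in the lava.
        tickTimer += Time.deltaTime;
        if (tickTimer >= damageTickTime)
        {
            tickTimer = 0f;
            foreach (Player player in playersInLava)
                player.ChangeHealth(-damagePerTick);  // negative amount = damage
        }
    }
}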
Return to the Hierarchy view, and drag the player died screen into the exposed Dead Screen property on the Player script. Now, if you run the game, stepping in lava will “kill” the player if they stay in it long enough. Better! But it’s only fair to add a way for the Player to recover health, too. To do so, use a paint program to create a new “medkit” texture. Following the same procedure as used to create the LavaVolume, create a new cube called HealthKit, give it a Material that uses this new medkit texture, and enable “Is Trigger” on the cube’s BoxCollider. Create a new C# script called “Health Pickup”, add it to the cube, and insert the following code: Simpler than the Lava script, this adds health to a Player that collides with it, before disabling itself. Scale the HealthKit object until it looks about the right size for a health pack; then copy and paste a few of the packs across the islands. Now, when you play, if you manage to extricate yourself from the lava after falling in, you can collect a health pack to restore your health! And that brings us to the end of the Simple Player Health tutorial. We have a deadly lava level with health pickups, just waiting for enemy characters to be added. About the author Gareth Fouche is a game developer. He can be found on Github at @GarethNN

How to create a strong data science project portfolio that lands you a job

Aaron Lazar
13 Feb 2018
8 min read
Okay, you're probably here because you've got just a few months to graduate and the projects section of your resume is blank. Or you're just an inquisitive little nerd scraping the WWW for ways to crack that dream job. Either way, you're not alone: ten thousand others are trying to build a great data science portfolio to land them a good job. Look no further, we'll try our best to help you make a portfolio that catches the recruiter's eye! David "Trent" Salazar's portfolio is a great example of a wholesome one, and Sajal Sharma's is a good example of how to display a data science portfolio on a platform like Github. Companies are on the lookout for employees who can add value to the business. To showcase this on your resume effectively, the first step is to understand the different ways in which you can add value.

4 things you need to show in a data science portfolio

Data science can be broken down into 4 broad areas:

Obtaining insights from data and presenting them to the business leaders
Designing an application that directly benefits the customer
Designing an application or system that directly benefits other teams in the organisation
Sharing expertise on data science with other teams

You'll need to ensure that your portfolio portrays all, or at least most, of the above in order to make it through a job selection easily. So let's see what we can do to make a great portfolio.

Demonstrate that you know what you're doing

The idea is to show the recruiter that you're capable of performing the critical aspects of data science: importing a data set, cleaning the data, extracting useful information from it using various techniques, and finally visualising the findings and communicating them. Apart from the technical skills, a few soft skills are expected as well: for instance, the ability to communicate and collaborate with others, and the ability to reason and take the initiative when required. If your project communicates these things, you're in!

Stay focused and be specific

You might know a lot, but rather than throwing all your skills, projects, and knowledge in the employer's face, it's always better to focus on doing one thing and doing it right. Just as you'd keep things short and sweet on your resume, you can apply the same principle to your portfolio. Always remember, the interviewer is looking for specific skills.

Research the data science job market

Find 5-6 jobs that interest you, perhaps on Linkedin or Indeed, and go through their descriptions thoroughly. Understand what kind of skills the employer is looking for; for example, it could be classification, machine learning, statistical modeling, or regression. Pick up the tools that are required for the job, for example Python, R, TensorFlow, Hadoop, or whatever might get the job done. If you don't know how to use a tool, you'll want to skill up as you work your way through the projects. Also identify the kind of data they would like you to work on, like text or numerical data. Once you have this information at hand, start building your projects around these skills and tools.

Be a problem solver

Working on projects that are not actual problems you're solving won't stand out in your portfolio. The closer your projects are to the real world, the easier it will be for the recruiter to decide to choose you. This will also showcase your analytical skills and how you've applied data science to solve a prevailing problem.
Put at least 3 diverse projects in your data science portfolio

A nice way to create a portfolio is to list 3 good projects that are diverse in nature. Here are some interesting project types to get you started:

Data cleaning and wrangling

Data cleaning is one of the most critical tasks a data scientist performs. By taking a group of diverse data sets, consolidating them, and making sense of them, you give the recruiter confidence that you know how to prep data for analysis. For example, you can take Twitter or Whatsapp data and clean it for analysis. The process is pretty simple: first find a "dirty" data set, then spot an interesting angle to approach the data from, clean it up, perform analysis on it, and finally present your findings.

Data storytelling

Storytelling showcases not only your ability to draw insight from raw data, but also how well you can convey those insights to others and persuade them. For example, you can use data from the bus system in your country and gather insights to identify which stops incur the most delays; these could then be fixed by changing the route. Make sure your analysis is descriptive and your code and logic can be followed. Here's what you do: first find a good dataset, then explore the data and spot correlations in it, and visualize it before you start writing your narrative. Tackle the data from various angles and pick the most interesting one; if it's interesting to you, it will most probably be interesting to anyone reviewing it. Break down and explain each step in detail, each code snippet, as if you were describing it to a friend. The idea is to teach the reviewer something new as you run through the analysis.

End-to-end data science

If you're more into machine learning or algorithm writing, you should do an end-to-end data science project. The project should be capable of taking in data, processing it, and finally learning from it, every step of the way. For example, you can pick up fuel pricing data for your city, or maybe stock market data; the data needs to be dynamic and updated regularly. The trick for this one is to keep the code simple so that it's easy to set up and run. First identify a good topic, understanding that you will not be working with a single dataset; rather, you will need to import and parse all the data and bring it under a single dataset yourself. Next, get the training and test data ready to make predictions. Document your code and other findings, and you're good to go.

Prove you have the data science skill set

If you want to get that job, you've got to have the appropriate tools to get the job done. Here's a list of some of the most popular tools worth skilling up on:

Data science languages

There are a number of key languages in data science that are essential. It might seem obvious, but making sure they're on your resume and demonstrated in your portfolio is incredibly important. Include things like:

Python
R
Java
Scala
SQL

Big data tools

If you're applying for big data roles, demonstrating your experience with the key technologies is a must. It not only proves you have the skills, but also shows that you have an awareness of what tools can be used to build a big data solution or project. You'll need:

Hadoop
Spark
Hive

Machine learning frameworks

With machine learning so in demand, if you can prove you've used a number of machine learning frameworks, you've already done a lot to impress.
Remember, many organizations won't actually know as much about machine learning as you think; in fact, they might even be hiring you with a view to building out this capability. Remember to include:

TensorFlow
Caffe2
Keras
PyTorch

Data visualisation tools

Data visualization is a crucial component of any data science project. If you can visualize and communicate data effectively, you're immediately demonstrating that you can collaborate with others and make your insights accessible and useful to the wider business. Include tools like these in your resume and portfolio:

D3.js
Excel charts
Tableau
ggplot2

So there you have it: you now know what to do to build a decent data science portfolio. It's also really worth attending competitions and challenges. They will not only keep your skills up to date and well oiled, but also give you a broader picture of what people are actually working on and with what tools they're able to solve problems.