
Tech Guides

852 Articles

4 key benefits of using Firebase for mobile app development

Guest Contributor
19 Oct 2018
6 min read
A powerful backend solution is essential for building sophisticated mobile apps. In recent years, Firebase has risen to prominence as a power-packed Backend-as-a-Service (BaaS), thanks to its wide-ranging features and performance-boosting elements. Since its acquisition by Google in 2014, several of its features have received a further performance boost. These features have made Firebase a popular backend solution for app developers and other emerging IT sectors. Let us look at its 4 key benefits for cross-platform mobile app development.

Unleashing the power of Google Analytics

Google Analytics for Firebase is a completely free solution with unconstrained reporting on many aspects. The reporting feature allows you to evaluate client behavior and report on broken links, user interactions, and all other aspects of the user experience and user interface. This reporting helps developers make informed decisions while optimizing the UI and the app's performance.

- The unmatched scale of reporting: Firebase Analytics allows access to unlimited reports on as many as 500 different events. Developers can also create custom events for reporting as their needs require.
- Robust audience segmentation: Firebase Analytics also allows segmenting the app audience on different parameters. The integrated console supports segmentation on the basis of device information, custom events, and user characteristics.

Crash reporting to fix bugs

Firebase also helps to address the performance issues of an app by fixing bugs right from its backend solution. It is equipped with a robust crash reporting feature, which delivers intricate and detailed bug and crash reports to address all the coding errors in an app. The reporting feature is capable of grouping issues into different categories as per the characteristics of the problem. Here are some of the attributes of this reporting feature:

- Monitoring errors: It is capable of monitoring fatal errors for iOS apps and both fatal and non-fatal errors for Android apps. Generally, reports are prioritized by the impact such errors have on the user experience.
- Required data collection to fix errors: The reports also list all the details concerning the device in use, performance shortfalls, and the user scenarios around the erroneous events. According to the contributing factors and other similarities, the issues are grouped into different categories.
- Email alerts: It also allows sending email alerts as and when such issues or problems are detected.
- Configurable error reporting: Error reporting can also be configured remotely to control who can access the reports and the list of events that occurred before an error.
- It is free: Crash and bug reporting is free with Firebase. You don't need to pay a penny to access this feature.

Synchronizing data with the Realtime Database

With Firebase, you can sync offline and online data through a NoSQL database. This makes the application data available in both the offline and online states of the app and boosts collaboration on the application data in real time. Here are some of its benefits (a minimal Python sketch of reading and writing this database follows at the end of this article):

- Real-time: Unlike traditional HTTP requests that update data only on demand, the Firebase Realtime Database syncs data with every change, reflecting that change in real time across every device in use.
- Offline: As the Firebase Realtime Database SDK saves your data to local disk, you can always access the data offline. As and when connectivity is back, the changes are synced with the present state of the server.
- Access from multiple devices: The Firebase Realtime Database allows accessing application data from multiple devices and interfaces, including mobile devices and the web.
- Splitting and scaling your data: Thanks to the Firebase Realtime Database, you can split your data across multiple databases within the same project and set rules for each database instance.

Firebase is feature rich for futuristic app development

In addition to the above, Firebase offers a host of rich features required for building sophisticated, feature-rich mobile apps. Let us have a look at some of the key features that have made it a reliable platform for cross-platform development:

- Hosting: The hosting feature allows developers to update their content on the Content Delivery Network (CDN) during production. Firebase offers full hosting support with a custom domain, a global CDN, and an automatically provisioned SSL certificate.
- Authentication: The Firebase backend service offers a powerful authentication feature. It comes equipped with simple SDKs and easy-to-use libraries to integrate authentication with any mobile app.
- Storage: Firebase Storage is powered by Google Cloud Storage and allows users to easily download media files and visual content. This feature is also helpful for making use of user-generated content.
- Cloud Messaging: With Cloud Messaging, a mobile app can easily send messages to users and engage in real-time communication.
- Remote Configuration: This feature allows developers to make certain changes in the app remotely. Thanks to this, the changes are reflected in the existing version, and the user does not need to download the latest updated version.
- Test Lab: With Test Lab, developers can easily test the app on all the devices listed in the Google data center. It can even do the testing without requiring any test code for the respective app.
- Notifications: This feature gives developers a console to manage and send user-focused custom notifications to users.
- App Indexing: This feature allows developers to index the app in Google Search and achieve higher search ranks in app marketplaces like the Play Store and the App Store.
- Dynamic Links: Firebase also equips the app to create dynamic links, or smart URLs, to present the respective app across all digital platforms including social media, mobile apps, the web, email, and other channels.

All the above-mentioned benefits and useful features that empower mobile app developers to create dynamic user experiences have helped Firebase achieve unprecedented popularity among developers worldwide. No wonder that, in a short time span, it has become a very popular backend solution for so many successful cross-platform mobile apps.

Some exemplary use cases of Firebase

Here we have picked two use cases of Firebase: one for a relatively new and successful app, and one for a leading app in its niche.

Fabulous

Fabulous is a unique app that trains users to drop bad habits and build good ones to ensure health and wellbeing. By customizing the onboarding process through Firebase, the app managed to double its retention rate. The app could deliver a custom user experience for different groups of users as per their preferences.

Onefootball

Onefootball, a leading mobile soccer app, experienced a more than 5% increase in user session time thanks to Firebase. The new backend solution powered by Firebase helped the app engage its audience more efficiently than ever before. The custom content created by this popular app enjoys better traction with users thanks to the higher engagement.

Author Bio: Juned Ahmed works as an IT consultant at IndianAppDevelopers, a leading mobile app development company which offers to hire app developers in India for mobile solutions. He has more than 10 years of experience in developing and implementing marketing strategies.

Read next:
- How to integrate Firebase on Android/iOS applications natively
- Build powerful progressive web apps with Firebase
- How to integrate Firebase with NativeScript for cross-platform app development
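To make the Realtime Database section above a little more concrete, here is a minimal sketch using the firebase_admin Python SDK. The offline synchronization described above is handled by the mobile client SDKs (Android, iOS, web); this server-side snippet only illustrates reading and writing the same database from Python, and the credential path and database URL are placeholders.

```python
import firebase_admin
from firebase_admin import credentials, db

# Placeholder service-account file and database URL for your Firebase project.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project-id.firebaseio.com"
})

# Write, update, and read a node; connected client SDKs observe these
# changes in real time.
scores = db.reference("scores/alice")
scores.set({"points": 10, "level": 1})
scores.update({"points": 12})
print(scores.get())  # e.g. {'points': 12, 'level': 1}
```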


How to assess your tech team’s skills

Hari Vignesh
20 Sep 2017
5 min read
For those of us who manage others, effectiveness is largely driven by the skills and motivation of those who report to us. So whether you are a CIO, an IT division leader, or a front-line manager, you need to spend the time to assess the current skills, abilities, and career aspirations of your staff and help them put in place the plans that can support their development. And yet, you need to do this in a way that still supports the overall near-term objectives of the organization, and properly balances the need for professional development against the day-to-day needs of the organization.

There are certifications for competence in many different products. Having such certifications is very valuable and gives one a sense of the skill set of an individual. But how do you assess someone as a journeyman programmer, tester, or systems engineer, or perhaps as a master of their chosen discipline? That kind of evaluation is often overly subjective and places too much emphasis on "book knowledge" rather than the practical application of that knowledge to develop the new, innovative solutions or approaches that the organization truly needs. In other words, how do you assess the knowledge, skills, and abilities (KSAs) of a person to perform their job role? This assessment problem is two-fold:

- For a specific IT discipline, you need a comprehensive framework by which to understand the types of skills and knowledge you should have at each level, from novice to expert.
- For each discipline, you also need a way to accurately assess the current ability level of your technical staff members to create the baseline from which you can develop their skills toward higher levels of proficiency.

This not only helps the individual develop a realistic and achievable plan, but also gives you insights into where you have significant skills gaps in your organization.

Skills Framework for the Information Age (SFIA)

In 2003, a non-profit organization was founded called the Skills Framework for the Information Age (SFIA), which provides a comprehensive framework of skills in IT technologies and disciplines based on a broad industry "body of knowledge." SFIA currently covers 97 professional skills required by professionals in roles involving information and communications technology. These skills are organized into six categories, as follows:

- Strategy and Architecture
- Change and Transformation
- Development and Implementation
- Delivery and Operation
- Skills and Quality
- Relationships and Engagement

Each of the skills is described at one or more of SFIA's seven levels of attainment, from novice to expert. Find out more about this framework here. Although the framework helps define your needed competencies, it doesn't tell you whether your workers have the skills that match them.

Building your own effective framework

The point of accurately assessing the current ability level of your technical staff members is to create the baseline from which you can develop their skills to higher levels of proficiency. So the best way to progress is to identify the goals of the team or organization and then build your own framework. So, how do we proceed?

List the roles within your team

To start with, you need a list of the role types within your team. This isn't the same thing as having a listing of every position on your org chart. You want to simplify the process by grouping together like roles.

List the skills needed for each role

Now that you've created a list of role types, the next step is to list the skills needed for each of these roles. What do the skills look like? They could be behavioral, like "Listens to customer needs carefully to determine requirements," or they could be more technical, like this sample list of engineering skills:

- Writing quality code
- Design skills
- Writing optimal code
- Programming patterns

Once you have this list, it's a valuable resource in itself.

Create a survey

It's ideal if you can find out all of the relevant skills a person has, not just those for their current role. To do this, create a survey that makes it easy for your people to respond. This essentially means you need to keep it short and not ask the same question twice. To achieve this, the survey should group together each of the major role types. Use the list you created in step 2 as your starting point. Let's say you have an engineering group within your organization. It may have a number of different role types within it, but there are probably common skills across many of them. For example, many of the role types may require people to be skilled at "Programming." Rather than listing such skills more than once under each relevant role type, list them once under a common group heading.

Survey your workforce

With the survey designed, you are now ready to ask your workforce to respond to it. The size of your team and the number of roles will determine how you go about doing this. It's good practice to communicate with survey participants to explain why you are asking for their response and what will happen with the information.

Analyze the data

You can now reap the rewards of your skills audit process (a minimal sketch of this kind of analysis follows at the end of this article). You can analyze:

- The skill gaps in specific roles
- Skill gaps within teams or organization groups
- Potential successors for certain roles
- The number of people who have critical skills
- Future skill requirements

This assessment not only helps employees create realistic and achievable individual development plans, but also gives you insight into where you have significant skills gaps in your team or in your organization.

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.
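To illustrate the "Analyze the data" step above, here is a minimal sketch, assuming the survey results have been exported to two hypothetical CSV files: one with columns name, role, skill, and self_rating (1 to 5), and one with the target rating each role should reach per skill. The file names and column names are illustrative, not part of any real tool.

```python
import pandas as pd

# Hypothetical survey export: one row per person per skill.
responses = pd.read_csv("skills_survey.csv")       # name, role, skill, self_rating
targets = pd.read_csv("role_skill_targets.csv")    # role, skill, target_rating

# Average self-assessed rating per role and skill.
current = (responses
           .groupby(["role", "skill"])["self_rating"]
           .mean()
           .reset_index())

# Compare against the target level defined for each role.
gaps = current.merge(targets, on=["role", "skill"])
gaps["gap"] = gaps["target_rating"] - gaps["self_rating"]

# Largest skill gaps first: candidates for training or hiring.
print(gaps.sort_values("gap", ascending=False).head(10))

# People who already hold a critical skill at a high level (4 or above).
experts = responses[(responses["skill"] == "Programming") &
                    (responses["self_rating"] >= 4)]
print(experts["name"].unique())
```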


Data science on Windows is a big no

Aaron Lazar
13 Apr 2018
5 min read
I read a post from a LinkedIn connection about a week ago. It read: "The first step in becoming a data scientist: forget about Windows." Even if you're not a programmer, that's pretty controversial. The first nerdy thought I had was, that's not true. The first step to data science is not choosing an OS, it's statistics! Anyway, I kept wondering what's wrong with doing data science on Windows, exactly. Why is the legacy product (Windows), created by one of the leaders in data science and artificial intelligence, not suitable to support the very thing it is driving?

As a publishing professional who has worked with a lot of authors, one of the main issues I've faced while collaborating with them is the compatibility of platforms, especially when it comes to sharing documents, working with code, and so on. At least 80 percent of the authors I've worked with have been using something other than Windows. They are extremely particular about the platform they're working on, and have usually chosen Linux. I don't know if they consider it a punishable offence, but I've been using Windows since I was 12, even though I have played around with Macs and machines running Linux/Unix. I've never been affectionately drawn towards those machines as much as my beloved laptop that is happily rolling on Windows 10 Pro.

Why is data science on Windows a bad idea?

When Microsoft created Windows, its main idea was to make the platform as user friendly as possible, and it focused every ounce of energy on that, and voila! It created one of the simplest operating systems that one could ever use. Microsoft wanted to make computing easy for everyone: teachers, housewives, kids, business professionals. However, it did not cater to the developer community as much as to its end users. Now, that's not to say that you can't use a Windows machine to code. Of course, you can run Python or R programs. But you're likely to face issues with compatibility and speed. If you're choosing to use the command line, and something goes wrong, it's a real pain to debug on Windows. Also, if you're doing cluster computing with other Linux or Mac machines, it's better to have one of them yourself. Many would agree that Windows is more likely to suffer a BSoD (Blue Screen of Death) than a Mac or a Unix machine, messing up an algorithm that's been running for a long time.

[Check out our most-read post: 15 useful Python libraries to make your data science tasks easier.]

Is it all that bad?

Well, not really. In fact, if you need to pump in a couple more gigs of RAM, you can't think of doing that on a Mac. Although you might still encounter some weird issues like those mentioned above on a Windows PC, you can always Google a workaround. Don't beat yourself up if you own a PC. You can always set up a dual boot, running a Linux distribution in parallel; you might want to check out Vagrant for this. Also, you'll be surprised that if you're a Mac owner and you plan some heavy-duty deep learning on a GPU, you can't really run CUDA without messing things up: CUDA will only work well with NVIDIA's GPUs on a PC. In Joey Tribbiani's words, "This is a moo point." To me, data science is really OS agnostic. For instance, now with Docker, you don't really have to worry much about which OS you're running, so from that perspective, data science on Windows may work for you.

Still feel for Windows?

Well, there are obviously drawbacks. You'll still keep living with the fear of isolation that Microsoft tries to create in the minds of customers. Moreover, you'll be faced with "slowdom," if that's a word, what with all the background processes eating away your computing power! You'll be defying everything that modern computing is defined by: KISS, open source, agile, and so on. Another important thing you need to keep in mind is that when you're working with so much data, you really don't want to get hacked! Last but not least, if you're intending to dabble with AI and blockchain, your best bet is not going to be Windows.

All said and done, if you're a budding data scientist who's looking to buy some new equipment, you might want to consider a few things before you invest in your machine. Think about what you'll be working with and what tools you might want to use. If you want to play safe, it's best to go with a Linux system. If you have the money and want to flaunt it, while still enjoying support from most tools, think about a Mac. And finally, if you're brave and are not worried about having two OSes running on your system, go in for a Windows PC. So the next time someone decides to gift you a Windows PC, don't politely decline right away. Grab it and swiftly install a Linux distro! Happy coding! :)

*The thoughts put forward in this article are completely my personal opinion and may differ from person to person. Go ahead and share your thoughts in the comments section below.


A Brief History of Python

Sam Wood
14 Oct 2015
4 min read
From data to web development, Python has come to stand as one of the most important and most popular open source programming languages in use today. But whilst some see it as almost a new kid on the block, Python is actually older than Java, R, and JavaScript. So what are the origins of our favorite open source language?

In the beginning...

Python's origins lie way back in distant December 1989, making it the same age as Taylor Swift. Created by Guido van Rossum (the Python community's Benevolent Dictator for Life) as a hobby project to work on during the week around Christmas, Python is famously named not after the constrictor snake but rather the British comedy troupe Monty Python's Flying Circus. (We're quite thankful for this at Packt - we have no idea what we'd put on the cover if we had to pick for 'Monty' programming books!) Python was born out of the ABC language, a terminated project of the Dutch CWI research institute that van Rossum worked for, and the Amoeba distributed operating system. When Amoeba needed a scripting language, van Rossum created Python. One of the principal strengths of this new language was how easy it was to extend, along with its support for multiple platforms - a vital innovation in the days of the first personal computers. Capable of communicating with libraries and differing file formats, Python quickly took off.

Computer Programming for Everybody

Python grew throughout the early nineties, acquiring lambda, reduce(), filter() and map() functional programming tools (supposedly courtesy of a Lisp hacker who missed them and thus submitted working patches), keyword arguments, and built-in support for complex numbers. During this period, Python also served a central role in van Rossum's Computer Programming for Everybody (CP4E) initiative. CP4E's goal was to make programming more accessible to the 'layman' and encourage a basic level of coding literacy as essential knowledge alongside English literacy and math skills. Because of Python's focus on clean syntax and accessibility, it played a key part in this. Although CP4E is now inactive, learning Python remains easy, and Python is one of the most common languages that new would-be programmers are pointed at to learn.

Going Open with 2.0

As Python grew in the nineties, one of the key issues in uptake was its continued dependence on van Rossum. 'What if Guido was hit by a bus?' Python users lamented, 'or if he dropped dead of exhaustion or if he is rubbed out by a member of a rival language following?' In 2000, Python 2.0 was released by the BeOpen PythonLabs team. The ethos of 2.0 was very much more open and community-oriented in its development process, with much greater transparency. Python moved its repository to SourceForge, granting write access to its CVS tree to more people and providing an easy way to report bugs and submit patches. As the release notes stated, 'the most important change in Python 2.0 may not be to the code at all, but to how Python is developed'. Python 2.7 is still used today - and will be supported until 2020. But the word from development is clear - there will be no 2.8. Instead, support remains focused upon 2.7's usurping younger brother - Python 3.

The Rise of Python 3

In 2008, Python 3 was released on an almost-unthinkable premise - a complete overhaul of the language, with no backwards compatibility. The decision was controversial, and born in part of the desire to clean house on Python. There was a great emphasis on removing duplicative constructs and modules, to ensure that in Python 3 there was one - and only one - obvious way of doing things. Despite the introduction of tools such as '2to3' that could quickly identify what would need to be changed in Python 2 code to make it work in Python 3, many users stuck with their classic codebases. Even today, there is no assumption that Python programmers will be working with Python 3. Despite flame wars raging across the Python community, Python 3's future ascendancy was something of an inevitability. Python 2 remains a supported language (for now). But as much as it may still be the default choice of Python, Python 3 is the language's future.

The Future

Python's userbase is vast and growing - it's not going away any time soon. Utilized by the likes of Nokia, Google, and even NASA for its easy syntax, it looks to have a bright future ahead of it, supported by a huge community of OS developers. Its support for multiple programming paradigms, including object-oriented Python programming, functional Python programming, and parallel programming models, makes it a highly adaptive choice - and its uptake keeps growing.


How to create a strong data science project portfolio that lands you a job

Aaron Lazar
13 Feb 2018
8 min read
Okay, you're probably here because you've got just a few months to graduate and the projects section of your resume is blank. Or you're just an inquisitive little nerd scraping the WWW for ways to crack that dream job. Either way, you're not alone, and there are ten thousand others trying to build a great data science portfolio to land them a good job. Look no further, we'll try our best to help you make a portfolio that catches the recruiter's eye! David "Trent" Salazar's portfolio is a great example of a wholesome one, and Sajal Sharma's is a good example of how one can display a data science portfolio on a platform like GitHub. Companies are on the lookout for employees who can add value to the business. To showcase this on your resume effectively, the first step is to understand the different ways in which you can add value.

4 things you need to show in a data science portfolio

Data science can be broken down into 4 broad areas:

- Obtaining insights from data and presenting them to the business leaders
- Designing an application that directly benefits the customer
- Designing an application or system that directly benefits other teams in the organisation
- Sharing expertise on data science with other teams

You'll need to ensure that your portfolio portrays all, or at least most, of the above in order to easily make it through a job selection. So let's see what we can do to make a great portfolio.

Demonstrate that you know what you're doing

The idea is to show the recruiter that you're capable of performing the critical aspects of data science, i.e. import a data set, clean the data, extract useful information from the data using various techniques, and finally visualise the findings and communicate them. Apart from the technical skills, there are a few soft skills that are expected as well: for instance, the ability to communicate and collaborate with others, and the ability to reason and take the initiative when required. If your project is actually able to communicate these things, you're in!

Stay focused and be specific

You might know a lot, but rather than throwing all your skills, projects and knowledge in the employer's face, it's always better to be focused on doing something and doing it right. Just as you'd do in your resume, keeping things short and sweet, you can implement this while building your portfolio too. Always remember, the interviewer is looking for specific skills.

Research the data science job market

Find 5-6 jobs, probably from LinkedIn or Indeed, that interest you and go through their descriptions thoroughly. Understand what kind of skills the employer is looking for. For example, it could be classification, machine learning, statistical modeling or regression. Pick up the tools that are required for the job - for example, Python, R, TensorFlow, Hadoop, or whatever might get the job done. If you don't know how to use that tool, you'll want to skill up as you work your way through the projects. Also, identify the kind of data that they would like you to be working on, like text or numerical data. Now, once you have this information at hand, start building your project around these skills and tools.

Be a problem solver

Working on projects that are not actual 'problems' that you're solving won't stand out in your portfolio. The closer your projects are to the real world, the easier it will be for the recruiter to make their decision to choose you. This will also showcase your analytical skills and how you've applied data science to solve a prevailing problem.

Put at least 3 diverse projects in your data science portfolio

A nice way to create a portfolio is to list 3 good projects that are diverse in nature. Here are some interesting projects to get you started on your portfolio.

Data cleaning and wrangling

Data cleaning is one of the most critical tasks that a data scientist performs. By taking a group of diverse data sets, consolidating them and making sense of them, you're giving the recruiter confidence that you know how to prep them for analysis. For example, you can take Twitter or WhatsApp data and clean it for analysis (a minimal sketch of this kind of project follows at the end of this article). The process is pretty simple: you first find a "dirty" data set, then spot an interesting angle to approach the data from, clean it up and perform analysis on it, and finally present your findings.

Data storytelling

Storytelling showcases not only your ability to draw insight from raw data, but also how well you're able to convey the insights to others and persuade them. For example, you can use data from the bus system in your country and gather insights to identify which stops incur the most delays; this could be fixed by changing their route. Make sure your analysis is descriptive and your code and logic can be followed. Here's what you do: first you find a good dataset, then you explore the data and spot correlations in it. Then you visualize it before you start writing up your narrative. Tackle the data from various angles and pick the most interesting one. If it's interesting to you, it will most probably be interesting to anyone else who's reviewing it. Break down and explain each step in detail, each code snippet, as if you were describing it to a friend. The idea is to teach the reviewer something new as you run through the analysis.

End-to-end data science

If you're more into machine learning, or algorithm writing, you should do an end-to-end data science project. The project should be capable of taking in data, processing it, and finally learning from it, every step of the way. For example, you can pick up fuel pricing data for your city, or maybe stock market data. The data needs to be dynamic and updated regularly. The trick for this one is to keep the code simple so that it's easy to set up and run. You first need to identify a good topic. Understand here that you will not be working with a single dataset; rather, you will need to import and parse all the data and bring it under a single dataset yourself. Next, get the training and test data ready to make predictions. Document your code and other findings and you're good to go.

Prove you have the data science skill set

If you want to get that job, you've got to have the appropriate tools to get the job done. Here's a list of some of the most popular tools, with a link to the right material for you to skill up.

Data science languages

There are a number of key languages in data science that are essential. It might seem obvious, but making sure they're on your resume and demonstrated in your portfolio is incredibly important. Include things like:

- Python
- R
- Java
- Scala
- SQL

Big data tools

If you're applying for big data roles, demonstrating your experience with the key technologies is a must. It not only proves you have the skills, but also shows that you have an awareness of what tools can be used to build a big data solution or project. You'll need:

- Hadoop
- Spark
- Hive

Machine learning frameworks

With machine learning so in demand, if you can prove you've used a number of machine learning frameworks, you've already done a lot to impress. Remember, many organizations won't actually know as much about machine learning as you think. In fact, they might even be hiring you with a view to building out this capability. Remember to include:

- TensorFlow
- Caffe2
- Keras
- PyTorch

Data visualisation tools

Data visualization is a crucial component of any data science project. If you can visualize and communicate data effectively, you're immediately demonstrating you're able to collaborate with others and make your insights accessible and useful to the wider business. Include tools like these in your resume and portfolio:

- D3.js
- Excel charts
- Tableau
- ggplot2

So there you have it. You know what to do to build a decent data science portfolio. It's really worth attending competitions and challenges. They will not only help you keep up to date and keep your skills well oiled, but also give you a broader picture of what people are actually working on and what tools they're able to use to solve problems.
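As a starting point for the data cleaning project type described above, here is a minimal sketch. It assumes a hypothetical exported chat log, chat_export.csv, with columns timestamp, sender, and message; both the file and the column names are illustrative.

```python
import pandas as pd

# Hypothetical "dirty" export of a group chat: timestamp, sender, message.
df = pd.read_csv("chat_export.csv")

# Basic cleaning: drop empty messages, parse dates, normalize sender names.
df = df.dropna(subset=["message"])
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
df = df.dropna(subset=["timestamp"])
df["sender"] = df["sender"].str.strip().str.title()

# A simple analysis angle: who talks most, and at what hour of the day?
messages_per_sender = df["sender"].value_counts()
messages_per_hour = df["timestamp"].dt.hour.value_counts().sort_index()

print(messages_per_sender.head())
print(messages_per_hour)
```

The point of a project like this is less the code itself than the write-up around it: the finding, the cleaning decisions, and the narrative you build from the results.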


What is the history behind C Programming and Unix?

Packt Editorial Staff
17 Oct 2019
9 min read
If you think C programming and Unix are unrelated, then you are making a big mistake. Back in the 1970s and 1980s, if the Unix engineers at Bell Labs had decided to use another programming language instead of C to develop a new version of Unix, then we would be talking about that language today. The relationship between the two is simple: Unix is the first operating system implemented in a high-level programming language, C, and C in turn got its fame and power from Unix. Of course, our statement about C being a high-level programming language is not true in today's world.

This article is an excerpt from the book Extreme C by Kamran Amini. Kamran teaches you to use C's power and apply object-oriented design principles to your procedural C code. You will gain new insight into algorithm design, functions, and structures. You'll also understand how C works with Unix, how to implement OO principles in C, and what multiprocessing is. In this article, we are going to look at the history of C programming and Unix.

Multics OS and Unix

Even before having Unix, we had the Multics OS. It was a joint project launched in 1964 as a cooperative effort led by MIT, General Electric, and Bell Labs. Multics OS was a huge success because it could introduce the world to a real working and secure operating system. Multics was installed everywhere from universities to government sites. Fast-forward to 2019, and every operating system today is borrowing some ideas from Multics indirectly through Unix.

In 1969, because of various reasons that we will talk about shortly, some people at Bell Labs, especially the pioneers of Unix such as Ken Thompson and Dennis Ritchie, gave up on Multics and, subsequently, Bell Labs quit the Multics project. But this was not the end for Bell Labs; they went on to design their own simpler and more efficient operating system, which was called Unix. It is worthwhile to compare the Multics and Unix operating systems. In the following list, you will see similarities and differences found while comparing them:

- Both follow the onion architecture as their internal structure. We mean that they both have the same rings in their onion architecture, especially the kernel and shell rings. Therefore, programmers could write their own programs on top of the shell ring. Also, Unix and Multics expose a list of utility programs, and there are lots of utility programs such as ls and pwd. In the following sections, we will explain the various rings found in the Unix architecture.
- Multics needed expensive resources and machines to be able to work. It was not possible to install it on ordinary commodity machines, and that was one of the main drawbacks that let Unix thrive and finally made Multics obsolete after about 30 years.
- Multics was complex by design. This was the reason behind the frustration of Bell Labs employees and, as we said earlier, the reason why they left the project. But Unix tried to remain simple. In the first version, it was not even multitasking or multi-user!

You can read more about Unix and Multics online, and follow the events that happened in that era. Both were successful projects, but Unix has been able to thrive and survive to this day. It is worth sharing that Bell Labs has been working on a new distributed operating system called Plan 9, which is based on the Unix project.

Figure 1-1: Plan 9 from Bell Labs

Suffice to say that Unix was a simplification of the ideas and innovations that Multics presented; it was not something new, and so I can stop talking about the history of Unix and Multics at this point. So far, there are no traces of C in this history because it had not been invented yet. The first versions of Unix were written purely in assembly language. Only in 1973 was Unix version 4 written using C. Now we are getting close to discussing C itself, but before that, we must talk about BCPL and B because they have been the gateway to C.

About BCPL and B

BCPL was created by Martin Richards as a programming language invented for the purpose of writing compilers. The people from Bell Labs were introduced to the language when they were working as part of the Multics project. After quitting the Multics project, Bell Labs first started to write Unix using assembly language. That's because, back then, it was an anti-pattern to develop an operating system using a programming language other than assembly. For instance, it was strange that the people on the Multics project were using PL/1 to develop Multics, but by doing that, they showed that operating systems could be successfully written using a higher-level programming language other than assembly. As a result, Multics became the main inspiration for using another language to develop Unix.

The attempt to write operating system modules using a programming language other than assembly remained with Ken Thompson and Dennis Ritchie at Bell Labs. They tried to use BCPL, but it turned out that they needed to apply some modifications to the language to be able to use it on minicomputers such as the DEC PDP-7. These changes led to the B programming language. While we won't go too deep into the properties of the B language here, you can read more about it and the way it was developed at the following links:

- The B Programming Language
- The Development of the C Language

Dennis Ritchie authored the latter article himself, and it is a good way to explain the development of the C programming language while still sharing valuable information about B and its characteristics.

B also had its shortcomings as a system programming language. B was typeless, which meant that it was only possible to work with a word (not a byte) in each operation. This made it hard to use the language on machines with a different word length. Therefore, over time, further modifications were made to the language until they led to the NB (New B) language, which later derived structures from the B language; these structures were typeless in B, but they became typed in C. And finally, in 1973, the fourth version of Unix could be developed using C, which still had many assembly codes. In the next section, we talk about the differences between B and C, and why C is a top-notch modern system programming language for writing an operating system.

The way to C programming and Unix

I do not think we can find anyone better than Dennis Ritchie himself to explain why C was invented after the difficulties met with B. In this section, we're going to list the causes that prompted Dennis Ritchie, Ken Thompson, and others to create a new programming language instead of using B for writing Unix. The limitations of the B programming language were as follows:

- B could only work with words in memory: Every single operation had to be performed in terms of words. Back then, having a programming language that was able to work with bytes was a dream. This was because of the available hardware at the time, which addressed memory in a word-based scheme.
- B was typeless: More accurately, B was a single-type language. All variables were of the same type: word. So, if you had a string with 20 characters (21 bytes including the null character at the end), you had to divide it up by words and store it in more than one variable. For example, if a word was 4 bytes, you would need 6 variables to store the 21 characters of the string.
- Being typeless meant that byte-oriented algorithms, such as string manipulation algorithms, could not be written efficiently in B: This was because B used memory words, not bytes, and words could not be used efficiently to manage multi-byte data types such as integers and characters.
- B didn't support floating-point operations: At the time, these operations were becoming increasingly available on the new hardware, but there was no support for them in the B language.
- With the availability of machines such as the PDP-11, which could address memory on a byte basis, B showed that it could be inefficient in addressing bytes of memory: This became even clearer with B pointers, which could only address words in memory, and not bytes. In other words, for a program wanting to access a specific byte or a byte range in memory, more computations had to be done to calculate the corresponding word index.

The difficulties with B, particularly its slow development and execution on the machines that were available at the time, forced Dennis Ritchie to develop a new language. This new language was called NB, or New B, at first, but it eventually turned out to be C. This newly developed language, C, tried to cover the difficulties and flaws of B and became a de facto programming language for system development, instead of the assembly language. In less than 10 years, newer versions of Unix were completely written in C, and all newer operating systems that were based on Unix got tied to C and its crucial presence in the system. As you can see, C was not born as an ordinary programming language; instead, it was designed with a complete set of requirements in mind. You may consider languages such as Java, Python, and Ruby to be higher-level languages, but they cannot be considered direct competitors, as they are different and serve different purposes. For instance, you cannot write a device driver or a kernel module with Java or Python, and they themselves have been built on top of a layer written in C. Unlike some programming languages, C is standardized by ISO, and if it is required to have a certain feature in the future, then the standard can be modified to support the new feature.

To summarize

In this article, we began with the relationship between Unix and C. Even in non-Unix operating systems, you see traces of a design similar to Unix systems. We also looked at the history of C and explained how Unix appeared from Multics OS and how C was derived from the B programming language. The book Extreme C, written by Kamran Amini, will help you make the most of C's low-level control, flexibility, and high performance.

Read next:
- Is Dark an AWS Lambda challenger?
- Microsoft mulls replacing C and C++ code with Rust, calling it a "modern safer system programming language" with great memory safety features
- Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says "yes and no"

Two popular Data Analytics methodologies every data professional should know: TDSP & CRISP-DM

Amarabha Banerjee
21 Dec 2017
7 min read
[This is a book excerpt taken from Advanced Analytics with R and Tableau, authored by Jen Stirrup & Ruben Oliva Ramos. The book will help you make quick, cogent, and data-driven decisions for your business using advanced analytical techniques on Tableau and R.]

Today we explore two popular data analytics methodologies: the Microsoft TDSP process and the CRISP-DM methodology.

Introduction

There is an increasing amount of data in the world, and in our databases. The data deluge is not going to go away anytime soon! Businesses risk wasting the useful business value of the information contained in their databases unless they are able to extract useful knowledge from the data. It can be hard to know how to get started. Fortunately, there are a number of frameworks in data science that help us to work our way through an analytics project. Processes such as the Microsoft Team Data Science Process (TDSP) and CRISP-DM position analytics as a repeatable process that is part of a bigger vision.

Why are they important?

The Microsoft TDSP process and the CRISP-DM framework both lead to standardized delivery for organizations, large and small. In this chapter, we will look at these frameworks in more detail, and see how they can inform our own analytics projects and drive collaboration between teams. How can we shape the analysis so that it follows a repeatable pattern, with data cleansing included?

Industry standard methodologies for analytics

There are two main methodologies: the Microsoft TDSP process and the CRISP-DM methodology. Ultimately, they both set out to achieve the same objectives as an analytics framework. There are differences, of course, and these are highlighted here. CRISP-DM and TDSP focus on the business value and the results derived from analytics projects. Both of these methodologies are described in the following sections.

CRISP-DM

One common methodology is CRISP-DM. The Cross Industry Standard Process for Data Mining (CRISP-DM) model, as it is known, is a process model that provides a fluid framework for devising, creating, building, testing, and deploying machine learning solutions. The process is loosely divided into six main phases. Initially, the process starts with a business idea and a general consideration of the data. Each stage is briefly discussed in the following sections.

Business understanding/data understanding

The first phase looks at the machine learning solution from a business standpoint, rather than a technical standpoint. The business idea is defined, and a draft project plan is generated. Once the business idea is defined, the data understanding phase focuses on data collection and familiarity. At this point, missing data may be identified, or initial insights may be revealed. This process feeds back into the business understanding phase.

CRISP-DM — data preparation

In this stage, data is cleansed and transformed, and it is shaped ready for the modeling phase.

CRISP-DM — modeling phase

In the modeling phase, various techniques are applied to the data. The models are further tweaked and refined, and this may involve going back to the data preparation phase in order to correct any unexpected issues.

CRISP-DM — evaluation

The models need to be tested and verified to ensure that they meet the business objectives that were defined initially in the business understanding phase. Otherwise, we may have built models that do not answer the business question.

CRISP-DM — deployment

The models are published so that the customer can make use of them. This is not the end of the story, however.

CRISP-DM — process restarted

We live in a world of ever-changing data, business requirements, customer needs, and environments, and the process will be repeated.

CRISP-DM summary

CRISP-DM is the most commonly used framework for implementing machine learning projects specifically, and it applies to analytics projects as well. It has a good focus on the business understanding piece. However, one major drawback is that the model no longer seems to be actively maintained: the official site, CRISP-DM.org, is no longer being maintained. Furthermore, the framework itself has not been updated to address issues in working with new technologies, such as big data. Big data technologies mean that there can be additional effort spent in the data understanding phase, for example, as the business grapples with the additional complexities that are involved in the shape of big data sources.

Team Data Science Process

The TDSP process model provides a dynamic framework for machine learning solutions that have been through a robust process of planning, producing, constructing, testing, and deploying models. The process is loosely divided into four main phases:

- Business Understanding
- Data Acquisition and Understanding
- Modeling
- Deployment

The phases are described in the following paragraphs.

Business understanding

The business understanding process starts with a business idea, which is solved with a machine learning solution. The business idea is defined from the business perspective, and possible scenarios are identified and evaluated. Ultimately, a project plan is generated for delivering the solution.

Data acquisition and understanding

Following on from the business understanding phase is the data acquisition and understanding phase, which concentrates on familiarity and fact-finding about the data. The process itself is not completely linear; the output of the data acquisition and understanding phase can feed back into the business understanding phase, for example. At this point, some of the essential technical pieces start to appear, such as connecting to data and the integration of multiple data sources. From the user's perspective, there may be actions arising from this effort. For example, it may be noted that there is missing data in the dataset, which requires further investigation before the project proceeds.

Modeling

In the modeling phase of the TDSP process, the R model is created, built, and verified against the original business question. In light of the business question, the model needs to make sense. It should also add business value, for example, by performing better than the existing solution that was in place prior to the new R model. This stage also involves examining key metrics for evaluating our R models, which need to be tested to ensure that the models meet the original business objectives set out in the initial business understanding phase.

Deployment

R models are published to production once they are proven to be a fit solution to the original business question. This phase involves the creation of a strategy for ongoing review of the R model's performance, as well as a monitoring and maintenance plan. It is recommended to carry out a recurrent evaluation of the deployed models. The models will live in a fluid, dynamic world of data, and over time this environment will impact their efficacy. The TDSP process is a cycle rather than a linear process, and it does not finish even when the model is deployed. It comprises a clear structure for you to follow throughout the data science process, and it facilitates teamwork and collaboration along the way.

TDSP summary

The data science unicorn does not exist; that is, the person who is equally skilled in all areas of data science, right across the board. In order to ensure successful projects where each team player contributes according to their skill set, the Team Data Science Process is a team-oriented solution that emphasizes teamwork and collaboration throughout. It recognizes the importance of working as part of a team to deliver data science projects. It also offers useful information on the importance of having standardized source control and backups, which can include open source technology.

If you liked our post, please be sure to check out Advanced Analytics with R and Tableau, which shows how to apply various data analytics techniques in R and Tableau across the different stages of a data science project highlighted in this article.


Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?

Bhagyashree R
19 Oct 2018
3 min read
In the process of rebuilding its Big Data platform, Uber created an open-source Spark library named Hadoop Upserts anD Incremental (Hudi). This library permits users to perform operations such as update, insert, and delete on existing Parquet data in Hadoop. It also allows data users to incrementally pull only the changed data, which significantly improves query efficiency. It is horizontally scalable, can be used from any Spark job, and, best of all, it only relies on HDFS to operate.

Why was Hudi introduced?

Uber studied its current data content, data access patterns, and user-specific requirements to identify problem areas. This research revealed the following four limitations.

Scalability limitation in HDFS

Many companies who use HDFS to scale their Big Data infrastructure face this issue. Storing large numbers of small files can affect performance significantly, as HDFS is bottlenecked by its NameNode capacity. This becomes a major issue when the data size grows above 50-100 petabytes.

Need for faster data delivery in Hadoop

Since Uber operates in real time, there was a need to provide services with the latest data. It was important to make the data delivery much faster, as the 24-hour data latency was way too slow for many of their use cases.

No direct support for updates and deletes of existing data

Uber used snapshot-based ingestion of data, which means a fresh copy of source data was ingested every 24 hours. As Uber requires the latest data for its business, there was a need for a solution which supports update and delete operations on existing data. However, since their Big Data is stored in HDFS and Parquet, direct support for update operations on existing data was not available.

Faster ETL and modeling

ETL and modeling jobs were also snapshot-based, requiring the platform to rebuild derived tables in every run. ETL jobs also needed to become incremental to reduce data latency.

How does Hudi solve these limitations?

(Figure: Uber's Big Data platform after the incorporation of Hudi. Source: Uber.)

Regardless of whether the data updates are new records added to recent date partitions or updates to older data, Hudi allows users to pass on their latest checkpoint timestamp and retrieve all the records that have been updated since, without running an expensive query that scans the entire source table (a minimal sketch of this incremental pull follows at the end of this article). Using this library, Uber has moved to an incremental ingestion model, leaving behind snapshot-based ingestion. As a result, data latency was reduced from 24 hours to less than one hour. To know more about Hudi in detail, check out Uber's official announcement.

Read next:
- How can Artificial Intelligence support your Big Data architecture?
- Big data as a service (BDaaS) solutions: comparing IaaS, PaaS and SaaS
- Uber's Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop
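To make the incremental pull described above more concrete, here is a minimal PySpark sketch using Hudi's Spark datasource. It is a sketch only: the table path, field names, and checkpoint timestamp are placeholders, the Hudi bundle is assumed to be on the Spark classpath, and the option keys follow recent Hudi releases, so they may differ in older versions.

```python
from pyspark.sql import SparkSession

# Assumes a Spark session with the Hudi bundle available on the classpath.
spark = SparkSession.builder.appName("hudi-incremental-demo").getOrCreate()

base_path = "hdfs:///data/trips_hudi"    # placeholder Hudi table path
checkpoint_ts = "20181019000000"         # last commit time already processed

# Upsert the latest batch of changed records into the Hudi table.
updates = spark.read.parquet("hdfs:///staging/trip_updates")  # placeholder source
(updates.write.format("hudi")
    .option("hoodie.table.name", "trips")
    .option("hoodie.datasource.write.recordkey.field", "trip_id")
    .option("hoodie.datasource.write.precombine.field", "updated_at")
    .option("hoodie.datasource.write.operation", "upsert")
    .mode("append")
    .save(base_path))

# Incrementally pull only the records that changed after the checkpoint,
# instead of scanning the entire table.
changed = (spark.read.format("hudi")
    .option("hoodie.datasource.query.type", "incremental")
    .option("hoodie.datasource.read.begin.instanttime", checkpoint_ts)
    .load(base_path))
changed.show()
```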


Top 7 Python programming books you need to read

Aaron Lazar
22 Jun 2018
9 min read
Python needs no introduction. It's one of the top-rated and fastest-growing programming languages, mainly because of its simplicity and wide applicability to solving a range of problems. Developers like yourself, beginners and experts alike, are looking to skill themselves up with Python. So I thought I would put together a list of Python programming books that I think are the best for learning Python, whether you're a beginner or an experienced Python developer.

Books for beginning to learn Python

Learning Python, by Fabrizio Romano

What the book is about: This book explores the essentials of programming, covering data structures while showing you how to manipulate them. It talks about control flows in a program and teaches you how to write clean and reusable code. It reveals different programming paradigms and shows you how to optimize performance as well as debug your code effectively. Close to 450 pages long, the content spans twelve well-thought-out chapters. You'll find interesting content on functions, memory management, and GUI app development with PyQt.

Why learn from Fabrizio: Fabrizio has been creating software for over a decade. He has a master's degree in computer science engineering from the University of Padova and is also a certified Scrum master. He has delivered talks at the last two editions of EuroPython and at Skillsmatter in London.

The approach taken: The book is very easy to follow and takes an example-driven approach. As you end the book, you will be able to build a website in Python. Whether you're new to Python or to programming as a whole, you'll have no trouble at all in following the examples. Download Learning Python FOR FREE.

Learning Python, by Mark Lutz

What the book is about: This is one of the topmost books on Python. A true bestseller, the book is a perfect fit both for beginners to programming and for developers who already have experience working with another language. Over 1,500 pages long, and covering content over 41 chapters, the book is a true shelf-breaker! Although this might be a concern to some, the content is clear and easy to read, providing great examples wherever necessary. You'll find in-depth content ranging from Python syntax to functions, modules, OOP, and more.

Why learn from Mark: Mark is the author of several Python books and has been using Python since 1992. He is a world-renowned Python trainer and has taught close to 260 virtual and on-site Python classes to roughly 4,000 students.

The approach taken: The book is a great read, complete with helpful illustrations, quizzes, and exercises. It's filled with examples and also covers some advanced language features that have recently become more common in modern Python. You can find the book here, on Amazon.

Intermediate Python books

Modern Python Cookbook, by Steven Lott

What the book is about: Modern Python Cookbook is a great book for those already well versed in Python programming. The book aims to help developers solve the most common problems that they're faced with during app development. Spanning 824 pages, the book is divided into 13 chapters that cover solutions to problems related to data structures, OOP, functional programming, as well as statistical programming.

Why learn from Steven: Steven has over 4 decades of programming experience, over a decade of which has been with Python. He has written several books on Python and has created some tutorial videos as well. Steven's writing style is one to envy, as he manages to grab the attention of his readers while also imparting vast knowledge through his books. He's also a very enthusiastic speaker, especially when it comes to sharing his knowledge.

The approach taken: The book takes a recipe-based approach, presenting some of the most common, as well as uncommon, problems Python developers face and following them up with a quick and helpful solution. The book describes not just the how and the what, but the why of things. It will leave you able to create applications with flexible logging, powerful configuration, command-line options, automated unit tests, and good documentation. Find Modern Python Cookbook on the Packt store.

Python Crash Course, by Eric Matthes

What the book is about: This one is a quick-paced introduction to Python and assumes that you have knowledge of some other programming language. It actually sits somewhere between beginner and intermediate, but I've placed it under intermediate because of its fast-paced, no-fluff-just-stuff approach; it will be difficult to follow if you're completely new to programming. The book is 560 pages long and is covered over 20 chapters. It covers topics ranging from Python libraries like NumPy and matplotlib to building 2D games and even working with data and visualisations. All in all, it's a complete package!

Why learn from Eric: Eric is a high school math and science teacher. He has over a decade's worth of programming experience and is a teaching enthusiast, always willing to share his knowledge. He also teaches an 'Introduction to Programming' class every fall.

The approach taken: The book has a great selection of projects that caters to a wide range of readers who're planning to use Python to solve their programming problems. It thoughtfully covers both Python 2 and 3. You can find the book here on Amazon.

Fluent Python, by Luciano Ramalho

What the book is about: The book is an intermediate guide that assumes you have already dipped your feet into the snake pit. It takes you through Python's core language features and libraries, showing you how to make your code shorter, faster, and more readable at the same time. The book flows over almost 800 pages, with 21 chapters. You'll indulge yourself in topics on the likes of functions as well as objects, metaprogramming, and so on.

Why learn from Luciano: Luciano Ramalho is a member of the Python Software Foundation and co-founder of Garoa Hacker Clube, the first hackerspace in Brazil. He has been working with Python since 1998. He has taught Python web development in the Brazilian media, banking, and government sectors and also speaks at PyCon US, OSCON, PythonBrazil, and FISL.

The approach taken: The book is mainly based on the language features that are either unique to Python or not found in many other popular languages. It covers the core language and some of its libraries. It has a very comprehensive approach and touches on nearly every point of the language that is Pythonic, describing not just the how and the what, but the why. You can find the book here, on Amazon.

Advanced Python books

The Hitchhiker's Guide to Python, by Kenneth Reitz & Tanya Schlusser

What the book is about: This isn't a book that teaches Python. Rather, it's a book that shows experienced developers where, when, and how to use Python to solve problems. The book contains a list of best practices and how to apply these practices in real-world Python projects. It focuses on giving great advice about writing good Python code. It is spread over 11 chapters and 338 pages. You'll find interesting topics like choosing an IDE, how to manage code, and more.

Why learn from Kenneth and Tanya: Kenneth Reitz is a member of the Python Software Foundation. Until recently, he was the product owner of Python at Heroku. He is a known speaker at several conferences. Tanya is an independent consultant who has over two decades of experience in half a dozen languages. She is an active member of the Chicago Python User Group and Chicago's PyLadies, and has also delivered data science training to students and industry analysts.

The approach taken: The book is highly opinionated and talks about what the best tools and techniques are to build Python apps. It is a book about best practices and covers how to write and ship high-quality code, and is very insightful. The book also covers Python libraries and frameworks focused on capabilities such as data persistence, data manipulation, web, CLI, and performance. You can get the book here on Amazon.

Secret Recipes of the Python Ninja, by Cody Jackson

What the book is about: Now this is a one-of-a-kind book. Again, this one is not going to teach you Python programming; rather, it will show you tips and tricks that you might not have known you could do with Python. In close to 400 pages, the book unearths secrets related to the implementation of the standard library by looking at how modules actually work. You'll find interesting topics on the likes of the CPython interpreter, which is a treasure trove of secret hacks that not many programmers are aware of, the PyPy project, as well as the PEPs of the latest versions, which you can explore to discover some interesting hacks.

Why learn from Cody: Cody Jackson is a military veteran and the founder of Socius Consulting, an IT and business management consulting company. He has been involved in the tech industry since 1994. He is a self-taught Python programmer and also the author of the book series Learning to Program Using Python. He's always bubbling with ideas and ways of improving how he codes, and he has brilliantly delivered content through this book.

The approach taken: Now this one is highly opinionated too; the idea is to learn the skills from a Python ninja. The book takes a recipe-based approach, putting a problem before you and then showing you how you can wield Python to solve it. Whether you're new to Python or an expert, you're sure to find something interesting in the book. The recipes are easy to follow and waste no time on lengthy explanations. You can find the book here on Amazon and here on the Packt website.

So there you have it. Those were my top 7 books on Python programming. There are loads of books available on Amazon, and quite a few from Packt that you can check out, but the above are a must-have for anyone who's developing in Python.

Read Next
- What are data professionals planning to learn this year? Python, deep learning, yes. But also…
- Python web development: Django vs Flask in 2018
- Why functional programming in Python matters: Interview with best selling author, Steven Lott
- What the Python Software Foundation & Jetbrains 2017 Python Developer Survey had to reveal

Melisha Dsouza
02 Sep 2018
9 min read

8 Machine learning best practices [Tutorial]

Melisha Dsouza
02 Sep 2018
9 min read
Machine learning introduces a huge potential to reduce costs and generate new revenue in an enterprise. Applied effectively, machine learning helps solve practical problems within an organization and automates tasks that would otherwise need to be performed by a live agent. It has made drastic improvements in the past few years, but a machine often still needs human assistance to complete its task. This is why organizations need to adopt machine learning best practices, which you will learn about in this article. This article is an excerpt from the book Mastering Machine Learning for Penetration Testing by Chiheb Chebbi.

Feature engineering in machine learning

Feature engineering and feature selection are essential to every modern data science product, especially machine learning based projects. According to research, over 50% of the time spent building a model is occupied by cleaning, processing, and selecting the data required to train it. It is your responsibility to design, represent, and select the features. Most machine learning algorithms cannot work on raw data; they are not smart enough to do so. Thus, feature engineering is needed to transform data from its raw state into data that can be understood and consumed by algorithms. Professor Andrew Ng once said: "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." Feature engineering is a process in the data preparation phase, according to the cross-industry standard process for data mining (CRISP-DM). Feature engineering itself is not a formally defined term; it groups together all of the tasks for designing features to build intelligent systems, and it plays an important role in the system. If you check data science competitions, I bet you have noticed that the competitors all use the same algorithms, but the winners perform the best feature engineering. If you want to enhance your data science and machine learning skills, I highly recommend that you visit and compete at www.kaggle.com. When searching for machine learning resources, you will face many different terminologies. To avoid any confusion, we need to distinguish between feature selection and feature engineering: feature engineering transforms raw data into suitable features, while feature selection extracts the necessary features from the engineered data, keeping a subset of all features and leaving out redundant or irrelevant ones.

Machine learning best practices

Feature engineering enhances the performance of a machine learning system. Let's explore some tips and best practices for building robust intelligent systems across the different aspects of machine learning projects.

Information security datasets

Data is a vital part of every machine learning model. To train models, we need to feed them datasets. While reading the earlier chapters, you will have noticed that building an accurate and efficient machine learning model requires a huge volume of data, even after cleaning it. Big companies with great amounts of available data use their internal datasets to build models, but small organizations, like startups, often struggle to acquire such a volume of data. International rules and regulations make the mission harder, because data privacy is an important aspect of information security.
Every modern business must protect its users' data. To solve this problem, many institutions and organizations deliver publicly available datasets, so that others can download them and build their own models for educational or commercial use. Some information security datasets are as follows:

The Controller Area Network (CAN) dataset for intrusion detection (OTIDS): http://ocslab.hksecurity.net/Dataset/CAN-intrusion-dataset
The car-hacking dataset for intrusion detection: http://ocslab.hksecurity.net/Datasets/CAN-intrusion-dataset
The web-hacking dataset for cyber criminal profiling: http://ocslab.hksecurity.net/Datasets/web-hacking-profiling
The API-based malware detection system (APIMDS) dataset: http://ocslab.hksecurity.net/apimds-dataset
The intrusion detection evaluation dataset (CICIDS2017): http://www.unb.ca/cic/datasets/ids-2017.html
The Tor-nonTor dataset: http://www.unb.ca/cic/datasets/tor.html
The Android adware and general malware dataset: http://www.unb.ca/cic/datasets/android-adware.html

Use Project Jupyter

The Jupyter Notebook is an open source web application used to create and share coding documents. I highly recommend it, especially for novice data scientists, for many reasons. It gives you the ability to code and visualize output directly, and it is great for discovering and playing with data; exploring data is an important step in building machine learning models. Jupyter's official website is http://jupyter.org/. To install it using pip, simply type the following:

python -m pip install --upgrade pip
python -m pip install jupyter

Speed up training with GPUs

As you know, even with good feature engineering, training in machine learning is computationally expensive. The quickest way to train learning algorithms is to use graphics processing units (GPUs). Generally, though not in all cases, using GPUs is a wise decision for training models. In order to overcome CPU performance bottlenecks, the gather/scatter GPU architecture works best, performing parallel operations to speed up computing. TensorFlow supports the use of GPUs to train machine learning models. The devices are represented as strings; the following is an example:

"/device:GPU:0" : your first GPU device
"/device:GPU:1" : the second GPU device on your machine

To place computation on a GPU device in TensorFlow, you can add the following line:

with tf.device('/device:GPU:0'):
    <What to Do Here>

You can use a single GPU or multiple GPUs. Don't forget to install the CUDA toolkit, by using the following commands:

wget "http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.44-1_amd64.deb"
sudo dpkg -i cuda-repo-ubuntu1604_8.0.44-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda

Install cuDNN as follows:

sudo tar -xvf cudnn-8.0-linux-x64-v5.1.tgz -C /usr/local
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda

Selecting models and learning curves

To improve the performance of machine learning models, there are many hyperparameters to adjust, and the more data that is used, the more errors can happen. To tune these parameters, there is a method called GridSearchCV. It performs a search over predefined parameter values, through iterations, and it uses the score() function by default. To use it in scikit-learn, import it with this line:

from sklearn.grid_search import GridSearchCV
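Purely as an illustration of the GridSearchCV idea, here is a minimal sketch; the SVC estimator, the parameter grid, and the synthetic dataset below are assumptions made for this example rather than part of the original tutorial, and note that newer scikit-learn releases expose GridSearchCV from sklearn.model_selection instead of sklearn.grid_search:

# Hypothetical example: tuning an SVM on synthetic data with GridSearchCV.
# The estimator and the parameter grid below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in older releases
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5)  # relies on the estimator's score() by default
search.fit(X, y)

print(search.best_params_)
print(search.best_score_)

Running the sketch prints the best parameter combination found by the grid search together with its cross-validated score.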
Learning curves are used to understand the performance of a machine learning model. To use a learning curve in scikit-learn, import it into your Python project, as follows:

from sklearn.learning_curve import learning_curve

Machine learning architecture

In the real world, data scientists do not find data that is as clean as the publicly available datasets. Real-world data is stored by different means, and the data itself is shaped in different categories. Thus, machine learning practitioners need to build their own systems and pipelines to achieve their goals and train their models. A typical machine learning project respects a well-defined architecture.

Coding

Good coding skills are very important to data science and machine learning. In addition to using effective linear algebra, statistics, and mathematics, data scientists should learn how to code properly. As a data scientist, you can choose from many programming languages, like Python, R, Java, and so on. Respecting coding best practices is very helpful and highly recommended. Writing elegant, clean, and understandable code can be done through these tips:

Comment your code; comments are very important for understandable code.
Choose the right names for variables, functions, methods, packages, and modules.
Use four spaces per indentation level.
Structure your repository properly.
Follow common style guidelines.

If you use Python, you can follow this great aphorism, called The Zen of Python, written by the legend, Tim Peters:

"Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!"

Data handling

Good data handling leads to successfully built machine learning projects. After loading a dataset, make sure that all of the data has loaded properly and that the reading process performed correctly. After performing any operation on the dataset, check over the resulting dataset.

Business contexts

An intelligent system is highly connected to business aspects because, after all, you are using data science and machine learning to solve a business issue, to build a commercial product, or to get useful insights from the acquired data in order to make good decisions. Identifying the right problems and asking the right questions are important when building your machine learning model, in order to solve business issues. In this tutorial, we had a look at some tips and best practices to build intelligent systems using machine learning.
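To make the learning-curve tip above concrete, here is a small, self-contained sketch; the synthetic dataset and the LogisticRegression estimator are assumptions chosen only for illustration, and recent scikit-learn versions expose learning_curve from sklearn.model_selection rather than sklearn.learning_curve:

# Hypothetical example: computing learning-curve scores for a simple classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve  # sklearn.learning_curve in older releases

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

train_sizes, train_scores, valid_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    cv=5, train_sizes=np.linspace(0.1, 1.0, 5))

# Comparing the mean training and validation scores as the training set grows
# is a quick way to spot underfitting or overfitting.
print(train_sizes)
print(train_scores.mean(axis=1))
print(valid_scores.mean(axis=1))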
To become a master at penetration testing using machine learning with Python, check out this book: Mastering Machine Learning for Penetration Testing.

Why TensorFlow always tops machine learning and artificial intelligence tool surveys
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
Tackle trolls with Machine Learning bots: Filtering out inappropriate content just got easy
Bhagyashree R
30 Aug 2018
11 min read

12 ubiquitous artificial intelligence powered apps that are changing lives

Bhagyashree R
30 Aug 2018
11 min read
Artificial Intelligence is making it easier for people to do things every day. You can schedule your day, search for photos of loved ones, type emails on the go, or get things done with the virtual assistant. AI also provides innovative ways of tackling existing problems, from healthcare to advancing scientific discovery. According to Gartner’s Top 10 Strategic Technology Trends for 2018, the next few years will see every app, application, and service incorporating AI at some level. With major companies like Google, Amazon, IBM investing in AI and incorporating AI in their products, this statement, instead of a prediction is becoming a fact. Apple’s IPhone X comes with a Facial Recognition System, Samsung’s Bixby, Amazon’s Alexa, Google’s Google Assistant, and the recently launched Android Pie. Android Pie learns your preferences based on your usage patterns and gets better over time. It even provides you a breakdown of the time you spend on your phone. AI comes with endless possibilities, things that we used to dream of are now becoming a part of our day to day life. So, I have listed here, in no particular order, some of those innovative applications: Microsoft’s Seeing AI - Eye for the visually impaired Source: Microsoft Seeing AI is a perfect example of how technology is improving our lives. It is an intelligent camera app that uses computer vision to audibly help blind and visually impaired people to know about their surroundings. It comes with functionalities like reading out short text and documents for you, giving you description about a person, identifies currencies, colour, handwriting, light and even images in other apps using the device's camera. A data scientist named Anirudh Koul started this project (called Deep Vision earlier) to help his grandfather who was gradually losing his vision. Two breakthroughs by the Microsoft researchers facilitated him to further his idea: vision-to-language and image classification. To make the app this advance and real-time, they used the idea of making servers communicate with Microsoft Cognitive Services. This app brings in four technologies together to provide users with an array of functionalities: OCR, barcode scanner, facial recognition, and scene recognition. Check out this YouTube tutorial to understand how it works. Download App Store Ada - Healthcare in your hand Source: Digital Health Ada, with a very simple and conversational UI, helps you understand what could be wrong if you or someone you care about is not feeling well. Just like any doctor’s appointment, it starts with your basic details, then does an assessment, in which it asks several personalized questions related to the symptoms, and then gives a report. The report consists of a summary, possible causes, and less-likely causes. It also allows you to share the report as a PDF. After training over several years using real world cases, Ada has become a handy health advisor. Its platform is powered by a sophisticated Artificial Intelligence engine combined with large medical knowledge base covering many thousands of conditions, symptoms and findings. In every medical assessment, Ada takes all of a patient’s information into account, including past medical history, symptoms, risk factors and more. Using machine learning and multiple closed feedback loops, Ada becomes more intelligent. 
Download App Store Google Play Store Plume Air Report - An air pollution monitor Source: Plume Labs Blog Industrialization and urbanization definitely comes with their side effects, the main being air pollution. It has become inevitable to keep yourself safe from the pollution, but now at least you can be aware of the air pollution levels in your area. Plume Air Report forecasts how air quality will evolve hour by hour over the next 24 hours similar to weather forecast. You can also easily compare the air quality between cities. It gives you insight on all pollutants (PM2.5, PM10, O3, NO2), with absolute concentration levels and your local air quality scale. It uses machine learning and atmospheric sciences to deliver real-time and hourly forecast air quality data. First, latest pollution levels is collected from over 12,000 monitoring stations and 80 public agencies around the world and then filtered for errors. Local atmospheric data (wind, temperature, atmosphere, etc.) is sourced to track their influence on pollution levels in your city. A team of data scientists analyzes local specifics such as geographical features and human activities. Finally, AI algorithms and atmospheric models are developed that turn this giant amount of data into hourly forecasts. Download App Store Google Play Store Aura - Mindfulness meets AI Source: Popular Science In this fast life, slow down a little and give yourself a time out with Aura. Aura is a new kind of mindfulness app that learns about you and simplifies your learning through guided meditations. It helps in reducing stress and increases positivity through 3-minute meditations, personalized by Artificial Intelligence. Aura is an intelligent app that leverages machine learning to give you a unique experience. After every exercise, you can rate your experience and Aura will learn how to provide more tailored meditations according to your needs. You can even track your mood and learn your mood patterns. Download App Store Google Play Store Replika - An emotive chatbot as a friend for life Source: Medium Want to be friends with someone who is always there to listen to you, talk to you, and never judges you? Then Replika is for you! It helps you make a real connection with an unreal friend. The idea of building Replika came from a very tragic background. The founder of the software company, Luka, Eugenia Kudya, lost her best friend in an accident in November 2015. She used to go through their messenger texts to bring back their memories. This is how she got this idea to develop a chatbot making it learn from the sample texts sent by her best friend. In her own words, “Most of the companies try to build an app that talks, but we tried to build an app that could listen well”. The chatbot uses neural network facilitating more natural one-on-one conversation with its user, and over time, learn how to speak like them. The source code is freely available for developers under the name CakeChat. It comes with a pre-trained model that you can use as is to run a chatbot that maintains a conversation in a certain emotional state. You can also build a variety of other conversational agents by using your own dataset, for example, persona-based model, emotional chatting machine, topic-centric model. To know more about the background and evolution of Replika, check out this amazing YouTube video. 
Download App Store Google Play Store Google Assistant - Your personal Google Source: Google Assistant When talking of AI-powered apps, voice assistants probably come first in your mind. Google Assistant makes your life easier and helps in organizing your day better. You can manage your little tasks, plan your day, enjoy entertainment, and get answers. It can also sync to your other devices including Google Home, smart TVs, laptops, and more. To give users smart assistance, Google Assistant relies on Artificial Intelligence technologies such as natural language processing, natural language understanding, and machine learning to understand what the user is saying, and to make suggestions or act on that language input. Download App Store Google Play Store Hound - Say it, Get it Source: Android Apps In an array of virtual assistants to choose from, Hound understands your voice commands better. You do not need to give “search query” like commands and can have a more natural conversation. Hound can be used for variety of tasks, some of them are: search, discover, and play music, set alarms, timers, and reminders, call, text, navigate hands-free, get the weather forecast. Hound’s speed and accuracy comes from their powerful Houndify platform. This platform combines Speech Recognition and Natural Language Understanding into a single step, which is called Speech-to-Meaning. Download App Store Google Play Store Picai - An app that picks filters for your pics, keeping you looking your best always Source: Google Play Store Picai with the help of Artificial Intelligence, recommends picture-perfect filters by analyzing the scene. It automatically analyzes the scene and with the help of object recognition detects the type of the object, for example, a plant, a girl, etc. It then uses a proprietary deep learning model to recommend two optimum filters from 100+ filters. What makes this app stand out is the split-screen filter selection, which makes the filter selection easier for the users. When using this app be warned of the picture quality and app size (76 MB), but it is definitely worth trying! Download Google Play Store Microsoft Pix - The pro photographer Source: MSPoweuser Named one of the 50 Best Apps of the Year by Time Magazine, Microsoft Pix helps you take better photos without the extra effort! It solves the problem of “not living in the moment”. It comes with some amazing features like, hyperlapse, live images Microsoft Pix Comix, artistic styles to transform your photos, smart settings that automatically checks scene and lighting between each shutter tap, and updates settings between each shot, and more. Microsoft Pix uses Artificial Intelligence to improve the image, such as cropping edges, enhancing color and tone, and sharpening focus. It includes enhanced deep-learning capabilities around image understanding. It captures a burst of 10 frames with each shutter click and uses AI to select three best shots. Before the remaining photos are deleted, it uses data from the entire burst to remove noise. These best, enhanced images are ready in about a second. The app also detects whether your eyes are open or not using the facial recognition technology. Download App Store ELSA - Your machine learning English teacher Source: TechCrunch ELSA (English Language Speech Assistant)  helps you in learning English and bettering your pronunciation every day. It provides you a curriculum tailored just, regular feedback, progress tracking, common phrases used in daily life. 
You can practice in a relaxed environment and improve your speaking skills to prepare for the TOEFL, IELTS, TOEIC ELSA coaches you in improving your English pronunciations by using speech recognition, deep learning, and Artificial Intelligence. Download App Store Google Play Store Socratic - Homework in a snap Source: Google Play Store Socratic is your new helper, apart from your parents, in completing those complex Math problems. You just need to take a photo of your homework and can get explanations, videos, step-by-step help, instantly. Also, these resources are jargon-free, helping you understand the concepts better. It supports all subjects including Math (Algebra, Calculus, Statistics, Graphing, etc), Science, Chemistry, History, English, Economics, and more. Socratic uses Artificial Intelligence to figure out the concepts you need to learn in order to answer it. For this it combines cutting-edge computer vision technologies, which read questions from images, with machine learning classifiers. These classifiers are built using millions of sample homework questions, to accurately predict which concepts will help you solve your question. Download App Store Google Play Store Recent News - Stay informed Source: Recent News Recent News is an app that will provide you customized news. Some of the features that it comes with to give you the daily dose of news include one-minute news summary with very quick load time, hot news, local news, and personalized recommendations, instantly share news on Facebook, Twitter, and other social networks, and many more. It uses Artificial Intelligence to learn about your interests, suggest relevant articles, and propose topics you might like to follow. So, the more you use it the better it becomes! The app is surely innovative and saves time, but I do wish the developers applied some innovation in the app’s name as well :P Download App Store Google Play Store And that’s the end of my list. People say, “Smartphones and apps are becoming smarter, and we are becoming dumber”. But I would like to say that these apps, with the right usage, empower us to become smarter. Agree? 7 Popular Applications of Artificial Intelligence in Healthcare 5 examples of Artificial Intelligence in Web apps What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]

Packt Editorial Staff
11 Oct 2019
8 min read

How do you become a developer advocate?

Packt Editorial Staff
11 Oct 2019
8 min read
Developer advocates are people with a strong technical background, whose job is to help developers be successful with a platform or technology. They act as a bridge between the engineering team and the developer community. A developer advocate does not only fill in the gap between developers and the platform but also looks after the development of developers in terms of traction and progress on their projects. Developer advocacy, is broadly referred to as "developer relations". Those who practice developer advocacy have fallen into in this profession in one way or another. As the processes and theories in the world of programming have evolved over several years, so has the idea of developer advocacy. This is the result of developer advocates who work in the wild using their own initiatives. This article is an excerpt from the book Developer, Advocate! by Geertjan Wielenga. This book serves as a rallying cry to inspire and motivate tech enthusiasts and burgeoning developer advocates to take their first steps within the tech community. The question then arises, how does one become a developer advocate? Here are some experiences shared by some well-known developer advocates on how they started the journey that landed them to this role. Is developer advocacy taught in universities? Bruno Borges, Principal Product Manager at Microsoft says, for most developer advocates or developer relations personnel, it was something that just happened. Developer advocacy is not a discipline that is taught in universities; there's no training specifically for this. Most often, somebody will come to realize that what they already do is developer relations. This is a discipline that is a conjunction of several other roles: software engineering, product management, and marketing. I started as a software engineer and then I became a product manager. As a product manager, I was engaged with marketing divisions and sales divisions directly on a weekly basis. Maybe in some companies, sales, marketing, and product management are pillars that are not needed. I think it might vary. But in my opinion, those pillars are essential for doing a proper developer relations job. Trying to aim for those pillars is a great foundation. Just as in computer science when we go to college for four years, sometimes we don't use some of that background, but it gives us a good foundation. From outsourcing companies that just built business software for companies, I then went to vendor companies. That's where I landed as a person helping users to take full advantage of the software that they needed to build their own solutions. That process is, ideally, what I see happening to others. The journey of a regular tech enthusiast to a developer advocate Ivar Grimstad, a developer advocate at Eclipse foundation, speaks about his journey from being a regular tech enthusiast attending conferences to being there speaking at conferences as an advocate for his company. Ivar Grimstad says, I have attended many different conferences in my professional life and I always really enjoyed going to them. After some years of regularly attending conferences, I came to the point of thinking, "That guy isn't saying anything that I couldn't say. Why am I not up there?" I just wanted to try speaking, so I started submitting abstracts. I already gave talks at meetups locally, but I began feeling comfortable enough to approach conferences. I continued submitting abstracts until I got accepted. 
As it turned out, while I was becoming interested in speaking, my company was struggling to raise its profile. Nobody, even in Sweden, knew what we did. So, my company was super happy for any publicity it could get. I could provide it with that by just going out and talking about tech. It didn't have to be related to anything we did; I just had to be there with the company name on the slides. That was good enough in the eyes of my company. After a while, about 50% of my time became dedicated to activities such as speaking at conferences and contributing to open source projects. Tables turned from being an engineer to becoming a developer advocate Mark Heckler, a Spring developer and advocate at Pivotal, narrates his experience about how tables turned for him from University to Pivotal Principal Technologist & Developer Advocate. He says, initially, I was doing full-time engineering work and then presenting on the side. I was occasionally taking a few days here and there to travel to present at events and conferences. I think many people realized that I had this public-facing level of activities that I was doing. I was out there enough that they felt I was either doing this full-time or maybe should be. A good friend of mine reached out and said, "I know you're doing this anyway, so how would you like to make this your official role?" That sounded pretty great, so I interviewed, and I was offered a full-time gig doing, essentially, what I was already doing in my spare time. A hobby turned out to be a profession Matt Raible, a developer advocate at Okta has worked as an independent consultant for 20 years. He did advocacy as a side hobby. He talks about his experience as a consultant and walks through the progress and development. I started a blog in 2002 and wrote about Java a lot. This was before Stack Overflow, so I used Struts and Java EE. I posted my questions, which you would now post on Stack Overflow, on that blog with stack traces, and people would find them and help. It was a collaborative community. I've always done the speaking at conferences on the side. I started working for Stormpath two years ago, as a contractor part-time, and I was working at Computer Associates at the same time. I was doing Java in the morning at Stormpath and I was doing JavaScript in the afternoon at Computer Associates. I really liked the people I was working with at Stormpath and they tried to hire me full-time. I told them to make me an offer that I couldn't refuse, and they said, "We don't know what that is!" I wanted to be able to blog and speak at conferences, so I spent a month coming up with my dream job. Stormpath wanted me to be its Java lead. The problem was that I like Java, but it's not my favorite thing. I tend to do more UI work. The opportunity went away for a month and then I said, "There's a way to make this work! Can I do Java and JavaScript?" Stormpath agreed that instead of being more of a technical leader and owning the Java SDK, I could be one of its advocates. There were a few other people on board in the advocacy team. Six months later, Stormpath got bought out by Okta. As an independent consultant, I was used to switching jobs every six months, but I didn't expect that to happen once I went full-time. That's how I ended up at Okta! 
Developer advocacy can be done by calculating the highs and lows of the tech world Scott Davis, a Principal Engineer at Thoughtworks, was also a classroom instructor, teaching software classes to business professionals before becoming a developer advocate. As per him, tech really is a world of strengths and weaknesses. Advocacy, I think, is where you honestly say, "If we balance out the pluses and the minuses, I'm going to send you down the path where there are more strengths than weaknesses. But I also want to make sure that you are aware of the sharp, pointy edges that might nick you along the way." I spent eight years in the classroom as a software instructor and that has really informed my entire career. It's one thing to sit down and kind of understand how something works when you're cowboy coding on your own. It's another thing altogether when you're standing up in front of an audience of tens, or hundreds, or thousands of people. Discover how developer advocates are putting developer interests at the heart of the software industry in companies including Microsoft and Google with Developer, Advocate! by Geertjan Wielenga. This book is a collection of in-depth conversations with leading developer advocates that reveal the world of developer relations today. 6 reasons why employers should pay for their developers’ training and learning resources “Developers need to say no” – Elliot Alderson on the FaceApp controversy in a BONUS podcast episode [Podcast] GitHub has blocked an Iranian software developer’s account How do AWS developers manage Web apps? Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Antonio Cucciniello
11 Jul 2017
5 min read

The oldest programming languages in use today

Antonio Cucciniello
11 Jul 2017
5 min read
Today, we are going to be discussing some of the oldest, most established programming languages that are still in use today. Some developers may be surprised to learn that many of these languages surpass them in age, in a world where technology, especially in the world of development, is advancing at such a rapid rate. But then, old is gold, after all. So, in age order, let's present the oldest programming languages in use today:

C

The C language was created in 1972 (it's not that old, okay). C is a lower-level language that was based on an earlier language called B (do you see a trend here?). It is a general-purpose language, and a parent language from which many future programming languages derive, such as C#, Java, JavaScript, Perl, PHP and Python. It is used in many applications that must interface with hardware or play with memory.

C++

Pronounced see-plus-plus, C++ was developed 11 years later, in 1983. It is very similar to C; in fact, it is often considered an extension of C. It added various concepts such as classes, virtual functions, and templates. It is more of an intermediate-level language that can be used at a lower or higher level, depending on the application. It is also known for being used in low-latency applications.

Objective-C

Around the same time as C++ was being released to the public, Objective-C was created. If you took an educated guess from the name and said that it would be another extension of C, then you'd be right. This version was meant to be an object-oriented version of C (there's a lot in a name, clearly). It is used, probably most famously, by Apple. If you are a Mac or iOS user, then your iPhone or Mac applications were most likely developed with Objective-C (until they recently moved over to Swift).

Python

We are going to take a quick jump ahead in time to the 90's for this one. In 1991, the Python programming language was released, though it had been in development since the late 80's. It is a dynamically typed, object-oriented language that is often used for scripting and web applications. It is usually used with some of its frameworks, like Django or Flask, on the backend. It is one of the most popular programming languages in use today.

Ruby

In 1993, Ruby was released. Today, you've probably heard of Ruby on Rails, which is primarily used to create the backend of web applications using Ruby. Unlike the many languages derived from C, this language was influenced by older languages such as Perl and Lisp. Ruby was designed for productive and fun programming, by making the language closer to human needs rather than machine needs.

Java

Two years later, in 1995, Java was developed. This is a high-level language that is derived from C. It is famously known for its use in web applications and as the language to use to develop Android applications and Android OS. It used to be the most popular language a few years ago, but its popularity and usage have definitely decreased.

PHP

In the same year as Java was developed, PHP was born. It is an open source programming language developed for the purpose of creating dynamic websites. It is also used for server-side web development. Its usage is definitely declining, but it is still in use today.

JavaScript

That same year (yup, '95 was a good year for programming, not so much for fans of Full House), JavaScript was brought to the world. Its purpose was to be a high-level language that helped with the functionality of a web page.
Today, it is sometimes used as a scripting language, as well as being used on the backend of applications with the release of Node.js. It is one of the most popular and widely used programming languages today.

Conclusion

That was our brief history lesson on some of the programming languages still in use today. Even though some of them are 20, 30, even over 40 years old, they are being used by thousands of developers daily. They all have a variety of uses, from lower level to higher level, from web applications to mobile applications. Do you feel there is a need for newer languages, or are you happy with what we have? If you have any favorites, let us know which one and why!

About the author

Antonio Cucciniello is a Software Engineer from New Jersey with a background in C, C++ and JavaScript (Node.js). His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello
Guest Contributor
04 Mar 2019
6 min read

Alteryx vs. Tableau: Choosing the right data analytics tool for your business

Guest Contributor
04 Mar 2019
6 min read
Data visualization is commonly used in the modern world, where most business decisions are made by analyzing data. One of its most significant benefits is that it enables us to access huge amounts of data through easily understandable visuals. There are many areas where data visualization is being used. Some of the data visualization tools include Tableau, Alteryx, Infogram, ChartBlocks, Datawrapper, Plotly, Visual.ly, etc. Tableau and Alteryx are industry-standard tools that have dominated the data analytics market for a few years now and are still running strong without any serious competition. In this article, we will understand the core differences between Alteryx and Tableau, which will help us decide which tool to use for which purposes.

Tableau is one of the top-rated tools that helps analysts carry out business intelligence and data visualization activities. Using Tableau, users can generate compelling dashboards and stunning data visualizations. Tableau's interactive user interface helps users quickly generate reports where they can drill down into the information to a granular level.

Alteryx is a powerful tool widely used in data analytics that also provides meaningful insights to executive-level personnel. With its user-friendly interface, the user can extract, transform, and load data within the Alteryx tool.

Why use Alteryx with Tableau?

Using Alteryx with Tableau is a powerful combination when it comes to making value-added, data-driven decisions. With Alteryx, businesses can manipulate their data and provide input to the Tableau platform, which in turn can showcase strong data visualizations. This helps businesses take appropriate actions that are backed by data analysis. Alteryx and Tableau are widely used within organizations where decisions are made based on the insights obtained from data analysis. Talking about data handling, Alteryx is a powerful ETL platform where data can be analyzed in different formats. When it comes to data representation, Tableau is a perfect match, and the resulting reports can be shared across team members. Nowadays, most businesses want to see real-time data and understand business trends. The combination of Alteryx and Tableau allows data analysts to analyze data and generate meaningful insights for users on the fly. Data analysis is executed within the Alteryx tool, where the raw data is handled, and the data representation or visualization is done in Tableau, so the two tools go hand in hand.

Tableau vs Alteryx

The comparison below lists the differences between the tools, describing Alteryx first and Tableau second for each point:

1. Alteryx is known as a smart data analytics platform; Tableau is known for its data visualization capabilities.
2. Alteryx can connect with different data sources and synthesize the raw data, making a standard ETL process possible; Tableau can connect with different data sources and provide data visualizations within minutes of gathering the data.
3. Alteryx helps with data analysis; Tableau helps with building appealing graphs.
4. Alteryx's GUI is okay and widely accepted; Tableau's GUI is one of its best features, where graphs can be built easily using drag-and-drop options.
5. Alteryx requires technical knowledge, because it involves data source integrations and data blending; Tableau does not require technical knowledge, because the data arrives already polished and the user only has to build graphs and visualizations.
6. Once the data blending activity is completed in Alteryx, users can share a file that Tableau can consume; once the graphs are prepared in Tableau, the reports can easily be shared among team members without any hassle.
7. Alteryx offers a lot of flexibility for data blending; Tableau offers flexibility for data visualization.
8. Alteryx lets users perform spatial and predictive analysis; Tableau supports this by representing the data in an appropriate format.
9. Alteryx is one of the best tools when it comes to data preparation; preparing data in Tableau is not feasible compared to Alteryx.
10. Data representation cannot be done accurately in Alteryx; Tableau is a wonderful tool for data representation.
11. Alteryx has a one-time, annual fee; Tableau also offers an option to pay monthly.
12. Alteryx has a drag-and-drop interface where the user can develop a workflow easily; Tableau has a drag-and-drop interface where the user can build a visualization in no time.

Alteryx and Tableau Integration

As discussed earlier, these two tools have their own advantages and disadvantages, but when integrated, they can do wonders with the data. The integration between Tableau and Alteryx makes the task of visualizing Alteryx-generated answers quite simple. The data is first loaded into the Alteryx tool and is then extracted in the form of .tde files (Tableau Data Extract files). These .tde files are consumed by Tableau to do the data visualization part. On a regular basis, new .tde files are generated from Alteryx and replace the old ones. Thus, by integrating Alteryx and Tableau, we can:

Cleanse, combine, and collect all the relevant data sources and enrich them with the help of third-party data - everything in one workflow.
Give analytical context to your data by providing predictive, location-based, and deep spatial analytics.
Publish your analytic workflows' results to Tableau for intuitive, rich visualizations that help you make decisions more quickly.

Tableau and Alteryx do not require any advanced skill set, as both tools have simple drag-and-drop interfaces. You can create a workflow in Alteryx that processes data in a sequential manner. In a similar way, Tableau enables you to build charts by dragging the fields to be utilized to specified areas. Companies that have a lot of data to analyze, and can spend large amounts of money on analytics, can use these two tools. There are no significant challenges during Tableau and Alteryx integration.

Conclusion

When Tableau and Alteryx are used together, businesses benefit because senior management can take decisions based on the data insights provided by these tools. These two tools complement each other and provide a high-quality service to businesses.

Author Bio

Savaram Ravindra is a Senior Content Contributor at Mindmajix.com. His passion lies in writing articles on different niches, which include some of the most innovative and emerging software technologies, digital marketing, businesses, and so on. By being a guest blogger, he helps his company acquire quality traffic to its website and build its domain name and search engine authority.
Before devoting his work full time to the writing profession, he was a programmer analyst at Cognizant Technology Solutions. Follow him on LinkedIn and Twitter.

How to share insights using Alteryx Server
How to do data storytelling well with Tableau [Video]
A tale of two tools: Tableau and Power BI

Mihalis Tsoukalos
24 Jan 2018
17 min read

Systems programming with Go in UNIX and Linux

Mihalis Tsoukalos
24 Jan 2018
17 min read
This is a guest post by Mihalis Tsoukalos. Mihalis is a Unix administrator, programmer, and mathematician who enjoys writing. He is the author of Go Systems Programming, from which this Go programming tutorial is taken.

What is Go?

Back when UNIX was first introduced, the only way to write systems software was by using C; nowadays you can program systems software using programming languages including Go. Apart from Go, other preferred languages for developing system utilities are Python, Perl, Rust and Ruby. Go is a modern, general-purpose, open-source programming language that was officially announced at the end of 2009. It began as an internal Google project and has been inspired by many other programming languages, including C, Pascal, Alef and Oberon. Its spiritual fathers are Robert Griesemer, Ken Thompson and Rob Pike, who designed Go as a language for professional programmers who want to build reliable and robust software. Apart from its syntax and standard functions, Go comes with a pretty rich and convenient standard library.

What is systems programming?

Systems programming is a special area of programming on UNIX machines, although it is not limited to UNIX machines. Most commands that have to do with system administration tasks, such as disk formatting, network interface configuration, module loading and kernel performance tracking, are implemented using the techniques of systems programming. Additionally, the /etc directory, which can be found on all UNIX systems, contains plain text files that deal with the configuration of a UNIX machine and its services, and these are also manipulated using systems software. You can group the various areas of systems software and related system calls into the following sets:

File I/O: This area deals with file reading and writing operations, which is the most important task of an operating system. File input and output must be fast and efficient and, above all, reliable.
Advanced File I/O: Apart from the basic input and output system calls, there are also more advanced ways to read or write a file, including asynchronous I/O and non-blocking I/O.
System files and Configuration: This group of systems software includes functions that allow you to handle system files, such as /etc/passwd, and get system-specific information, such as the system time and DNS configuration.
Files and Directories: This cluster includes functions and system calls that allow the programmer to create and delete directories and get information such as the owner and the permissions of a file or a directory.
Process Control: This group of software allows you to create and interact with UNIX processes.
Threads: When a process has multiple threads, it can perform multiple tasks. However, threads must be created, terminated and synchronized, which is the purpose of this collection of functions and system calls.
Server Processes: This set includes techniques that allow you to develop server processes, which are processes that get executed in the background without the need for an active terminal. Go is not that good at writing server processes in the traditional UNIX way – but let me explain this a little more. UNIX servers like Apache use fork(2) to create one or more child processes; this is called forking and refers to cloning the parent process into a child process that continues executing the same executable from the same point and, most importantly, sharing memory.
Although Go does not offer an equivalent to the fork(2) function, this is not an issue, because you can use goroutines to cover most of the uses of fork(2).
Interprocess Communication: This set of functions allows processes that run on the same UNIX machine to communicate with each other using features such as pipes, FIFOs, message queues, semaphores and shared memory.
Signal Processing: Signals offer processes a way of handling asynchronous events, which can be very handy. Almost all server processes have extra code that allows them to handle UNIX signals using the system calls of this group.
Network Programming: This is the art of developing applications that work over computer networks with the help of TCP/IP, and it is not systems programming per se. However, most TCP/IP servers and clients deal with system resources, users, files and directories, so most of the time you cannot create network applications without doing some kind of systems programming.

The challenging thing with systems programming is that you cannot afford to have an incomplete program; you can either have a fully working, secure program that can be used on a production system, or nothing at all. This mainly happens because you cannot trust end users and hackers! The key difficulty in systems programming is the fact that an erroneous system call can make your UNIX machine misbehave or, even worse, crash it! Most security issues on UNIX systems usually come from wrongly implemented systems software, because bugs in systems software can compromise the security of an entire system. The worst part is that this can happen many years after you started using a certain piece of software!

Systems programming examples with Go

Printing the permissions of a file or a directory

With the help of the ls(1) command, you can find out the permissions of a file:

$ ls -l /bin/ls
-rwxr-xr-x  1 root  wheel  38624 Mar 23 01:57 /bin/ls

The presented Go program, which is named permissions.go, will teach you how to print the permissions of a file or a directory using Go, and it will be presented in two parts. The first part is the following:

package main

import (
    "fmt"
    "os"
)

func main() {
    arguments := os.Args
    if len(arguments) == 1 {
        fmt.Println("Please provide an argument!")
        os.Exit(1)
    }
    file := arguments[1]

The second part contains the important Go code:

    info, err := os.Stat(file)
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(1)
    }
    mode := info.Mode()
    fmt.Print(file, ": ", mode, "\n")
}

Once again, most of the Go code is for dealing with the command line argument and making sure that you have one! The Go code that does the actual job is mainly the call to the os.Stat() function, which returns a FileInfo structure that describes the file or directory examined by os.Stat(). From the FileInfo structure you can discover the permissions of a file by calling the Mode() function. Executing permissions.go creates the following kind of output:

$ go run permissions.go /bin/ls
/bin/ls: -rwxr-xr-x
$ go run permissions.go /usr
/usr: drwxr-xr-x
$ go run permissions.go /us
Error: stat /us: no such file or directory
exit status 1

How to write to files using fmt.Fprintf()

The use of the fmt.Fprintf() function allows you to write formatted text to files in a way that is similar to the way the fmt.Printf() function works. The Go code that illustrates the use of fmt.Fprintf() will be named fmtF.go and is going to be presented in three parts.
How to write to files using fmt.Fprintf()

The use of the fmt.Fprintf() function allows you to write formatted text to files in a way that is similar to the way the fmt.Printf() function works. The Go code that illustrates the use of fmt.Fprintf() is named fmtF.go and is going to be presented in three parts. The first part is the expected preamble of the program:

package main

import (
    "fmt"
    "os"
)

The second part has the following Go code:

func main() {
    if len(os.Args) != 2 {
        fmt.Println("Please provide a filename")
        os.Exit(1)
    }
    filename := os.Args[1]

    destination, err := os.Create(filename)
    if err != nil {
        fmt.Println("os.Create:", err)
        os.Exit(1)
    }
    defer destination.Close()

First, you make sure that you have one command line argument before continuing. Then, you read that command line argument and you give it to os.Create() in order to create the file! Please note that the os.Create() function will truncate the file if it already exists. The last part is the following:

    fmt.Fprintf(destination, "[%s]: ", filename)
    fmt.Fprintf(destination, "Using fmt.Fprintf in %s\n", filename)
}

Here, you write the desired text data to the file that is identified by the destination variable using fmt.Fprintf() as if you were using the fmt.Printf() function. Executing fmtF.go will generate the following output:

$ go run fmtF.go test
$ cat test
[test]: Using fmt.Fprintf in test

In other words, you can create plain text files using fmt.Fprintf().

Developing wc(1) in Go

The principal idea behind the code of the wc.go program is that you read a text file line by line until there is nothing left to read. For each line you read, you find out the number of characters and the number of words it has. As you need to read your input line by line, the use of bufio is preferred instead of the plain io package because it simplifies the code. However, trying to implement wc.go on your own using io would be a very educational exercise.

But first you will see the kind of output the wc(1) utility generates:

$ wc wc.go cp.go
      68     160    1231 wc.go
      45     112     755 cp.go
     113     272    1986 total

So, if wc(1) has to process more than one file, it automatically generates summary information.

Counting words

The trickiest part of the implementation is word counting, which is implemented using Go regular expressions:

    r := regexp.MustCompile("[^\\s]+")
    for range r.FindAllString(line, -1) {
        numberOfWords++
    }

What the provided regular expression does is separate the words of a line based on whitespace characters in order to count them afterwards!

The code!

After this little introduction, it is time to see the Go code of wc.go, which will be presented in five parts. The first part is the expected preamble:

package main

import (
    "bufio"
    "flag"
    "fmt"
    "io"
    "os"
    "regexp"
)

The second part is the implementation of the count() function, which includes the core functionality of the program:

func count(filename string) (int, int, int) {
    var err error
    var numberOfLines int
    var numberOfCharacters int
    var numberOfWords int
    numberOfLines = 0
    numberOfCharacters = 0
    numberOfWords = 0

    f, err := os.Open(filename)
    if err != nil {
        fmt.Printf("error opening file %s", err)
        os.Exit(1)
    }
    defer f.Close()

    r := bufio.NewReader(f)
    for {
        line, err := r.ReadString('\n')
        if err == io.EOF {
            break
        } else if err != nil {
            fmt.Printf("error reading file %s", err)
        }
        numberOfLines++
        r := regexp.MustCompile("[^\\s]+")
        for range r.FindAllString(line, -1) {
            numberOfWords++
        }
        numberOfCharacters += len(line)
    }

    return numberOfLines, numberOfWords, numberOfCharacters
}

There are a lot of interesting things here. First of all, you can see the Go code presented in the previous section for counting the words of each line. Counting lines is easy because each time the bufio reader reads a new line, the value of the numberOfLines variable is increased by one. The ReadString() function tells the program to read until the first occurrence of a '\n' in the input – multiple calls to ReadString() mean that you are reading the file line by line. Next, you can see that the count() function returns three integer values. Last, counting characters is implemented with the help of the len() function, which returns the number of characters in a given string – in this case, the line that was read. The for loop terminates when you get the io.EOF error, which signifies that there is nothing left to read from the input file.
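If you want to experiment with this line-by-line reading technique in isolation, the following self-contained sketch (an illustration, not part of wc.go) applies bufio.NewReader() and ReadString('\n') to an in-memory string instead of a file:

package main

import (
    "bufio"
    "fmt"
    "io"
    "strings"
)

func main() {
    // Illustrative sketch: read an in-memory string line by line.
    input := "first line\nsecond line\nthird line\n"
    r := bufio.NewReader(strings.NewReader(input))

    lines := 0
    for {
        line, err := r.ReadString('\n')
        if err == io.EOF {
            break
        } else if err != nil {
            fmt.Println("error reading input:", err)
            return
        }
        lines++
        // line still contains its trailing newline character.
        fmt.Printf("line %d: %s", lines, line)
    }
    fmt.Println("total lines:", lines)
}

The same loop structure works unchanged when the reader wraps an os.File instead of a strings.Reader.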
The third part of wc.go starts with the beginning of the implementation of the main() function, which also includes the configuration of the flag package:

func main() {
    minusC := flag.Bool("c", false, "Characters")
    minusW := flag.Bool("w", false, "Words")
    minusL := flag.Bool("l", false, "Lines")

    flag.Parse()
    flags := flag.Args()

    if len(flags) == 0 {
        fmt.Printf("usage: wc <file1> [<file2> [... <fileN>]]\n")
        os.Exit(1)
    }

    totalLines := 0
    totalWords := 0
    totalCharacters := 0
    printAll := false

    for _, filename := range flag.Args() {

The last for statement is for processing all input files given to the program. The wc.go program supports three flags: the -c flag is for printing the character count, the -w flag is for printing the word count and the -l flag is for printing the line count.

The fourth part is the following:

        numberOfLines, numberOfWords, numberOfCharacters := count(filename)
        totalLines = totalLines + numberOfLines
        totalWords = totalWords + numberOfWords
        totalCharacters = totalCharacters + numberOfCharacters

        if (*minusC && *minusW && *minusL) || (!*minusC && !*minusW && !*minusL) {
            fmt.Printf("%d", numberOfLines)
            fmt.Printf("\t%d", numberOfWords)
            fmt.Printf("\t%d", numberOfCharacters)
            fmt.Printf("\t%s\n", filename)
            printAll = true
            continue
        }

        if *minusL {
            fmt.Printf("%d", numberOfLines)
        }

        if *minusW {
            fmt.Printf("\t%d", numberOfWords)
        }

        if *minusC {
            fmt.Printf("\t%d", numberOfCharacters)
        }

        fmt.Printf("\t%s\n", filename)
    }

This part deals with the printing of the information on a per-file basis depending on the command line flags. As you can see, most of the Go code here is for handling the output according to the command line flags.

The last part is the following:

    if (len(flags) != 1) && printAll {
        fmt.Printf("%d", totalLines)
        fmt.Printf("\t%d", totalWords)
        fmt.Printf("\t%d", totalCharacters)
        fmt.Println("\ttotal")
        return
    }

    if (len(flags) != 1) && *minusL {
        fmt.Printf("%d", totalLines)
    }

    if (len(flags) != 1) && *minusW {
        fmt.Printf("\t%d", totalWords)
    }

    if (len(flags) != 1) && *minusC {
        fmt.Printf("\t%d", totalCharacters)
    }

    if len(flags) != 1 {
        fmt.Printf("\ttotal\n")
    }
}

This is where you print the total number of lines, words and characters read, according to the flags of the program. Once again, most of the Go code here is for modifying the output according to the command line flags.
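The flag handling above follows the standard idiom of the flag package: define the flags, call flag.Parse(), and read the remaining command line arguments with flag.Args(). Here is a minimal standalone sketch of that idiom (for illustration only – the flag names here are made up and are not part of wc.go):

package main

import (
    "flag"
    "fmt"
)

func main() {
    // Define two boolean flags with default values and usage strings.
    verbose := flag.Bool("v", false, "Verbose output")
    lines := flag.Bool("l", false, "Count lines only")

    // Parse the command line; with the flag package, flags must come
    // before the non-flag arguments.
    flag.Parse()

    // Whatever is left after the flags is returned by flag.Args().
    fmt.Println("verbose:", *verbose, "lines:", *lines)
    fmt.Println("remaining arguments:", flag.Args())
}

Running the sketch with, say, -v file1 file2 would print the flag values followed by [file1 file2].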
Executing wc.go will generate the following kind of output:

$ go build wc.go
$ ls -l wc
-rwxr-xr-x 1 mtsouk staff 2264384 Apr 29 21:10 wc
$ ./wc wc.go sparse.go notGoodCP.go
     120     280    2319 wc.go
      44      98     697 sparse.go
      27      61     418 notGoodCP.go
     191     439    3434 total
$ ./wc -l wc.go sparse.go
     120 wc.go
      44 sparse.go
     164 total
$ ./wc -w -l wc.go sparse.go
     120     280 wc.go
      44      98 sparse.go
     164     378 total

If you do not execute go build wc.go in order to create an executable file, then executing go run wc.go using Go source files as arguments will fail, because the compiler will try to compile the Go source files instead of treating them as command line arguments to the go run wc.go command:

$ go run wc.go sparse.go
# command-line-arguments
./sparse.go:11: main redeclared in this block
previous declaration at ./wc.go:49
$ go run wc.go wc.go
package main: case-insensitive file name collision: "wc.go" and "wc.go"
$ go run wc.go cp.go sparse.go
# command-line-arguments
./cp.go:35: main redeclared in this block
previous declaration at ./wc.go:49
./sparse.go:11: main redeclared in this block
previous declaration at ./cp.go:35

Additionally, trying to execute wc.go on a Linux system with Go version 1.3.3 will fail because it uses features of Go that can only be found in newer versions – if you use the latest Go version you will have no problem running wc.go. The error message you will get is the following:

$ go version
go version go1.3.3 linux/amd64
$ go run wc.go
# command-line-arguments
./wc.go:40: syntax error: unexpected range, expecting {
./wc.go:46: non-declaration statement outside function body
./wc.go:47: syntax error: unexpected }

Reading a text file character by character

Although reading a text file character by character is not needed for the development of the wc(1) utility, it would be good to know how to implement it in Go. The name of the file is charByChar.go and it will be presented in four parts. The first part comes with the following Go code:

package main

import (
    "bufio"
    "fmt"
    "io/ioutil"
    "os"
    "strings"
)

Although charByChar.go does not have many lines of Go code, it needs lots of Go standard packages, which is a naïve indication that the task it implements is not trivial. The second part is:

func main() {
    arguments := os.Args
    if len(arguments) == 1 {
        fmt.Println("Not enough arguments!")
        os.Exit(1)
    }
    input := arguments[1]

The third part is the following:

    buf, err := ioutil.ReadFile(input)
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }

The last part has the following Go code:

    in := string(buf)
    s := bufio.NewScanner(strings.NewReader(in))
    s.Split(bufio.ScanRunes)

    for s.Scan() {
        fmt.Print(s.Text())
    }
}

ScanRunes is a split function that returns each character (rune) as a token. Then the call to Scan() allows us to process each character one by one. There also exist ScanWords and ScanLines for scanning words and lines, respectively. If you use fmt.Println(s.Text()) as the last statement of the program instead of fmt.Print(s.Text()), then each character will be printed on its own line and the task of the program will be more obvious.

Executing charByChar.go generates the following kind of output:

$ go run charByChar.go test
package main
…

The wc(1) command can verify the correctness of the Go code of charByChar.go by comparing the input file with the output generated by charByChar.go:

$ go run charByChar.go test | wc
      32      54     439
$ wc test
      32      54     439 test
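Since ScanWords and ScanLines were mentioned above, here is a small illustrative sketch (not part of charByChar.go) that uses bufio.ScanWords on an in-memory string to print one word per line:

package main

import (
    "bufio"
    "fmt"
    "strings"
)

func main() {
    // Illustrative sketch: scan an in-memory string word by word.
    in := "reading a text file word by word"

    // A Scanner with the ScanWords split function returns one word per token.
    s := bufio.NewScanner(strings.NewReader(in))
    s.Split(bufio.ScanWords)

    for s.Scan() {
        fmt.Println(s.Text())
    }
    if err := s.Err(); err != nil {
        fmt.Println("scanning failed:", err)
    }
}

Replacing bufio.ScanWords with bufio.ScanLines would make the same loop iterate over lines instead of words.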
How to create sparse files in Go

Big files that are created with the help of the Seek() method of os.File may have holes in them and occupy fewer disk blocks than files with the same size but without holes in them; such files are called sparse files. This section will develop a program that creates sparse files. The Go code of sparse.go will be presented in three parts. The first part is:

package main

import (
    "fmt"
    "log"
    "os"
    "path/filepath"
    "strconv"
)

The second part of sparse.go has the following Go code:

func main() {
    if len(os.Args) != 3 {
        fmt.Printf("usage: %s SIZE filename\n", filepath.Base(os.Args[0]))
        os.Exit(1)
    }

    SIZE, _ := strconv.ParseInt(os.Args[1], 10, 64)
    filename := os.Args[2]

    _, err := os.Stat(filename)
    if err == nil {
        fmt.Printf("File %s already exists.\n", filename)
        os.Exit(1)
    }

The strconv.ParseInt() function is used for converting the command line argument that defines the size of the sparse file from a string value to an integer value. Additionally, the os.Stat() call makes sure that you will not accidentally overwrite an existing file. The last part is where the action takes place:

    fd, err := os.Create(filename)
    if err != nil {
        log.Fatal("Failed to create output")
    }

    _, err = fd.Seek(SIZE-1, 0)
    if err != nil {
        fmt.Println(err)
        log.Fatal("Failed to seek")
    }

    _, err = fd.Write([]byte{0})
    if err != nil {
        fmt.Println(err)
        log.Fatal("Write operation failed")
    }

    err = fd.Close()
    if err != nil {
        fmt.Println(err)
        log.Fatal("Failed to close file")
    }
}

First, you try to create the desired sparse file using os.Create(). Then, you call fd.Seek() in order to make the file bigger without adding actual data. Last, you write a single byte to it using fd.Write(). As you do not have anything more to do with the file, you call fd.Close() and you are done.

Executing sparse.go generates the following output:

$ go run sparse.go 1000 test
$ go run sparse.go 1000 test
File test already exists.
exit status 1

How can you tell whether a file is a sparse file or not? You will learn in a while, but first let us create some files:

$ go run sparse.go 100000 testSparse
$ dd if=/dev/urandom bs=1 count=100000 of=noSparseDD
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.152511 s, 656 kB/s
$ dd if=/dev/urandom seek=100000 bs=1 count=0 of=sparseDD
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000159399 s, 0.0 kB/s
$ ls -l noSparseDD sparseDD testSparse
-rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 noSparseDD
-rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 sparseDD
-rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:40 testSparse

So, how can you tell if any of the three files is a sparse file or not? The -s flag of the ls(1) utility shows the number of file system blocks actually used by a file. So, the output of the ls -ls command allows you to detect whether you are dealing with a sparse file or not:

$ ls -ls noSparseDD sparseDD testSparse
104 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 noSparseDD
  0 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 sparseDD
  8 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:40 testSparse

Now look at the first column of the output. The noSparseDD file, which was generated using the dd(1) utility, is not a sparse file. The sparseDD file is a sparse file that was also generated using the dd(1) utility. Last, the testSparse file is also a sparse file, created using sparse.go.
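If you prefer to perform the same check from Go instead of ls(1), the following sketch is one possible approach (illustrative only, Unix-specific, and based on the assumption that FileInfo.Sys() returns a *syscall.Stat_t on your platform): it compares the blocks actually allocated on disk with the apparent file size.

package main

import (
    "fmt"
    "os"
    "syscall"
)

func main() {
    if len(os.Args) != 2 {
        fmt.Println("usage: isSparse filename") // hypothetical program name
        os.Exit(1)
    }

    info, err := os.Stat(os.Args[1])
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(1)
    }

    // On Unix-like systems, Sys() usually holds a *syscall.Stat_t.
    st, ok := info.Sys().(*syscall.Stat_t)
    if !ok {
        fmt.Println("Cannot read low-level stat information on this platform.")
        os.Exit(1)
    }

    // st.Blocks counts 512-byte blocks actually allocated on disk.
    allocated := st.Blocks * 512
    fmt.Printf("size: %d bytes, allocated: %d bytes\n", info.Size(), allocated)
    if allocated < info.Size() {
        fmt.Println("This looks like a sparse file.")
    } else {
        fmt.Println("This does not look like a sparse file.")
    }
}

Run against the three files created above, such a check should agree with what the first column of ls -ls reports.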
Mihalis Tsoukalos is a Unix administrator, programmer, DBA and mathematician who enjoys writing. He is currently writing Mastering Go. His research interests include programming languages, databases and operating systems. He holds a B.Sc. in Mathematics from the University of Patras and an M.Sc. in IT from University College London (UK). He has written various technical articles for Sys Admin, MacTech, C/C++ Users Journal, Linux Journal, Linux User and Developer, Linux Format and Linux Voice.