Tech Guides


How everyone at Netflix uses Jupyter notebooks, from data scientists and machine learning engineers to data analysts

Bhagyashree R
18 Aug 2018
4 min read
Netflix uses a variety of tools for data analysis. One of the main ways data scientists and engineers at Netflix interact with their data is through Jupyter notebooks. In addition to providing execution environments to users, Netflix invests in various parts of the Jupyter ecosystem and tooling. They are "reimagining what a notebook can be, who can use it, and what they can do with it."

Netflix aims to provide personalized content to its 130 million viewers. For this, more than 1 trillion events are written into a streaming ingestion pipeline every day. To support this, they have built an industry-leading data platform which is flexible, powerful, and complex. The platform serves many diverse users, such as analytics engineers, data engineers, and data scientists, who require different sets of tools and languages. To help the platform scale, they wanted to minimize the number of tools, and the solution was an open-source tool: Jupyter notebooks.

Why is the Jupyter notebook so compelling for Netflix?

These are the capabilities of the notebook that benefit Netflix's data scientists and engineers:

Standard messaging API: The Jupyter protocol provides a standard messaging API to the kernels that act as computational engines. It separates where content is written from where content is executed, which makes it language agnostic.
Editable file format: It provides an editable file format that stores the code and results together.
Web-based UI: Being web-based helps with interactively writing and running code as well as visualizing outputs.

How does Netflix use Jupyter notebooks?

The following are some of the use cases they use Jupyter notebooks for:

Data access: Notebooks were first introduced for workflows, and their adoption grew among the data scientists. Seeing this, Netflix decided to leverage their versatility and architecture for general data access. Notebooks provide a user-friendly interface for interactively running code, exploring the outputs, and visualizing data, all from a single cloud-based development environment.

Notebook templates: They introduced parameterized notebooks, which allow the use of parameters in the code and take values as input at runtime. These templates help:

Data scientists run an experiment with different coefficients and summarize the results
Data engineers execute data quality audits
Data analysts share prepared queries and visualizations
Software engineers email the results of a troubleshooting script

Scheduling notebooks: Next, they are using notebooks to create a unifying layer for scheduling workflows. Notebooks are used for interactive work and allow a smooth transition to scheduling that work to run recurrently. Many users create an entire workflow in a notebook and just copy/paste it into separate files for scheduling when they're ready to deploy it.
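As an illustrative sketch only (not taken from the article): one widely used open-source tool for this style of parameterized notebook execution is papermill, an nteract project. The notebook paths and parameter names below are hypothetical placeholders.

# Minimal sketch of parameterized notebook execution with papermill.
# The template, output path, and parameters are hypothetical examples.
import papermill as pm

pm.execute_notebook(
    "quality_audit_template.ipynb",    # hypothetical template notebook
    "quality_audit_2018-08-18.ipynb",  # hypothetical output notebook
    parameters={"table_name": "playback_events", "run_date": "2018-08-18"},
)

A scheduler can then run the same call on a recurring basis, producing a fresh, fully rendered output notebook for each run.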
Notebook infrastructure

The three fundamental components of the infrastructure are storage, compute, and interface (diagram source: Netflix Tech Blog).

Storage: The Netflix data platform uses Amazon S3 and EFS for cloud storage, which notebooks treat as virtual filesystems. Each user has a home directory on EFS containing a personal workspace for notebooks. This workspace stores any notebook created or uploaded by a user, and when a user launches a notebook interactively, all the reading and writing happens in that workspace.

Compute: All the jobs on the data platform, including queries, pipelines, and notebooks, run in containers. A container with reasonable default resources is provisioned when a user launches a notebook, and users can request more resources if the defaults are not enough. A unified execution environment with a prepared container image is provided, which has common libraries and an array of default kernels preinstalled. The orchestration and environments are managed with Titus, Netflix's container management platform.

Interface: They are using nteract, a React-based frontend for Jupyter notebooks, which emphasizes simplicity and composability as core design principles. They are also introducing native support for parameterization, which makes it easier to schedule notebooks and create reusable templates.

Netflix is planning to invest in both the frontend and backend to improve the overall notebook experience, and this year they are also sponsoring JupyterCon. To read more about how Jupyter is offering value to Netflix, read Netflix's original post on Medium.

10 reasons why data scientists love Jupyter notebooks
What's new in Jupyter Notebook 5.3.0
Netflix open sources Zuul 2 cloud gateway


4 key benefits of using Firebase for mobile app development

Guest Contributor
19 Oct 2018
6 min read
A powerful backend solution is essential for building sophisticated mobile apps. In recent years, Firebase has risen to prominence as a power-packed Backend-as-a-Service (BaaS), thanks to its wide-ranging features and performance-boosting elements. After Google acquired it in 2014, several of its features got a further performance boost. These features have made Firebase a popular backend solution for app developers and other emerging IT sectors. Let us look at its 4 key benefits for cross-platform mobile app development.

Unleashing the power of Google Analytics

Google Analytics for Firebase is a completely free solution with unconstrained reporting on many aspects. The reporting feature allows you to evaluate client behavior and report on broken links, user interactions, and all other aspects of user experience and user interface. The reporting helps developers make informed decisions while optimizing the UI and the app performance.

The unmatched scale of reporting: Firebase analytics allows access to unlimited reports on as many as 500 different events. Developers can also create custom events for reporting as their needs dictate.
Robust audience segmentation: Firebase analytics also allows segmenting the app audience on different parameters. The integrated console allows segmenting the audience on the basis of device information, custom events, and user characteristics.

Crash reporting to fix bugs

Firebase also helps to address performance issues of an app by fixing bugs right from its backend solution. It is equipped with a robust crash reporting feature, which delivers detailed bug and crash reports to address the coding errors in an app. The reporting feature is capable of grouping issues into different categories as per the characteristics of the problem. Here are some of the attributes of this reporting feature.

Monitoring errors: It is capable of monitoring fatal errors for iOS apps and both fatal and non-fatal errors for Android apps. Generally, reports are prioritized by the impact such errors have on the user experience.
Required data collection to fix errors: The reports also list all the details concerning the device in use, performance shortfalls, and the user scenarios surrounding the erroneous events. According to the contributing factors and other similarities, the issues are grouped into different categories.
Email alerts: It also allows sending email alerts as and when such issues or problems are detected.
Configuration of error reporting: Error reporting can also be configured remotely to control who can access the reports and the list of events that occurred before an error.
It is free: Crash and bug reporting is free with Firebase. You don't need to pay a penny to access this feature.

Synchronizing data with the Realtime Database

With Firebase, you can sync offline and online data through a NoSQL database. This makes the application data available in both the offline and online states of the app and boosts real-time collaboration on the application data. Here are some of its benefits.

Real-time: Unlike HTTP requests that are used to update data across interfaces, the Firebase Realtime Database syncs data with every change, helping to reflect the change in real time across any device in use.
Offline: As the Firebase Realtime Database SDK saves your data to local disk, you can always access the data offline. As and when connectivity is back, the changes are synced with the present state of the server.
Access from multiple devices: The Firebase Realtime Database allows accessing application data from multiple devices and interfaces, including mobile devices and the web.
Splitting and scaling your data: Thanks to the Firebase Realtime Database, you can split your data across multiple database instances within the same project and set rules for each database instance.

A minimal server-side sketch of reading and writing Realtime Database data is shown below.
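For illustration only: the article focuses on the mobile client SDKs, but a quick way to try the Realtime Database from Python is the firebase_admin server-side SDK. The service-account path, database URL, and data below are hypothetical placeholders.

# Illustrative sketch: write and read Realtime Database data with firebase_admin.
# The key file, database URL, and data are placeholders, not a real project.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://example-project.firebaseio.com"})

scores = db.reference("scores")
scores.child("player_1").set({"points": 120})  # connected clients see this change in real time
print(scores.get())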
Firebase is feature rich for futuristic app development

In addition to the above, Firebase is equipped with a host of rich features required for building sophisticated, feature-rich mobile apps. Let us have a look at some of the key features of Firebase that have made it a reliable platform for cross-platform development.

Hosting: The hosting feature of Firebase allows developers to push their content to a Content Delivery Network (CDN) during production. Firebase offers full hosting support with a custom domain, a global CDN, and an automatically provisioned SSL certificate.
Authentication: The Firebase backend service offers a powerful authentication feature. It comes equipped with simple SDKs and easy-to-use libraries to integrate authentication with any mobile app.
Storage: The Firebase storage feature is powered by Google Cloud Storage and allows users to easily download media files and visual content. This feature is also helpful for making use of user-generated content.
Cloud Messaging: With Cloud Messaging, a mobile app can easily send messages to users and engage in real-time communication.
Remote Config: This feature of Firebase allows developers to roll out certain changes in the app remotely. Thanks to this, the changes are reflected in the existing version, and the user does not need to download the latest updated version.
Test Lab: With Test Lab, developers can easily test the app on all the devices listed in the Google data center. It can even do the testing without requiring any test code for the respective app.
Notifications: This feature gives developers a console to manage and send user-focused custom notifications to users.
App Indexing: This feature allows developers to index the app in Google Search and achieve higher search ranks in app marketplaces like the Play Store and App Store.
Dynamic Links: Firebase also equips the app to create dynamic links, or smart URLs, to present the respective app across all digital platforms, including social media, mobile app, web, email, and other channels.

All the above-mentioned benefits and useful features that empower mobile app developers to create dynamic user experiences have helped Firebase achieve such unprecedented popularity among developers worldwide. No wonder that, in a short time span, it has become a very popular backend solution for so many successful cross-platform mobile apps.

Some exemplary use cases of Firebase

Here we have picked two use cases of Firebase: one relatively new and successful app, and one leading app in its niche.

Fabulous: Fabulous is a unique app that trains users to drop bad habits and adopt good habits to ensure health and wellbeing. By customizing the onboarding process through Firebase, the app managed to double its retention rate, and it could deliver a custom user experience to different groups of users as per their preferences.

Onefootball: The leading mobile soccer app Onefootball experienced a more than 5% increase in user session time thanks to Firebase.
The new backend solution powered by Firebase helped the app engage its audience more efficiently than ever before, and the custom content created by this popular app enjoys better traction with users thanks to higher engagement.

Author Bio: Juned Ahmed works as an IT consultant at IndianAppDevelopers, a leading mobile app development company which offers to hire app developers in India for mobile solutions. He has more than 10 years of experience in developing and implementing marketing strategies.

How to integrate Firebase on Android/iOS applications natively
Build powerful progressive web apps with Firebase
How to integrate Firebase with NativeScript for cross-platform app development


The oldest programming languages in use today

Antonio Cucciniello
11 Jul 2017
5 min read
Today, we are going to discuss some of the oldest, most established programming languages that are still in use. Some developers may be surprised to learn that many of these languages surpass them in age, in a world where technology, especially in the world of development, is advancing at such a rapid rate. But then, old is gold, after all. So, in age order, let's present the oldest programming languages in use today.

C

The C language was created in 1972 (it's not that old, okay). C is a lower-level language that was based on an earlier language called B (do you see a trend here?). It is a general-purpose language, and a parent language from which many future programming languages derive, such as C#, Java, JavaScript, Perl, PHP, and Python. It is used in many applications that must interface with hardware or work closely with memory.

C++

Pronounced see-plus-plus, C++ was developed 11 years later, in 1983. It is very similar to C; in fact, it is often considered an extension of C. It added various concepts such as classes, virtual functions, and templates. It is more of an intermediate-level language that can be used at a lower or higher level, depending on the application. It is also known for being used in low-latency applications.

Objective-C

Around the same time as C++ was being released to the public, Objective-C was created. If you took an educated guess from the name and said that it would be another extension of C, then you'd be right. This version was meant to be an object-oriented version of C (there's a lot in a name, clearly). It is used, probably most famously, by Apple. If you are a Mac or iOS user, then your iPhone or Mac applications were most likely developed with Objective-C (until the recent move over to Swift).

Python

We are going to take a quick jump ahead in time to the '90s for this one. In 1991, the Python programming language was released, though it had been in development since the late '80s. It is a dynamically typed, object-oriented language that is often used for scripting and web applications, usually with one of its frameworks like Django or Flask on the backend. It is one of the most popular programming languages in use today.

Ruby

In 1993, Ruby was released. Today, you have probably heard of Ruby on Rails, which is primarily used to create the backend of web applications using Ruby. Unlike the many languages derived from C, this language was influenced by older languages such as Perl and Lisp. Ruby was designed for productive and fun programming, by making the language closer to human needs rather than machine needs.

Java

Two years later, in 1995, Java was developed. This is a high-level language that is derived from C. It is famously known for its use in web applications and as the language for developing Android applications and the Android OS. It used to be the most popular language a few years ago, but its popularity and usage have definitely decreased.

PHP

In the same year as Java was developed, PHP was born. It is an open source programming language developed for the purpose of creating dynamic websites. It is also used for server-side web development. Its usage is definitely declining, but it is still in use today.

JavaScript

That same year (yup, '95 was a good year for programming, not so much for fans of Full House), JavaScript was brought to the world. Its purpose was to be a high-level language that helped with the functionality of a web page.
Today, it is sometimes used as a scripting language, as well as on the backend of applications with the release of Node.js. It is one of the most popular and widely used programming languages today.

Conclusion

That was our brief history lesson on some programming languages still in use. Even though some of them are 20, 30, even over 40 years old, they are being used by thousands of developers daily. They all have a variety of uses, from lower level to higher level, from web applications to mobile applications. Do you feel there is a need for newer languages, or are you happy with what we have? If you have any favorites, let us know which one and why!

About the author

Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js) from New Jersey. His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello


ECMAScript 7 - What to expect?

Soham Kamani
22 Jun 2016
5 min read
Now that ES6 has been officially accepted, it's time to look forward to the next iteration of JavaScript: ECMAScript 7. There are many new and exciting features in ES7.

Support for asynchronous programming

Of all the new features in ES7, the most exciting one, in my view, is the addition of async and await for asynchronous programming, which occurs quite often, especially when you're trying to build applications using Node.js. To explain async and await, it's better you first see an example. Let's say you have three asynchronous operations, each one dependent on the result returned by the previous one. There are multiple ways you could do that. The most common way is to use callbacks. Let's take a look at the code:

myFirstOperation(function(err, firstResult){
  mySecondOperation(firstResult, function(err, secondResult){
    myThirdOperation(secondResult, function(err, thirdResult){
      /* Do something with the third result */
    });
  });
});

The obvious flaw with this approach is that it leads to a situation known as callback hell. The introduction of promises simplified async programming greatly, so let's see how the code would look using promises (which were introduced with ES6):

myFirstPromise()
  .then(firstResult => mySecondPromise(firstResult))
  .then(secondResult => myThirdPromise(secondResult))
  .then(thirdResult => {
    /* Do something with the third result */
  }, err => {
    /* Handle error */
  });

Now, let's see how to handle these operations using async and await:

async function myOperations(){
  const firstResult = await myFirstOperation();
  const secondResult = await mySecondOperation(firstResult);
  const thirdResult = await myThirdOperation(secondResult);
  /* Do something with the third result */
};

try {
  myOperations();
} catch (err) {
  /* Handle error */
}

This looks just like synchronous code, doesn't it? Exactly! The use of async and await makes life much simpler by making async functions seem as if they were synchronous code. Under the hood, though, all of these functions execute in a non-blocking fashion, so you have the benefit of non-blocking async functions with the simplicity and readability of synchronous code. Brilliant!

Object rest and object spread

In ES6, we saw the introduction of array rest and spread operations. These new additions make it easier for you to combine and decompose arrays. ES7 takes this one level further by providing similar functionality for objects.

Object rest

This is an extension to the existing ES6 destructuring operation. On assignment of the properties during destructuring, if there is an additional ...rest parameter, all the remaining keys and values are assigned to it as another object. For example:

const myObject = {
  lorem : 'ipsum',
  dolor : 'sit',
  amet : 'foo',
  bar : 'baz'
};

const { lorem, dolor, ...others } = myObject;
// lorem === 'ipsum'
// dolor === 'sit'
// others === { amet : 'foo', bar : 'baz' }

Object spread

This is similar to object rest, but is used for constructing objects instead of destructuring them:

const obj1 = {
  amet : 'foo',
  bar : 'baz'
};

const myObject = {
  lorem : 'ipsum',
  dolor : 'sit',
  ...obj1
};

/* myObject === {
  lorem : 'ipsum',
  dolor : 'sit',
  amet : 'foo',
  bar : 'baz'
}; */

This is an alternative way of expressing the Object.assign function already present in ES6. In the preceding code, myObject is a new object, constructed using some properties of obj1 (there is no reference to obj1).
The equivalent way of doing this in ES6 would be:

const myObject = Object.assign({
  lorem : 'ipsum',
  dolor : 'sit'
}, obj1);

Of course, the object spread notation is much more readable, and it is the recommended way of assigning new objects, if you choose to adopt it.

Observables

The Object.observe function is a great new addition for asynchronously monitoring changes made to objects. Using this feature, you will be able to handle any sort of change made to objects, along with seeing how and when that change was made. Let's look at an example of how Object.observe will work:

const myObject = {};

Object.observe(myObject, (changes) => {
  const [{ name, object, type, oldValue }] = changes;
  console.log(`You tried to ${type} the ${name} property`);
});

myObject.foo = 'bar';
// You tried to add the foo property

Caveat

Although this is a good feature, as of this writing, Object.observe is being tagged as obsolete, which means that this feature could be removed at any time in the future. While it's still OK to play around and experiment with it, it is recommended not to use it in production systems and larger applications.

Additional utility methods

Additional methods have been added to the String and Array prototypes:

Array.prototype.includes, which checks whether an array includes an element or not:

[1,2,3].includes(1); // true

String.prototype.padLeft and String.prototype.padRight:

'abc'.padLeft(10);  // "       abc"
'abc'.padRight(10); // "abc       "

String.prototype.trimLeft and String.prototype.trimRight:

'\n\t abc \n\t'.trimLeft();  // "abc \n\t"
'\n\t abc \n\t'.trimRight(); // "\n\t abc"

Working with ES7 today

Many of the features mentioned here are still in the proposal phase, but you can still get started using them in your JavaScript application today! The most common tool used to get started is Babel. In case you want to make a browser application, Babel is perfect for compiling all of your code to regular ES5. Alternatively, you can use the many Babel plugins already available to use Babel with your favorite toolbelt or build system. In case you have trouble setting up your project, there are many Yeoman generators to help you get started. If you are planning to use ES7 to build a Node module or a Node application, there is a Yeoman generator available for that as well.

About the author

Soham Kamani is a full-stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT. He can be found on Twitter at @sohamkamani and at sohamkamani.com.


Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions

Guest Contributor
20 Dec 2017
8 min read
[box type="info" align="" class="" width=""]We bring to you another guest post by Benjamin Rojogan on Logistic regression to aid healthcare sector in reducing patient readmission. Ben's previous post on ensemble methods to optimize machine learning models is also available for a quick read here.[/box] ER visits are not cheap for any party involved. Whether this be the patient or the insurance company. However, this does not stop some patients from being regular repeat visitors. These recurring visits are due to lack of intervention for problems such as substance abuse, chronic diseases and mental illness. This increases costs for everybody in the healthcare system and reduces quality of care by playing a role in the overflowing of Emergency Departments (EDs). Research teams at UW and other universities are partnering with companies like Kensci to figure out how to approach the problem of reducing readmission rates. The ability to predict the likelihood of a patient’s readmission will allow for targeted intervention which in turn will help reduce the frequency of readmissions. Thus making the population healthier and hopefully reducing the estimated 41.3 billion USD healthcare costs for the entire system. How do they plan to do it? With big data and statistics, of course. A plethora of algorithms are available for data scientists to use to approach this problem. Many possible variables could affect the readmission and medical costs. Also, there are also many different ways researchers might pose their questions. However, the researchers at UW and many other institutions have been heavily focused on reducing the readmission rate simply by trying to calculate whether a person would or would not be readmitted. In particular, this team of researchers was curious about chronic ailments. Patients with chronic ailments are likely to have random flare ups that require immediate attention. Being able to predict if a patient will have an ER visit can lead to managing the cause more effectively. One approach taken by the data science team at UW as well as the Department of Family and Community Medicine at the University of Toronto was to utilize logistic regression to predict whether or not a patient would be readmitted. Patient readmission can be broken down into a binary output: either the patient is readmitted or not. As such logistic regression has been a useful model in my experience to approach this problem. Logistic Regression to predict patient readmissions Why do data scientists like to use logistic regression? Where is it used? And how does it compare to other data algorithms? Logistic regression is a statistical method that statisticians and data scientists use to classify people, products, entities, etc. It is used for analyzing data that produces a binary classification based on one or many independent variables. This means, it produces two clear classifications (Yes or No, 1 or 0, etc). With the example above, the binary classification would be: is the patient readmitted or not? Other examples of this could be whether to give a customer a loan or not, whether a medical claim is fraud or not, whether a patient has diabetes or not. Despite its name, logistic regression does not provide the same output like linear regression (per se). There are some similarities, for instance, the linear model is somewhat consistent as you might notice in the equation below where you see what is very similar to a linear equation. But the final output is based on the log odds. 
Linear regression and multivariate regression both take one to many independent variables and produce some form of continuous function. Linear regression could be used to predict the price of a house, a person's age, or the cost of a product an e-commerce site should display to each customer; the output is not limited to a few discrete classifications. Logistic regression, by contrast, produces discrete classifiers. For instance, an algorithm using logistic regression could be used to classify whether a certain stock would trade above or below $50 a share, whereas linear regression would be used to predict whether the share would be worth $50.01, $50.02, and so on.

Logistic regression is a calculation that uses the odds of a certain classification. In the equation above, the symbol π represents the probability (and π / (1 − π) the odds). To reduce the error rate, we should predict Y = 1 when p ≥ 0.5 and Y = 0 when p < 0.5. This creates a linear classifier: a boundary such that when the combination of coefficients β0 + x · β yields a probability p < 0.5, we predict Y = 0. By generating coefficients that predict the logit transformation, the method allows us to classify for the characteristic of interest.

Now that is a lot of complex math mumbo jumbo. Let's try to break it down into simpler terms.

Probability vs. odds

Let's start with probability. Say a patient has a probability of 0.6 of being readmitted. Then the probability that the patient won't be readmitted is 0.4. Now, we want to convert this into odds, which is what the π / (1 − π) part of the equation above does. You would take 0.6/0.4 and get odds of 1.5. That means the odds of the patient being readmitted are 1.5 to 1. If instead the probability was 0.5 for both being readmitted and not being readmitted, then the odds would be 1:1.

The next step in the logistic regression model is to take the odds and get the log odds. You do this by putting the 1.5 into the log portion of the equation, which gives roughly 0.41 (the natural log of 1.5, rounded).

In logistic regression, we don't actually know p. That is what we are trying to find and model using various coefficients and input variables. Each input provides a value that changes how much more or less likely an event is to occur. All of these coefficients are used to calculate the log odds. The model can take multiple variables, like age, sex, height, and so on, and specify how much of an effect each variable has on the odds that an event will occur.
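For illustration, here is a minimal scikit-learn sketch of the kind of readmission classifier described above. The features and data are made-up placeholders, not the UW or Toronto teams' actual models.

# Hypothetical example: predicting readmission (1) vs. no readmission (0)
# from a few toy patient features: age, chronic conditions, prior ER visits.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[72, 3, 4], [34, 0, 0], [61, 2, 1], [55, 1, 3], [80, 4, 6], [29, 0, 1]])
y = np.array([1, 0, 0, 1, 1, 0])  # 1 = readmitted, 0 = not readmitted

model = LogisticRegression()
model.fit(X, y)

p = model.predict_proba([[67, 2, 2]])[0, 1]  # predicted probability of readmission
odds = p / (1 - p)                           # odds, as discussed above
log_odds = np.log(odds)                      # the log odds the model actually works with
print(p, odds, log_odds, model.predict([[67, 2, 2]]))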
Once the initial model is developed, then comes the work of deciding its value. How does a business go from creating an algorithm inside a computer to translating it into action? Some of us like to say that the "computers" are the easy part; personally, I find the hard part to be the "people". After all, at the end of the day, it comes down to business value: will an algorithm save money or not? That means it has to be applied in real life, which could take the form of a new initiative, strategy, product recommendation, and so on. You need to find the outliers that are worth going after! For instance, going back to the patient readmission example, the algorithm points out patients with high probabilities of being readmitted; however, if the readmission costs are low, they will probably be ignored, sadly. That is how businesses (including hospitals) look at problems.

Logistic regression is a great tool for binary classification, unlike many other algorithms that estimate continuous variables or distributions. This statistical method can be utilized to classify whether a person is likely to get cancer because of environmental variables like proximity to a highway, smoking habits, and so on. The method has been used successfully in the medical, financial, and insurance industries for a while. Knowing when to use which algorithm takes time; the more problems a data scientist faces, the faster they will recognize whether to use logistic regression or decision trees.

Using logistic regression gives healthcare institutions the opportunity to accurately target at-risk individuals who should receive a more tailored behavioral health plan to help improve their daily health habits. This in turn opens the opportunity for better health for patients and lower costs for hospitals.

About the author

Benjamin Rogojan has spent his career focused on healthcare data. He has developed algorithms to detect fraud, reduce patient readmission, and redesign insurance provider policy to help reduce the overall cost of healthcare. He has also helped develop analytics for marketing and IT operations in order to optimize limited resources such as employees and budget. Ben consults privately on data science and engineering problems, both solo and with a company called Acheron Analytics. He has experience both working hands-on with technical problems and helping leadership teams develop strategies to maximize their data.


Data science on Windows is a big no

Aaron Lazar
13 Apr 2018
5 min read
I read a post from a LinkedIn connection about a week ago. It read: "The first step in becoming a data scientist: forget about Windows."

Even if you're not a programmer, that's pretty controversial. The first nerdy thought I had was: that's not true. The first step to data science is not choosing an OS, it's statistics! Anyway, I kept wondering what's wrong with doing data science on Windows, exactly. Why is the legacy product (Windows), created by one of the leaders in data science and artificial intelligence, not suitable to support the very thing it is driving?

As a publishing professional who has worked with a lot of authors, one of the main issues I've faced while collaborating with them is the compatibility of platforms, especially when it comes to sharing documents, working with code, and so on. At least 80 percent of the authors I've worked with have been using something other than Windows. They are extremely particular about the platform they're working on, and have usually chosen Linux. I don't know if they consider it a punishable offence, but I've been using Windows since I was 12, even though I have played around with Macs and machines running Linux/Unix. I've never been affectionately drawn towards those machines as much as my beloved laptop that is happily rolling on Windows 10 Pro.

Why is data science on Windows a bad idea?

When Microsoft created Windows, its main idea was to make the platform as user friendly as possible, and it focused every ounce of energy on that and voila! It created one of the simplest operating systems one could ever use. Microsoft wanted to make computing easy for everyone: teachers, housewives, kids, business professionals. However, it did not cater to the developer community as much as to its everyday users.

Now, that's not to say that you can't use a Windows machine to code. Of course you can run Python or R programs. But you're likely to face issues with compatibility and speed. If you're choosing to use the command line, and something goes wrong, it's a real PITA to debug on Windows. Also, if you're doing cluster computing with other Linux/Mac machines, it's better to have one of them yourself. Many would agree that Windows is more likely to suffer a BSoD (Blue Screen of Death) than a Mac or a Unix machine, messing up an algorithm that's been running for a long time.

[Check out our most read post: 15 useful Python libraries to make your data science tasks easier.]

Is it all that bad?

Well, not really. In fact, if you need to pump in a couple more gigs of RAM, you can't think of doing that on a Mac. And although you might still encounter some of the weird stuff mentioned above on a Windows PC, you can always Google a workaround. Don't beat yourself up if you own a PC. You can always set up a dual boot, running a Linux distribution in parallel; you might want to check out Vagrant for this. Also, you'll be surprised if you're a Mac owner and you plan some heavy-duty deep learning on a GPU: you can't really run CUDA without messing things up. CUDA will only work well with NVIDIA's GPUs on a PC.

In Joey Tribbiani's words, "This is a moo point." To me, data science is really OS agnostic. For instance, now with Docker, you don't really have to worry much about which OS you're running, so from that perspective, data science on Windows may work for you.

Still feel for Windows? Well, there are obviously drawbacks.
You’ll still keep living with the fear of isolation that Microsoft tries to create in the minds of customers. Moreover, you’ll be faced with “slowdom” if that’s a word, what with all the background processes eating away your computing power! You’ll be defying everything that modern computing is defined by - KISS, Open Source, Agile, etc. Another important thing you need to keep in mind is that when you’re working with so much data, you really don’t wanna get hacked! Last but not the least, if you’re intending to dabble with AI and Blockchain, your best bet is not going to be Windows. All said and done, if you’re a budding data scientist who’s looking to buy some new equipment, you might want to consider a few things before you invest in your machine. Think about what you’ll be working with, what tools you might want to use and if you want to play safe, it’s best to go with a Linux system. If you have the money and want to flaunt it, while still enjoying support from most tools, think about a Mac. And finally, if you’re brave and are not worried about having two OSes running on your system, go in for a Windows PC. So the next time someone decides to gift you a Windows PC, don’t politely decline right away. Grab it and swiftly install a Linux distro! Happy coding! :) *I will put an asterisk here, for the thoughts put in this article are completely my personal opinion and it might differ from person to person. Go ahead and share your thoughts in the comments section below.

Is YouTube's AI Algorithm evil?

Amarabha Banerjee
30 Sep 2018
6 min read
YouTube has been at the center of content creation, content distribution, and advertising activities for some time now. The impact of YouTube can be estimated from its 1.8 billion users worldwide. While the YouTube video hosting concept has been a great success story for content creators, the video viewing and recommendation model has been in the middle of a brewing controversy lately.

The controversy

Logan Paul was already a top-rated YouTube star when he stumbled across a hanging dead body in a Japanese forest which is famous as a suicide spot. After the initial shock and awe, Logan Paul seemed quite amused and commented, "Dude, his hands are purple," then turned to his friends and giggled: "You ever stand next to a dead guy?" This particular instance was a shocking moment for YouTubers all across the globe. Disapproving reactions poured in, and the video was taken down by YouTube 24 hours later. In those 24 hours, the video managed to garner 6 million views. Even after the furious backlash, users complained that they were still seeing recommendations for Logan Paul's videos. That brought the emphasis back onto the recommendation system that YouTube uses.

YouTube video recommendation

Back in 2005, when YouTube first started out, it had a uniform homepage for all users. This meant that every YouTube user would see the same homepage, and the creators featured there would get a huge boost in their viewership. Their selection was based on subscriber count, views, and user engagement metrics, e.g. likes, comments, shares, and so on. This inspired other users to become creators and start contributing content to become a part of the YouTube family.

In 2006, YouTube was bought by Google, and its policies and homepage started evolving gradually. As ads started showing on YouTube videos, the scenario changed quite quickly. Also, with the rapid rise in the number of users, Google thought it a good idea to curate the homepage according to each user's watch history, subscriptions, and likes. This was a good move in principle, since it helped users see what they wanted to see. As the next level of innovation, a machine learning model was created to suggest or recommend videos to users. The goal of this deep-neural-network-based recommendation engine was to increase the watch time of every video so that users stay longer on the platform.

What did it change, and how?

When YouTube's machine learning algorithm shows a few videos in your feed as "Recommended for you", it predicts what you want to see from your watch history and the watch history of similar users. If you interact with any of these videos and watch them for a certain amount of time, the recommendation engine considers it a success and starts curating a list based on your interactions with its suggested videos. The more data it gathers about your choices and watch history, the more confident it becomes in its own video decisions.

The major goal of YouTube's recommendation engine is to attract your attention and get you hooked on the platform to generate more watch time. More watch time means more revenue and more scope for targeted ads. What this changes is the fundamental concept of choice and the exercise of user discretion. The moment the YouTube algorithm treats watch time as the most important metric for recommending videos to you, less importance goes to the organic interactions on YouTube, which include liking, commenting on, and subscribing to videos and channels.
Users get to see video recommendations based on the YouTube algorithm's understanding of them and its goal of maximizing watch time, with less importance given to user choices.

Distorted reality and YouTube

This attention-maximizing model is the fundamental working mechanism of nearly all social media networks. But YouTube has not been implicated in accusations of distorting reality and spreading fake news as much as Facebook has been in the mainstream media. Times are changing, though, and so are the viewpoints on YouTube's influence on the global population and its ability to manipulate important public opinion.

Guillaume Chaslot, a 36-year-old French computer programmer with a Ph.D. in artificial intelligence, was one of the engineers on the core team that developed and perfected the YouTube algorithm. In his own words: "YouTube is something that looks like reality, but it is distorted to make you spend more time online. The recommendation algorithm is not optimizing for what is truthful, or balanced, or healthy for democracy." Chaslot explains that the algorithm never stays the same; it is constantly changing the weight it gives to different signals: the viewing patterns of a user, for example, or the length of time a video is watched before someone clicks away. Chaslot was fired by Google in 2013 over performance issues. His claim was that he wanted to change the approach of the YouTube algorithm to make it more aligned with democratic values, instead of being devoted only to increasing watch time.

Where are we headed?

I am not qualified or righteous enough to answer the direct question: is YouTube good or bad? YouTube creates opportunities for millions of creators worldwide to showcase their talent and present it to a global audience without worrying about countries or boundaries. That in itself is a huge power for an internet application. But the crucial point to remember here is whether YouTube is using this power just to keep users glued to the screen. Do they really care if you are seeing divisive content or prejudiced flat-earther conspiracies as recommended videos?

The algorithm could be tweaked to include parameters that would remove unintended bias, such as whether a video is propagating fake news or influencing voters' minds in an unlawful way. But that is near impossible, as machines lack morality, empathy, and even common sense. To incorporate humane values such as honesty and morality into an AI system would be like creating an AI that is more human than machine. This is why machine-augmented human intelligence will play a more and more crucial role in the near future. The possibilities are endless, be they good or bad. Whether we progress or digress might not be in our hands anymore. But what might be in our hands is to come together to put effective checkpoints in place to identify and course-correct scenarios where algorithms run wild.

Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
Like newspapers, Google algorithms are protected by the First Amendment
California replaces cash bail with algorithms


8 Machine learning best practices [Tutorial]

Melisha Dsouza
02 Sep 2018
9 min read
Machine learning introduces a huge potential to reduce costs and generate new revenue in an enterprise. Applied effectively, machine learning helps solve practical problems smartly within an organization. It automates tasks that would otherwise need to be performed by a live agent. Machine learning has made drastic improvements in the past few years, but in many cases a machine still needs the assistance of a human to complete its task. This is why it is necessary for organizations to learn the machine learning best practices covered in this article.

This article is an excerpt from a book written by Chiheb Chebbi titled Mastering Machine Learning for Penetration Testing.

Feature engineering in machine learning

Feature engineering and feature selection are essential to every modern data science product, especially machine learning based projects. According to research, over 50% of the time spent building a model is occupied by cleaning, processing, and selecting the data required to train it. It is your responsibility to design, represent, and select the features. Most machine learning algorithms cannot work on raw data; they are not smart enough to do so. Thus, feature engineering is needed to transform data from its raw state into data that can be understood and consumed by algorithms. Professor Andrew Ng once said: "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering."

Feature engineering is a process in the data preparation phase, according to the cross-industry standard process for data mining. The term feature engineering itself is not formally defined; it groups together all of the tasks for designing features to build intelligent systems, and it plays an important role in the system. If you check data science competitions, I bet you have noticed that the competitors all use the same algorithms, but the winners perform the best feature engineering. If you want to enhance your data science and machine learning skills, I highly recommend that you visit and compete at www.kaggle.com.

When searching for machine learning resources, you will face many different terminologies. To avoid any confusion, we need to distinguish between feature selection and feature engineering. Feature engineering transforms raw data into suitable features, while feature selection extracts necessary features from the engineered data: it selects the subset of all features that excludes redundant or irrelevant ones.

Machine learning best practices

Feature engineering enhances the performance of our machine learning system. Let's explore some tips and best practices for building robust intelligent systems, across the different aspects of machine learning projects.

Information security datasets

Data is a vital part of every machine learning model. To train models, we need to feed them datasets. As you will have noticed while reading the earlier chapters, building an accurate and efficient machine learning model requires a huge volume of data, even after cleaning it. Big companies with large amounts of available data use their internal datasets to build models, but small organizations, like startups, often struggle to acquire such a volume of data. International rules and regulations make the mission harder, because data privacy is an important aspect of information security.
Every modern business must protect its users' data. To solve this problem, many institutions and organizations deliver publicly available datasets, so that others can download them and build their models for educational or commercial use. Some information security datasets are as follows:

The Controller Area Network (CAN) dataset for intrusion detection (OTIDS): http://ocslab.hksecurity.net/Dataset/CAN-intrusion-dataset
The car-hacking dataset for intrusion detection: http://ocslab.hksecurity.net/Datasets/CAN-intrusion-dataset
The web-hacking dataset for cyber criminal profiling: http://ocslab.hksecurity.net/Datasets/web-hacking-profiling
The API-based malware detection system (APIMDS) dataset: http://ocslab.hksecurity.net/apimds-dataset
The intrusion detection evaluation dataset (CICIDS2017): http://www.unb.ca/cic/datasets/ids-2017.html
The Tor-nonTor dataset: http://www.unb.ca/cic/datasets/tor.html
The Android adware and general malware dataset: http://www.unb.ca/cic/datasets/android-adware.html

Use Project Jupyter

The Jupyter Notebook is an open source web application used to create and share coding documents. I highly recommend it, especially for novice data scientists, for many reasons. It gives you the ability to code and visualize output directly, and it is great for discovering and playing with data; exploring data is an important step in building machine learning models. Jupyter's official website is http://jupyter.org/.

To install it using pip, simply type the following:

python -m pip install --upgrade pip
python -m pip install jupyter

Speed up training with GPUs

As you know, even with good feature engineering, training in machine learning is computationally expensive. The quickest way to train learning algorithms is to use graphics processing units (GPUs). Generally, though not in all cases, using GPUs is a wise decision for training models. In order to overcome CPU performance bottlenecks, the gather/scatter GPU architecture is best, performing parallel operations to speed up computing. TensorFlow supports the use of GPUs to train machine learning models; the devices are represented as strings, as in the following example:

"/device:GPU:0" : your machine's first GPU device
"/device:GPU:1" : the second GPU device on your machine

To use a GPU device in TensorFlow, you can add the following line:

with tf.device('/device:GPU:0'):
    <What to do here>

You can use a single GPU or multiple GPUs. Don't forget to install the CUDA toolkit, by using the following commands:

wget "http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.44-1_amd64.deb"
sudo dpkg -i cuda-repo-ubuntu1604_8.0.44-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda

Install cuDNN as follows:

sudo tar -xvf cudnn-8.0-linux-x64-v5.1.tgz -C /usr/local
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda

Selecting models and learning curves

To improve the performance of machine learning models, there are many hyperparameters to adjust, and the more data that is used, the more errors that can happen. To work on these parameters, there is a method called GridSearchCV. It performs searches on predefined parameter values, through iterations. GridSearchCV uses the score() function by default. To use it in scikit-learn, import it with this line:

from sklearn.grid_search import GridSearchCV
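The following illustrative sketch (not part of the book excerpt) shows GridSearchCV tuning a simple classifier on synthetic data. Note that recent scikit-learn releases expose GridSearchCV and learning_curve under sklearn.model_selection rather than the older module paths quoted here; the parameter grid and dataset below are arbitrary placeholders.

# Illustrative only: hyperparameter search with GridSearchCV on synthetic data.
# Newer scikit-learn versions import these from sklearn.model_selection.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}  # arbitrary example grid
search = GridSearchCV(SVC(), param_grid, cv=5)  # uses the estimator's score() by default
search.fit(X, y)

print(search.best_params_, search.best_score_)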
Learning curves are used to understand the performance of a machine learning model. To use a learning curve in scikit-learn, import it into your Python project as follows:

from sklearn.learning_curve import learning_curve

Machine learning architecture

In the real world, data scientists do not find data as clean as the publicly available datasets. Real-world data is stored by different means, and the data itself comes in different shapes and categories. Thus, machine learning practitioners need to build their own systems and pipelines to achieve their goals and train their models. A typical machine learning project respects a common architecture (illustrated with a diagram in the original excerpt).

Coding

Good coding skills are very important for data science and machine learning. In addition to using effective linear algebra, statistics, and mathematics, data scientists should learn how to code properly. As a data scientist, you can choose from many programming languages, like Python, R, Java, and so on. Respecting coding best practices is very helpful and highly recommended. Writing elegant, clean, and understandable code can be done through these tips:

Comments are very important for understandable code, so don't forget to comment your code, all of the time.
Choose the right names for variables, functions, methods, packages, and modules.
Use four spaces per indentation level.
Structure your repository properly.
Follow common style guidelines.

If you use Python, you can follow this great aphorism, called The Zen of Python, written by the legend Tim Peters:

"Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those!"

Data handling

Good data handling leads to successful machine learning projects. After loading a dataset, please make sure that all of the data has loaded properly and that the reading process performed correctly. After performing any operation on the dataset, check over the resulting dataset.

Business contexts

An intelligent system is highly connected to business aspects because, after all, you are using data science and machine learning to solve a business issue, to build a commercial product, or to get useful insights from the acquired data in order to make good decisions. Identifying the right problems and asking the right questions are important when building your machine learning model, in order to solve business issues.

In this tutorial, we had a look at some tips and best practices for building intelligent systems using machine learning.
To become a master at penetration testing using machine learning with Python, check out the book Mastering Machine Learning for Penetration Testing.

Why TensorFlow always tops machine learning and artificial intelligence tool surveys
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
Tackle trolls with Machine Learning bots: Filtering out inappropriate content just got easy


Developer's guide to Software architecture patterns

Sugandha Lahoti
06 Aug 2018
11 min read
As we all know, patterns are a kind of simplified and smarter solution for a repetitive concern or recurring challenge in any field of importance. In the field of software engineering, there are primarily design, integration, and architecture patterns. In this article, we will cover the need for software patterns and describe the most prominent and dominant software architecture patterns.

This article is an excerpt from Architectural Patterns by Pethuru Raj, Anupama Raman, and Harihara Subramanian.

Why software patterns?

There is a bevy of noteworthy transformations happening in the IT space, especially in software engineering. The complexity of recent software solutions is continuously going up due to the continued evolution of business expectations. With complex software, not only does the software development activity become very difficult, but the software maintenance and enhancement tasks also become tedious and time-consuming. Software patterns come as a soothing factor for software architects, developers, and operators.

Types of software patterns

Several newer types of patterns are emerging in order to cater to different demands. This section throws some light on these.

An architecture pattern expresses a fundamental structural organization or schema for complex systems. It provides a set of predefined subsystems, specifies their unique responsibilities, and includes the decision-enabling rules and guidelines for organizing the relationships between them. The architecture pattern for a software system illustrates the macro-level structure for the whole software solution.

A design pattern provides a scheme for refining the subsystems or components of a system, or the relationships between them. It describes a commonly recurring structure of communicating components that solves a general design problem within a particular context. The design pattern for a software system prescribes the ways and means of building the software components.

There are other patterns, too. The dawn of the big data era mandates distributed computing. The monolithic and massive nature of enterprise-scale applications demands microservices-centric applications, where application services need to be found and integrated in order to give an integrated result and view. Thus, there are integration-enabled patterns. Similarly, there are patterns for simplifying software deployment and delivery. Other complex actions are being addressed through the smart leverage of simple as well as composite patterns.

Software architecture patterns

Let's look at some of the prominent and dominant software architecture patterns.

Object-oriented architecture (OOA)

Objects are the fundamental and foundational building blocks for all kinds of software applications. Therefore, the object-oriented architectural style has become the dominant one for producing object-oriented software applications. Ultimately, a software system is viewed as a dynamic collection of cooperating objects, instead of a set of routines or procedural instructions. We know that there are proven object-oriented programming methods and enabling languages, such as C++, Java, and so on. The properties of inheritance, polymorphism, encapsulation, and composition provided by OOA come in handy in producing highly modular (highly cohesive and loosely coupled), usable and reusable software applications. The object-oriented style is suitable if we want to encapsulate logic and data together in reusable components. Also, complex business logic that requires abstraction and dynamic behavior can effectively use this OOA; a small sketch of these properties appears below.
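As a minimal, illustrative sketch (not taken from the book), the following Python snippet shows the encapsulation, inheritance, and polymorphism the OOA style relies on; the class names are arbitrary examples.

# Illustrative only: encapsulation, inheritance, and polymorphism in a few lines.
class Notifier:
    def __init__(self, recipient):
        self._recipient = recipient          # encapsulated state

    def send(self, message):                 # common interface
        raise NotImplementedError

class EmailNotifier(Notifier):               # inheritance
    def send(self, message):
        return f"email to {self._recipient}: {message}"

class SmsNotifier(Notifier):
    def send(self, message):
        return f"sms to {self._recipient}: {message}"

# Polymorphism: callers depend only on the Notifier interface.
for notifier in (EmailNotifier("ops@example.com"), SmsNotifier("+15550100")):
    print(notifier.send("deployment finished"))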
Also, complex business logic that requires abstraction and dynamic behavior can effectively use this OOA.

Component-based assembly (CBA) architecture

Monolithic and massive applications can be partitioned into multiple smaller and interactive components. When components are found, bound, and composed, we get full-fledged software applications. CBA does not focus on issues such as communication protocols and shared state. Components are reusable, replaceable, substitutable, extensible, independent, and so on. Design patterns such as the dependency injection (DI) pattern or the service locator pattern can be used to manage dependencies between components and promote loose coupling and reuse. Such patterns are often used to build composite applications that combine and reuse components across multiple applications.

Aspect-oriented programming (AOP) treats aspects as another popular application building block. By deft maneuvering of this unit of development, different applications can be built and deployed. The AOP style aims to increase modularity by allowing the separation of cross-cutting concerns. AOP includes programming methods and tools that support the modularization of concerns at the level of the source code.

Agent-oriented software engineering (AOSE) is a programming paradigm where the construction of the software is centered on the concept of software agents. In contrast to the proven object-oriented programming, which has objects (providing methods with variable parameters) at its core, agent-oriented programming has externally specified agents with interfaces and messaging capabilities at its core. They can be thought of as abstractions of objects. Exchanged messages are interpreted by receiving agents in a way specific to their class of agents.

Domain-driven design (DDD) architecture

Domain-driven design is an object-oriented approach to designing software based on the business domain, its elements and behaviors, and the relationships between them. It aims to enable software systems that are a correct realization of the underlying business domain by defining a domain model expressed in the language of business domain experts. The domain model can be viewed as a framework from which solutions can then be readied and rationalized. DDD is good if we have a complex domain and we wish to improve communication and understanding within the development team. DDD can also be an ideal approach if we have large and complex enterprise data scenarios that are difficult to manage using existing techniques.

Client/server architecture

This pattern segregates the system into two main applications, where the client makes requests to the server. In many cases, the server is a database with application logic represented as stored procedures. The pattern helps in designing distributed systems that involve a client system, a server system, and a connecting network. The main benefits of the client/server architecture pattern are:

Higher security: All data gets stored on the server, which generally offers greater control of security than client machines.
Centralized data access: Because data is stored only on the server, access and updates to the data are far easier to administer than in other architectural styles.
Ease of maintenance: The server system can be a single machine or a cluster of multiple machines. The server application and the database can be made to run on a single machine or replicated across multiple machines to ensure easy scalability and high availability.

A minimal request/response sketch of this pattern follows.
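The snippet below is a small, hedged illustration of the client/server interaction using only the Python standard library; the order data and endpoint are made up for the example. The server owns the data and the logic, while the client only issues requests over the network.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class OrderHandler(BaseHTTPRequestHandler):
    """Server side: owns the data and the application logic."""

    ORDERS = {"1001": "shipped", "1002": "processing"}  # stands in for a database

    def do_GET(self):
        body = self.ORDERS.get(self.path.strip("/"), "unknown").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet


if __name__ == "__main__":
    # Bind to an ephemeral port so the example runs anywhere.
    server = HTTPServer(("127.0.0.1", 0), OrderHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client side: only knows how to ask; all data stays on the server.
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/1001") as resp:
        print("Order 1001 is", resp.read().decode())

    server.shutdown()
```

In a real deployment the two halves run on different machines; here they share a process purely to keep the sketch self-contained.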
However, the traditional two-tier client/server architecture pattern has numerous disadvantages. Firstly, the tendency to keep both application and data on a server can negatively impact system extensibility and scalability. The server can also be a single point of failure, so reliability is the main worry here. To address these issues, the client/server architecture has evolved into the more general three-tier (or N-tier) architecture. This multi-tier architecture not only surmounts the issues just mentioned but also brings forth a set of new benefits.

Multi-tier distributed computing architecture

The two-tier architecture is neither flexible nor extensible. Hence, the multi-tier distributed computing architecture has attracted a lot of attention. The application components can be deployed on multiple machines (these can be co-located or geographically distributed). Application components can be integrated through messages or remote procedure calls (RPCs), remote method invocations (RMIs), the common object request broker architecture (CORBA), enterprise Java beans (EJBs), and so on. The distributed deployment of application services ensures high availability, scalability, manageability, and so on. Web, cloud, mobile, and other customer-facing applications are deployed using this architecture. Thus, based on the business requirements and the application complexity, IT teams can choose the simple two-tier client/server architecture or the advanced N-tier distributed architecture to deploy their applications. These patterns simplify the deployment and delivery of software applications to their subscribers and users.

Layered/tiered architecture

This pattern is an improvement over the client/server architecture pattern and is the most commonly used architectural pattern. Typically, an enterprise software application comprises three or more layers: a presentation/user interface layer, a business logic layer, and a data persistence layer. The presentation layer is primarily used for user interface applications (thick clients) or web browsers (thin clients). With the fast proliferation of mobile devices, mobile browsers are also being attached to the presentation layer. Such tiered segregation comes in handy in managing and maintaining each layer accordingly. The power of plug and play gets realized with this approach, and additional layers can be fitted in as needed. There are model view controller (MVC) pattern-compliant frameworks hugely simplifying enterprise-grade and web-scale applications; MVC is a web application architecture pattern. The main advantage of the layered architecture is the separation of concerns, that is, each layer can focus solely on its role and responsibility. The layered and tiered pattern makes the application:

Maintainable
Testable
Easy to assign specific and separate roles
Easy to update and enhance layers separately

This architecture pattern is good for developing web-scale, production-grade, and cloud-hosted applications quickly and in a risk-free fashion. When there are business and technology changes, this layered architecture comes in handy in embedding newer things in order to meet varying business requirements.

Event-driven architecture (EDA)

The world is increasingly becoming event-driven. That is, applications have to be sensitive and responsive proactively, pre-emptively, and precisely. Whenever an event happens, applications have to receive the event information and plunge into the necessary activities immediately. A minimal publish/subscribe sketch appears below.
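As a purely illustrative sketch (the topic name and handlers are hypothetical, and real systems would use a message broker rather than an in-process bus), the following shows the core EDA idea: producers publish events without knowing who consumes them.

```python
from collections import defaultdict


class EventBus:
    """A tiny in-process event bus: producers and consumers never reference each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fire-and-forget style: the publisher does not wait for, or know about, consumers.
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
bus.subscribe("order.placed", lambda e: print("Billing: invoice for", e["order_id"]))
bus.subscribe("order.placed", lambda e: print("Warehouse: pick items for", e["order_id"]))

bus.publish("order.placed", {"order_id": "1001", "amount": 49.99})
```

Swapping the in-memory dictionary for a message queue or broker changes the transport, not the pattern.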
The request and reply notion gives way to the fire-and-forget tenet. The communication becomes asynchronous, and there is no need for the participating applications to be available online all the time. EDA is typically based on an asynchronous message-driven communication model to propagate information throughout an enterprise. It supports a more natural alignment with an organization's operational model by describing business activities as a series of events. EDA does not bind functionally disparate systems and teams into the same centralized management model. EDA ultimately leads to highly decoupled systems; the common issues introduced by system dependencies get eliminated through the adoption of the proven EDA approach.

We have seen various forms of events used in different areas. There are business and technical events. Systems update their status and condition by emitting events, which are captured and subjected to a variety of investigations in order to precisely understand the prevailing situations. The submission of web forms and clicks on hyperlinks generate events to be captured. Incremental database synchronization mechanisms, RFID readings, email messages, short message service (SMS), instant messaging, and so on are events not to be taken lightly. There are event processing engines and message-oriented middleware (MoM) solutions, such as message queues and brokers, to collect and stock event data and messages. Millions of events can be collected, parsed, and delivered through multiple topics by these MoM solutions. As event sources/producers publish notifications, event receivers can choose to listen to or filter out specific events and make proactive decisions in real time on what to do next.

The EDA style is built on the fundamental aspects of event notifications to facilitate immediate information dissemination and reactive business process execution. In an EDA environment, information can be propagated to all the services and applications in real time. The EDA pattern enables highly reactive enterprise applications, and real-time analytics is the new normal with the surging popularity of the EDA pattern.

Service-oriented architecture (SOA)

With the arrival of service paradigms, software packages and libraries are being developed as a collection of services. Services are capable of running independently of the underlying technology, and they can be implemented using any programming or scripting language. Services are self-defined, autonomous, interoperable, publicly discoverable, assessable, accessible, reusable, and composable. Services interact with one another through messaging. There are service providers/developers and consumers/clients. Every service has two parts: the interface and the implementation. The interface is the single point of contact for requesting services, and interfaces give the required separation between services. All kinds of deficiencies and differences of the service implementation get hidden by the service interface.

Precisely speaking, SOA enables application functionality to be provided as a set of services, and the creation of personal as well as professional applications that make use of those services. In short, SOA is for service-enablement and service-based integration of monolithic and massive applications. The complexity of enterprise process/application integration gets moderated through the smart leverage of the service paradigm. A minimal sketch of the interface/implementation separation follows.
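The sketch below is an invented, simplified illustration of two SOA ideas mentioned above: the interface hides the implementation, and services are published and discovered rather than hard-wired. The registry, service, and names are hypothetical, not part of any specific SOA product.

```python
from abc import ABC, abstractmethod


class QuoteService(ABC):
    """Service interface: the only thing consumers are allowed to depend on."""

    @abstractmethod
    def quote(self, symbol: str) -> float: ...


class StaticQuoteService(QuoteService):
    """One possible implementation; its details stay hidden behind the interface."""

    PRICES = {"ACME": 12.5, "GLOBEX": 48.0}

    def quote(self, symbol: str) -> float:
        return self.PRICES.get(symbol, 0.0)


class ServiceRegistry:
    """Minimal stand-in for a service registry: providers publish, consumers discover."""

    _services = {}

    @classmethod
    def publish(cls, name, provider):
        cls._services[name] = provider

    @classmethod
    def discover(cls, name):
        return cls._services[name]


# Provider side
ServiceRegistry.publish("quotes", StaticQuoteService())

# Consumer side: finds the service by name and talks to it only via the interface.
quotes = ServiceRegistry.discover("quotes")
print("ACME:", quotes.quote("ACME"))
```

A production system would replace the in-process registry with a real service registry and the direct call with messaging, but the separation of interface and implementation stays the same.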
To summarize, we detailed the prominent and dominant software architecture patterns and how they are used for producing and running any kind of enterprise-class and production-grade software application. To know more about patterns associated with object-oriented, component-based, client-server, and cloud architectures, grab the book Architectural Patterns.

Why we need Design Patterns?
Implementing 5 Common Design Patterns in JavaScript (ES8)
An Introduction to Node.js Design Patterns

Alteryx vs. Tableau: Choosing the right data analytics tool for your business

Guest Contributor
04 Mar 2019
6 min read
Data visualization is commonly used in the modern world, where most business decisions are made by analyzing data. One of the most significant benefits of data visualization is that it enables us to visually access huge amounts of data through easily understandable visuals. There are many areas where data visualization is being used. Some of the data visualization tools include Tableau, Alteryx, Infogram, ChartBlocks, Datawrapper, Plotly, Visual.ly, and so on. Tableau and Alteryx are industry-standard tools that have dominated the data analytics market for a few years now and are still running strong without any serious competition. In this article, we will understand the core differences between the Alteryx tool and Tableau. This will help us decide which tool to use for which purpose.

Tableau is one of the top-rated tools that helps analysts carry out business intelligence and data visualization activities. Using Tableau, users can generate compelling dashboards and stunning data visualizations. Tableau's interactive user interface helps users quickly generate reports where they can drill down into the information to a granular level.

Alteryx is a powerful tool widely used in data analytics that also provides meaningful insights to executive-level personnel. With its user-friendly interface, users can extract, transform, and load data within the Alteryx tool.

Why use Alteryx with Tableau?

Using Alteryx with Tableau is a powerful combination when it comes to making value-added, data-driven decisions. With Alteryx, businesses can manipulate their data and provide input to the Tableau platform, which in turn can showcase strong data visualizations. This helps businesses take appropriate actions that are backed up by data analysis. Alteryx and Tableau are widely used within organizations where decisions are made based on the insights obtained from data analysis. Talking about data handling, Alteryx is a powerful ETL platform where data can be analyzed in different formats. When it comes to data representation, Tableau is a perfect match. Further, Tableau reports can be shared across team members.

Nowadays, most businesses want to see real-time data and understand business trends. The combination of Alteryx and Tableau allows data analysts to analyze the data and generate meaningful insights for users on the fly. Here, data analysis is executed within the Alteryx tool, where the raw data is handled, and then the data representation or visualization is done in Tableau, so both of these tools go hand in hand.

Tableau vs Alteryx

The comparison below lists the differences between the tools, point by point.

1. Alteryx: Known as a smart data analytics platform.
   Tableau: Known for its data visualization capabilities.
2. Alteryx: Can connect with different data sources and synthesize the raw data; a standard ETL process is possible.
   Tableau: Can connect with different data sources and provide data visualization within minutes of gathering the data.
3. Alteryx: Helps in terms of data analysis.
   Tableau: Helps in terms of building appealing graphs.
4. Alteryx: The GUI is okay and widely accepted.
   Tableau: The GUI is one of the best features; graphs can be easily built using drag-and-drop options.
5. Alteryx: Technical knowledge is necessary, because the work involves data source integration and data blending.
   Tableau: Technical knowledge is not necessary, because all the data will already be polished and the user only has to build graphs and visualizations.
6. Alteryx: Once the data blending activity is completed, users can share the resulting file, which can be consumed by Tableau.
   Tableau: Once the graphs are prepared, the reports can be easily shared among team members without any hassle.
7. Alteryx: A lot of flexibility when using the tool for data blending.
   Tableau: Flexibility when using the tool for data visualization.
8. Alteryx: Users can perform spatial and predictive analysis.
   Tableau: Possible by representing the data in an appropriate format.
9. Alteryx: One of the best tools when it comes to data preparation.
   Tableau: Preparing data in Tableau is not feasible compared to Alteryx.
10. Alteryx: Data representation cannot be done accurately.
    Tableau: A wonderful tool for data representation.
11. Alteryx: Charges a one-time annual fee.
    Tableau: Has an option to pay monthly as well.
12. Alteryx: Has a drag-and-drop interface where the user can develop a workflow easily.
    Tableau: Has a drag-and-drop interface where the user can build a visualization in no time.

Alteryx and Tableau Integration

As discussed earlier, these two tools have their own advantages and disadvantages, but when integrated together, they can do wonders with the data. The integration between Tableau and Alteryx makes the task of visualizing the Alteryx-generated answers quite simple. The data is first loaded into the Alteryx tool and is then extracted in the form of .tde files (Tableau Data Extract files). These .tde files are consumed by the Tableau tool to do the data visualization part. On a regular basis, the data extract file from the Alteryx tool (the .tde file) is regenerated and replaces the old .tde files. Thus, by integrating Alteryx and Tableau, we can:

Cleanse, combine, and collect all the data sources that are relevant and enrich them with the help of third-party data, everything in one workflow.
Give analytical context to your data by providing predictive, location-based, and deep spatial analytics.
Publish your analytic workflows' results to Tableau for intuitive, rich visualizations that help you make decisions more quickly.

Tableau and Alteryx do not require any advanced skill set, as both tools have simple drag-and-drop interfaces. You can create a workflow in Alteryx that processes data in a sequential manner. In a similar way, Tableau enables you to build charts by dragging the various fields to be utilized to specified areas. Companies that have a lot of data to analyze, and can spend large amounts of money on analytics, can use these two tools. There are no significant challenges during Tableau and Alteryx integration.

Conclusion

When Tableau and Alteryx are used together, businesses benefit because senior management can take decisions based on the data insights provided by these tools. These two tools complement each other and provide high-quality service to businesses.

Author Bio

Savaram Ravindra is a Senior Content Contributor at Mindmajix.com. His passion lies in writing articles on different niches, which include some of the most innovative and emerging software technologies, digital marketing, businesses, and so on. By being a guest blogger, he helps his company acquire quality traffic to its website and build its domain name and search engine authority.
Before devoting his work full time to the writing profession, he was a programmer analyst at Cognizant Technology Solutions. Follow him on LinkedIn and Twitter.

How to share insights using Alteryx Server
How to do data storytelling well with Tableau [Video]
A tale of two tools: Tableau and Power BI

What can Artificial Intelligence do for the Aviation industry

Guest Contributor
14 May 2019
6 min read
The use of AI (Artificial Intelligence) technology in commercial aviation has brought some significant changes in the way flights are operated today. The world's leading airline service providers are now using AI tools and technologies to deliver a more personalized traveling experience to their customers. From building AI-powered airport kiosks to automating airline operations and security checks, AI will play even more critical roles in the aviation industry. Engineers have found that AI can help the aviation industry with machine vision, machine learning, robotics, and natural language processing. Artificial intelligence has proven to be highly potent, and various studies have shown how its use can bring significant changes in aviation. A few airlines now use artificial intelligence for predictive analytics, pattern recognition, auto scheduling, targeted advertising, and customer feedback analysis, showing promising results for a better flight experience. A recent report shows that aviation professionals are considering using artificial intelligence to monitor pilot voices for a hassle-free flying experience for passengers. This technology is set to bring huge changes to the world of aviation.

Identification of the Passengers

There's no need to explain how modern inventions are contributing towards the betterment of mankind, and AI can help in air transportation in numerous ways. Check-in before boarding is a vital task for an airline, and airlines can simply take the help of artificial intelligence to do it easily; the same technology can also be used for identifying passengers. The American airline company Delta Airlines took the initiative in 2017. Its online check-in via the Delta mobile app and ticketing kiosks has shown promising results, and nowadays you can see many airlines taking similar features to a whole new level. The Transportation Security Administration of the United States has introduced new AI technology to identify potential threats at the John F. Kennedy, Los Angeles International, and Phoenix airports. Likewise, Hartsfield-Jackson Airport is planning to launch America's first biometric terminal. Once installed, the AI technology will make the process of passenger identification fast and easy for officials. Security scanners, biometric identification, and machine learning are some of the AI technologies that will make a number of jobs easy for us. In this way, AI also helps us predict disruption in airline services.

Baggage Screening

Baggage screening is another tedious but important task that needs to be done at the airport, and AI has simplified the process. American Airlines once conducted a competition on artificial intelligence app development, and Team Avatar became the winner of the competition for making an app that would allow users to determine the size of their baggage at the airport. Osaka Airport in Japan is planning to install the Syntech ONE 200, an AI technology developed to screen baggage across multiple passenger lanes. Such tools will not only automate the process of baggage screening but also help authorities detect illegal items effectively. Syntech ONE 200 is compatible with the X-ray security system, and it increases the probability of identifying potential threats.

Assisting Customers

AI can be used to assist customers at the airport, and it can help a company reduce its operational and labor costs at the same time.
Airline companies are now using AI technologies to help their customers resolve issues quickly by getting accurate information about future flight trips on their internet-enabled devices. More than 52% of airline companies across the world plan to install AI-based tools to improve their customer service functions in the next five years. Artificial intelligence can answer various common customer questions, assisting them with check-in requests, flight status, and more. Nowadays artificial intelligence is also used in air cargo for different purposes, such as revenue management, safety, and maintenance, and it has shown impressive results to date.

Maintenance Prediction

Airline companies are planning to implement AI technology to predict potential maintenance failures on aircraft. Leading aircraft manufacturer Airbus is taking measures to improve the reliability of aircraft maintenance. It is using Skywise, a cloud-based data storage system, which helps the fleet collect and record a huge amount of real-time data. The use of AI in predictive maintenance analytics will pave the way for a systematic approach to how and when aircraft maintenance should be done. Nowadays you can see how top-rated airlines use artificial intelligence to make the process of maintenance easy and improve the user experience at the same time.

Pitfalls of using AI in Aviation

Despite being considered the future of the aviation industry, AI has some pitfalls. For instance, it takes time to implement and it cannot be used as an ideal tool for customer service. The recent incident involving an Ethiopian Airlines Boeing 737 MAX was an eye-opener for us, and it clearly represents the drawbacks of AI technology in the aviation sector. The aircraft crashed a few minutes after it took off from the capital of Ethiopia, and the failure of the MCAS (Maneuvering Characteristics Augmentation System) was a key reason behind the fatal accident. Also, AI is quite expensive; for example, if an airline company is planning to deploy a chatbot, it will have to invest more than $15,000. Thus, it would be hard for small companies to make the same investment, and this could create a barrier between small and big airlines in the future. As the market becomes highly competitive, big airlines will conquer the market and small airlines might face an existential threat for this reason.

Conclusion

The use of artificial intelligence in aviation has made many tasks easy for airlines and airport authorities across the world, from identifying passengers to screening bags and providing fast and efficient customer care solutions. Unlike the software industry, the risks of real-life harm are exponentially higher in the aviation industry. While other industries started using this technology long ago, the adoption of AI in aviation has been one of caution, and rightly so. As the aviation industry embraces the benefits of artificial intelligence and machine learning, it must also invest in putting in place checks and balances to identify, reduce, and eliminate harmful consequences of AI, whether intended or otherwise. As Silicon Valley reels from ethical dilemmas, the aviation industry will do well to learn from Silicon Valley while making the transition to a smart future. The aviation industry, known for its rigorous safety measures and processes, may in fact have a thing or two to teach Silicon Valley when it comes to designing, adopting, and deploying AI systems into live systems that have high-risk profiles.
Author Bio

Maria Brown is a content writer and blogger and handles social media optimization for 21Twelve Interactive. She believes in sharing her solid knowledge base with a focus on entrepreneurship and business. You can find her on Twitter.

What’s new in VR Haptics?

Natasha Mathur
16 Jul 2018
8 min read
Virtual Reality is evolving at a staggering rate. Some of the humankind’s most exciting tools and technologies are coming to the Virtual reality Space. One such technology which is taking over the VR world and making it more powerful is the VR haptics technology. VR Haptics technology offers an extra dimension to the VR world by letting users feel the virtual environment via the sense of touch, in addition to visual and aural perception. It makes you feel truly immersive in the artificial world. Imagine yourself in a desert seeing the sand and feeling it glide under your feet as you walk. It uses external devices like Gloves, Shoes, Joysticks, etc, via which users can receive feedback in the form of vibrations from these computer applications. This feedback provides physical sensations in the hand or other parts of the body. It also provides a realistic simulation of the movements and behaviors, similar to those realized in the real world. VR Haptics: a growing domain The VR haptics technology is growing beyond creating vibrations in game controllers. Now, in the near future, you might able to cuddle a dog and feel it licking your face in the VR world. This speaks volumes about the pace at which the haptic technology is growing. One famous example which discusses modern VR is the popular sci-fi novel “Ready Player One”. It illustrates the possibilities of haptic technology in the future. The novel explores the journey of a guy as he sets foot into a virtual reality simulator (OASIS). He uses a headset and a pair of gloves to maneuver around the virtual world. Apart from the gloves, a lot of future concept products are also covered in the novel which makes the illusion of immersion easier to picture, such as towers emitting smells in the VR world and Wind/Temperature generators that mimic real-life. Haptics came about just as head mounted displays (HMD) came to light in the 2010s. HMDs allowed people to see the virtual reality while haptic feedback gave people the opportunity to experience the virtual world and to act within it. Texture, temperature, pressure, taste, smell and other non-visual sensory inputs became real in VR. Apart from virtual reality games and apps, Haptics feedback is used widely in personal computers, mobile devices, robots, and more. But, in this article, we’ll stick to the use of haptic technology or haptic feedback in the VR space. Usually, most VR users use Touch Controllers for haptic feedback. But, recently, a lot of third-party companies are coming out with products such as gloves for systems like the Oculus Rift & HTC Vive. Here is a list of recent developments in the haptic technology for the VR world. Super affordable VR Haptic gloves by Plexus Most of the currently available options in the VR haptics field are somewhat pricey but earlier this month, Plexus announced their new product, a VR haptic and sensor glove. https://vimeo.com/276517370 Source: Plexus Key features Plexus VR haptics gloves offer a fully modular tracking solution which is capable of tracking up to 0.01 degrees of precision. These gloves are capable of individual finger tracking as well as tracking each joint on the finger, thereby, offering higher precision in the VR world. It is compatible with the HTC Vive, Oculus Rift as well as Windows Mixed Reality devices. The VR haptic gloves also come with additional adapter plates. The development kit version of the Plexus haptic gloves, priced at $249 per glove pair, can be pre-ordered on the official Plexus Website. 
The company will begin shipping in August 2018 but at the moment, shipping is only available to USA, Europe, Canada and Australia. Kaaya Tech’s full body tracking HoloSuit Kaaya came out with a motion capture (MoCap) suit called HoloSuit, last month, which offers motion capture as well as haptic feedback. HoloSuit is the world’s first affordable, wireless, easy to use, bi-directional, full body motion capture suit. User’s entire body movement data is captured by Holosuit and it uses haptic feedback to send information back to the user. https://www.youtube.com/watch?v=SEQsDR32gII&t=122s  Source: HoloSuit It can be used in various areas such as sports, healthcare, education, entertainment or industrial operations. Key Features The HoloSuit consists of 36 embedded sensors in the pro version and 26 embedded sensors in the less complex version. Embedded sensors carry out all the work of capturing body motion which is necessary for world-scale tracking. It also consists of 9 haptic feedback devices, and 6 embedded firing buttons ( buttons that govern specific tasks such as saving the game, pausing, etc ) which are dispersed across both arms, legs, and all the ten fingers. It delivers data wirelessly either through Wifi or Bluetooth LE to a VR setup by using Unity or a Wi-Fi SDK. The HoloSuit doesn’t come with an external camera tracking option. It supports all the major platforms such as Windows, macOS, iOS, and Android devices. A complete HoloSuit is quite expensive and starts at a regular price of $999. Jacket and Jersey are priced at $499, jersey or track pants for $399, and a pair of gloves are available for $799. HoloSuit Pro is priced at $1,599. Shipping for the full body VR haptic HoloSuit will start this November. Disney’s VR Haptic “Force Jacket” Disney came out with their VR haptic jacket, namely, “Force Jacket” back in April. It provides users with precisely directed force along with a high-frequency vibration which is felt against the user’s upper body in sync with the visual medium. The prototype is made out of a converted life jacket and is provided with 26 airbags. https://www.youtube.com/watch?v=5BOFHEow608   Source: DisneyResearchHub The Force Jacket is created by engineers at Disney Research, MIT and Carnegie Mellon University. Key Features The Haptic Jacket uses an air compressor and a vacuum pump. These air compartments in the jacket can be inflated to exert a force on the user’s body relative to force sensitive resistors. 26 air compartments are activated using microcontrollers for either pressure or vibrotactile feedback or both. Controllers are used to activating the solenoid valves which are connected to the vacuum. There are certain Jacket inflation parameters like speed, force, and duration which are specified using the haptic effects editor. The jacket makes use of the motion interface to sequentially inflate the compartments for simulating motion across the body. Each airbag within the haptic jacket can be influenced to mimic sensations such as being hit in the chest by a snowball, getting tapped on the shoulder, lime dripping on their back, getting punched in the side, and a snake coiling its body around the user. The jacket is mainly to be used in the entertainment and gaming industry and is not available for the consumer market. But, it seems to have great potential in the future for other applications as well. VR gloves by Haptx Haptx announced a pair of VR gloves back in November of last year. 
The gloves use micro-pneumatics technology for detailed haptics and force feedback (the ability to restrict your fingers’ movement to simulate holding objects) in the fingers. https://www.youtube.com/watch?v=2C2_kbjtjRU Source: HaptX Key Features It features technology that enables it to provide 100 points of tactile displacement feedback. It offers up to five pounds of resistance per finger. It also comes with sub-millimeter precision motion tracking The glove uses SDK of HaptX’s design, which is created by using Unreal Engine’s physics system. This tells the glove when and where it needs to apply haptic effects as well as when and how to engage the force feedback. No information on pricing or worldwide availability has been released by the company yet. But, it is rumored to launch the VR gloves for the consumer market sometime later this year. Apart from these products, there are other minor advancements that keep happening in the VR haptics space. For example, Heather Culbertson, Assistant Professor of USC's computer department, recently created a haptic armband which is capable of mimicking the sensation of a human touch. VR aims to provide you with an environment where you feel truly immersive and where you can feel the objects as in the real world. These products are bringing the VR world a step closer to achieve richer levels of immersive experiences. Gone are the days when haptic feedback was limited to just vibrating controllers and joysticks. As the technology advances, the whole new world of VR haptic devices is here to make your VR experience as seamlessly immersive as possible. In fact, some people even believe that without Haptics, VR is nothing but a picture and a sound. Game developers say Virtual Reality is here to stay CTA announces its first AR/VR Standard terminology Top 7 modern Virtual Reality hardware systems  

What is interactive machine learning?

Amey Varangaonkar
23 Jul 2018
4 min read
Machine learning is a useful and effective tool to have when it comes to building prediction models or building a useful data structure from an avalanche of data. Many ML algorithms are in use today for a variety of real-world use cases. Given a sample dataset, a machine learning model can give predictions with only a certain accuracy, which largely depends on the quality of the training data fed to it. Is there a way to increase the prediction accuracy by somehow involving humans in the process? The answer is yes, and the solution is called 'Interactive Machine Learning'.

Why we need interactive machine learning

As we already discussed above, a model's predictions can only be as good as the quality of the training data fed to it. If the quality of the training data is not good enough, the model might:

Take more time to learn before it gives accurate predictions
Give predictions of very poor quality

This challenge can be overcome by involving humans in the machine learning process. By incorporating human feedback in the model training process, the model can be trained faster and more efficiently to give more accurate predictions. In the widely adopted machine learning approaches, including supervised and unsupervised learning or even active learning for that matter, there is no way to include human feedback in the training process to improve the accuracy of predictions. In the case of supervised learning, for example, the data is already pre-labelled and is used without any actual inputs from the human during the training process. For this reason alone, the concept of interactive machine learning is seen by many machine learning and AI experts as a breakthrough.

How interactive machine learning works

Machine learning researchers Teng Lee, James Johnson, and Steve Cheng have suggested a novel way to include human inputs to improve the performance and predictions of the machine learning model. It has been called the 'Transparent Boosting Tree' algorithm, which is a very interesting approach to combining the advantages of machine learning and human inputs in the final decision-making process. The Transparent Boosting Tree, or TBT for short, is an algorithm that visualizes the model and the prediction details of each step in the machine learning process to the user, takes their feedback, and incorporates it into the learning process. The ML model is in charge of updating the weights assigned to the inputs and filtering the information shown to the user for their feedback. Once the feedback is received, it can be incorporated by the ML model as a part of the learning process, thus improving it. A basic flowchart of the interactive machine learning process illustrates this loop of showing predictions, collecting feedback, and retraining. More in-depth information on how interactive machine learning works can be found in their paper.

What can interactive machine learning do for businesses

With the rising popularity and applications of AI across all industry verticals, humans may have a key role to play in the learning process of an algorithm, apart from just coding it. While observing the algorithm's own outputs or evaluations in the form of visualizations or plain predictions, humans can suggest ways to improve those predictions by giving feedback in the form of inputs such as labels, corrections, or rankings. A minimal sketch of such a loop appears below.
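The following is a minimal, hypothetical human-in-the-loop sketch, not the TBT algorithm itself, and all function and variable names are invented for illustration: a boosted-tree model is trained, its least confident predictions are surfaced for a human to label (simulated here by an oracle that reveals the true label), and those answers are folded back into the training set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy data: a small labelled pool plus a larger pool a human could still label.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
labelled = list(range(50))
unlabelled = list(range(50, 600))


def human_feedback(index):
    """Stand-in for the human reviewer: here we simply reveal the true label."""
    return y[index]


model = GradientBoostingClassifier(random_state=0)

for round_no in range(5):
    model.fit(X[labelled], y[labelled])

    # Surface the predictions the model is least confident about.
    proba = model.predict_proba(X[unlabelled])
    uncertainty = 1.0 - proba.max(axis=1)
    query_positions = np.argsort(uncertainty)[-10:]  # ten most uncertain samples

    # Fold the human's labels/corrections back into the training set.
    for pos in query_positions:
        idx = unlabelled[pos]
        y[idx] = human_feedback(idx)  # a no-op here, a real correction in practice
        labelled.append(idx)
    unlabelled = [i for p, i in enumerate(unlabelled) if p not in set(query_positions)]

    print(f"round {round_no}: accuracy on remaining pool = "
          f"{model.score(X[unlabelled], y[unlabelled]):.3f}")
```

In a real interactive setting, the simulated oracle would be replaced by a person reviewing visualizations of the model's behaviour and supplying labels, corrections, or rankings.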
Incorporating human feedback in this way helps the models in two ways:

It increases the prediction accuracy
The time taken for the algorithm to learn is shortened considerably

Both of these advantages can be invaluable to businesses as they look to incorporate AI and machine learning in their processes and look for faster and more accurate predictions. Interactive machine learning is still in its nascent stage and we can expect more developments in the domain to surface in the coming days. Once production-ready, it will undoubtedly be a game-changer.

Read more:
Active Learning: An approach to training machine learning models efficiently
Anatomy of an automated machine learning algorithm (AutoML)
How machine learning as a service is transforming cloud

How to assess your tech team’s skills

Hari Vignesh
20 Sep 2017
5 min read
For those of us that manage others, effectiveness is largely driven by the skills and motivation of those that report to us. So whether you are a CIO, an IT division leader, or a front-line manager, you need to spend the time to assess the current skills, abilities, and career aspirations of your staff and help them put in place the plans that can support their development. And yet, you need to do this in such a way that still supports the overall near-term objectives of the organization and properly balances the need for professional development against the day-to-day needs of the organization.

There are certifications for competence in many different products. Having such certifications is very valuable and gives one a sense of the skill set of an individual. But how do you assess someone as a journeyman programmer, tester, or systems engineer, or perhaps as a master in their chosen discipline? That kind of evaluation is overly subjective and places too much emphasis on "book knowledge" rather than the practical application of that knowledge to develop the new, innovative solutions or approaches that the organization truly needs. In other words, how do you assess the knowledge, skills, and abilities (KSAs) of a person to perform their job role? This assessment problem is two-fold:

For a specific IT discipline, you need a comprehensive framework by which to understand the types of skills and knowledge you should have at each level, from novice to expert.
For each discipline, you also need a way to accurately assess the current ability level of your technical staff members to create the baseline by which you can develop their skills and move them to higher levels of proficiency.

This not only helps the individual develop a realistic and achievable plan, but also gives you insights into where you have significant skills gaps in your organization.

Skills Framework for the Information Age (SFIA)

In 2003, a non-profit organization called the Skills Framework for the Information Age (SFIA) was founded, which provides a comprehensive framework of skills in IT technologies and disciplines based on a broad industry "body of knowledge." SFIA currently covers 97 professional skills required by professionals in roles involving information and communications technology. These skills are organized into six categories, as follows:

Strategy and Architecture
Change and Transformation
Development and Implementation
Delivery and Operation
Skills and Quality
Relationships and Engagement

Each of the skills is described at one or more of SFIA's seven levels of attainment, from novice to expert. Find out more about this framework here. Although the framework helps define your needed competencies, it doesn't tell you whether your workers have the skills that match them.

Building your own effective framework

The first step in accurately assessing the current ability level of your technical staff members is to create the baseline from which you can develop their skills to higher levels of proficiency. The best way to progress is to identify the goals of the team or organization and then build your own framework. So, how do we proceed?

List the roles within your team

To start with, you need a list of the role types within your team. This isn't the same thing as having a listing of every position on your org chart. You want to simplify the process by grouping together like roles.

List the skills needed for each role

Now that you've created a list of role types, the next step is to list the skills needed for each of these roles.
What do the skills look like? They could be behavioral, like "Listens to customer needs carefully to determine requirements", or they could be more technical, like this sample list of engineering skills:

Writing quality code
Design skills
Writing optimal code
Programming patterns

Once you have this list, it's a valuable resource in itself.

Create a survey

It's ideal if you can find out all of the relevant skills a person has, not just those for their current role. To do this, create a survey that makes it easy for your people to respond. This essentially means you need to keep it short and not ask the same question twice. To achieve this, the survey should group together each of the major role types. Use the list you created in step 2 as your starting point for this. Let's say you have an engineering group within your organization. It may have a number of different role types within it, but there are probably common skills across many of them. For example, many of the role types may require people to be skilled at "Programming." Rather than listing skills more than once under each relevant role type, list them once under a common group heading.

Survey your workforce

With the survey designed, you are now ready to ask your workforce to respond to it. The size of your team and the number of roles will determine how you go about doing this. It's good practice to communicate with survey participants to explain why you are asking for their response and what will happen with the information.

Analyze the data

You can now reap the rewards of your skills audit process. You can analyze:

The skill gaps in specific roles
Skill gaps within teams or organization groups
Potential successors for certain roles
The number of people who have critical skills
Future skill requirements

This assessment not only helps employees create realistic and achievable individual development plans, but also gives you insight into where you have significant skills gaps in your team or in your organization.

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

12 ubiquitous artificial intelligence powered apps that are changing lives

Bhagyashree R
30 Aug 2018
11 min read
Artificial Intelligence is making it easier for people to do things every day. You can schedule your day, search for photos of loved ones, type emails on the go, or get things done with the virtual assistant. AI also provides innovative ways of tackling existing problems, from healthcare to advancing scientific discovery. According to Gartner’s Top 10 Strategic Technology Trends for 2018, the next few years will see every app, application, and service incorporating AI at some level. With major companies like Google, Amazon, IBM investing in AI and incorporating AI in their products, this statement, instead of a prediction is becoming a fact. Apple’s IPhone X comes with a Facial Recognition System, Samsung’s Bixby, Amazon’s Alexa, Google’s Google Assistant, and the recently launched Android Pie. Android Pie learns your preferences based on your usage patterns and gets better over time. It even provides you a breakdown of the time you spend on your phone. AI comes with endless possibilities, things that we used to dream of are now becoming a part of our day to day life. So, I have listed here, in no particular order, some of those innovative applications: Microsoft’s Seeing AI - Eye for the visually impaired Source: Microsoft Seeing AI is a perfect example of how technology is improving our lives. It is an intelligent camera app that uses computer vision to audibly help blind and visually impaired people to know about their surroundings. It comes with functionalities like reading out short text and documents for you, giving you description about a person, identifies currencies, colour, handwriting, light and even images in other apps using the device's camera. A data scientist named Anirudh Koul started this project (called Deep Vision earlier) to help his grandfather who was gradually losing his vision. Two breakthroughs by the Microsoft researchers facilitated him to further his idea: vision-to-language and image classification. To make the app this advance and real-time, they used the idea of making servers communicate with Microsoft Cognitive Services. This app brings in four technologies together to provide users with an array of functionalities: OCR, barcode scanner, facial recognition, and scene recognition. Check out this YouTube tutorial to understand how it works. Download App Store Ada - Healthcare in your hand Source: Digital Health Ada, with a very simple and conversational UI, helps you understand what could be wrong if you or someone you care about is not feeling well. Just like any doctor’s appointment, it starts with your basic details, then does an assessment, in which it asks several personalized questions related to the symptoms, and then gives a report. The report consists of a summary, possible causes, and less-likely causes. It also allows you to share the report as a PDF. After training over several years using real world cases, Ada has become a handy health advisor. Its platform is powered by a sophisticated Artificial Intelligence engine combined with large medical knowledge base covering many thousands of conditions, symptoms and findings. In every medical assessment, Ada takes all of a patient’s information into account, including past medical history, symptoms, risk factors and more. Using machine learning and multiple closed feedback loops, Ada becomes more intelligent. 
Download App Store Google Play Store Plume Air Report - An air pollution monitor Source: Plume Labs Blog Industrialization and urbanization definitely comes with their side effects, the main being air pollution. It has become inevitable to keep yourself safe from the pollution, but now at least you can be aware of the air pollution levels in your area. Plume Air Report forecasts how air quality will evolve hour by hour over the next 24 hours similar to weather forecast. You can also easily compare the air quality between cities. It gives you insight on all pollutants (PM2.5, PM10, O3, NO2), with absolute concentration levels and your local air quality scale. It uses machine learning and atmospheric sciences to deliver real-time and hourly forecast air quality data. First, latest pollution levels is collected from over 12,000 monitoring stations and 80 public agencies around the world and then filtered for errors. Local atmospheric data (wind, temperature, atmosphere, etc.) is sourced to track their influence on pollution levels in your city. A team of data scientists analyzes local specifics such as geographical features and human activities. Finally, AI algorithms and atmospheric models are developed that turn this giant amount of data into hourly forecasts. Download App Store Google Play Store Aura - Mindfulness meets AI Source: Popular Science In this fast life, slow down a little and give yourself a time out with Aura. Aura is a new kind of mindfulness app that learns about you and simplifies your learning through guided meditations. It helps in reducing stress and increases positivity through 3-minute meditations, personalized by Artificial Intelligence. Aura is an intelligent app that leverages machine learning to give you a unique experience. After every exercise, you can rate your experience and Aura will learn how to provide more tailored meditations according to your needs. You can even track your mood and learn your mood patterns. Download App Store Google Play Store Replika - An emotive chatbot as a friend for life Source: Medium Want to be friends with someone who is always there to listen to you, talk to you, and never judges you? Then Replika is for you! It helps you make a real connection with an unreal friend. The idea of building Replika came from a very tragic background. The founder of the software company, Luka, Eugenia Kudya, lost her best friend in an accident in November 2015. She used to go through their messenger texts to bring back their memories. This is how she got this idea to develop a chatbot making it learn from the sample texts sent by her best friend. In her own words, “Most of the companies try to build an app that talks, but we tried to build an app that could listen well”. The chatbot uses neural network facilitating more natural one-on-one conversation with its user, and over time, learn how to speak like them. The source code is freely available for developers under the name CakeChat. It comes with a pre-trained model that you can use as is to run a chatbot that maintains a conversation in a certain emotional state. You can also build a variety of other conversational agents by using your own dataset, for example, persona-based model, emotional chatting machine, topic-centric model. To know more about the background and evolution of Replika, check out this amazing YouTube video. 
Download App Store Google Play Store Google Assistant - Your personal Google Source: Google Assistant When talking of AI-powered apps, voice assistants probably come first in your mind. Google Assistant makes your life easier and helps in organizing your day better. You can manage your little tasks, plan your day, enjoy entertainment, and get answers. It can also sync to your other devices including Google Home, smart TVs, laptops, and more. To give users smart assistance, Google Assistant relies on Artificial Intelligence technologies such as natural language processing, natural language understanding, and machine learning to understand what the user is saying, and to make suggestions or act on that language input. Download App Store Google Play Store Hound - Say it, Get it Source: Android Apps In an array of virtual assistants to choose from, Hound understands your voice commands better. You do not need to give “search query” like commands and can have a more natural conversation. Hound can be used for variety of tasks, some of them are: search, discover, and play music, set alarms, timers, and reminders, call, text, navigate hands-free, get the weather forecast. Hound’s speed and accuracy comes from their powerful Houndify platform. This platform combines Speech Recognition and Natural Language Understanding into a single step, which is called Speech-to-Meaning. Download App Store Google Play Store Picai - An app that picks filters for your pics, keeping you looking your best always Source: Google Play Store Picai with the help of Artificial Intelligence, recommends picture-perfect filters by analyzing the scene. It automatically analyzes the scene and with the help of object recognition detects the type of the object, for example, a plant, a girl, etc. It then uses a proprietary deep learning model to recommend two optimum filters from 100+ filters. What makes this app stand out is the split-screen filter selection, which makes the filter selection easier for the users. When using this app be warned of the picture quality and app size (76 MB), but it is definitely worth trying! Download Google Play Store Microsoft Pix - The pro photographer Source: MSPoweuser Named one of the 50 Best Apps of the Year by Time Magazine, Microsoft Pix helps you take better photos without the extra effort! It solves the problem of “not living in the moment”. It comes with some amazing features like, hyperlapse, live images Microsoft Pix Comix, artistic styles to transform your photos, smart settings that automatically checks scene and lighting between each shutter tap, and updates settings between each shot, and more. Microsoft Pix uses Artificial Intelligence to improve the image, such as cropping edges, enhancing color and tone, and sharpening focus. It includes enhanced deep-learning capabilities around image understanding. It captures a burst of 10 frames with each shutter click and uses AI to select three best shots. Before the remaining photos are deleted, it uses data from the entire burst to remove noise. These best, enhanced images are ready in about a second. The app also detects whether your eyes are open or not using the facial recognition technology. Download App Store ELSA - Your machine learning English teacher Source: TechCrunch ELSA (English Language Speech Assistant)  helps you in learning English and bettering your pronunciation every day. It provides you a curriculum tailored just, regular feedback, progress tracking, common phrases used in daily life. 
You can practice in a relaxed environment and improve your speaking skills to prepare for the TOEFL, IELTS, TOEIC ELSA coaches you in improving your English pronunciations by using speech recognition, deep learning, and Artificial Intelligence. Download App Store Google Play Store Socratic - Homework in a snap Source: Google Play Store Socratic is your new helper, apart from your parents, in completing those complex Math problems. You just need to take a photo of your homework and can get explanations, videos, step-by-step help, instantly. Also, these resources are jargon-free, helping you understand the concepts better. It supports all subjects including Math (Algebra, Calculus, Statistics, Graphing, etc), Science, Chemistry, History, English, Economics, and more. Socratic uses Artificial Intelligence to figure out the concepts you need to learn in order to answer it. For this it combines cutting-edge computer vision technologies, which read questions from images, with machine learning classifiers. These classifiers are built using millions of sample homework questions, to accurately predict which concepts will help you solve your question. Download App Store Google Play Store Recent News - Stay informed Source: Recent News Recent News is an app that will provide you customized news. Some of the features that it comes with to give you the daily dose of news include one-minute news summary with very quick load time, hot news, local news, and personalized recommendations, instantly share news on Facebook, Twitter, and other social networks, and many more. It uses Artificial Intelligence to learn about your interests, suggest relevant articles, and propose topics you might like to follow. So, the more you use it the better it becomes! The app is surely innovative and saves time, but I do wish the developers applied some innovation in the app’s name as well :P Download App Store Google Play Store And that’s the end of my list. People say, “Smartphones and apps are becoming smarter, and we are becoming dumber”. But I would like to say that these apps, with the right usage, empower us to become smarter. Agree? 7 Popular Applications of Artificial Intelligence in Healthcare 5 examples of Artificial Intelligence in Web apps What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]