
Tech Guides

852 Articles

How can Artificial Intelligence support your Big Data architecture?

Natasha Mathur
26 Sep 2018
6 min read
Getting a big data project in place is a tough challenge. But making it deliver results is even harder. That's where artificial intelligence comes in. By integrating artificial intelligence into your big data architecture, you'll be able to manage and analyze data in a way that has a substantial impact on your organization. With big data getting even bigger over the next couple of years, AI won't simply be an optional extra, it will be essential. According to IDC, the accumulated volume of big data will increase from 4.4 zettabytes to roughly 44 zettabytes (44 trillion GB) by 2020. Only by using artificial intelligence will you really be able to properly leverage such huge quantities of data. IDC also predicted a need for 181,000 people with deep analytical, data management, and interpretation skills this year. AI comes to the rescue again: it can ultimately compensate for today's lack of analytical resources with the power of machine learning, which enables automation. Now that we know why big data needs AI, let's have a look at how AI helps big data. But for that, you first need to understand the big data architecture. While it's clear that artificial intelligence is an important development in the context of big data, what are the specific ways it can support and augment your big data architecture? It can, in fact, help you across every component in the architecture. That's good news for anyone working with big data, and good for organizations that depend on it for growth as well.

Artificial Intelligence in Big data Architecture

In a big data architecture, data is collected from different data sources and then moves forward to other layers.

Artificial Intelligence in data sources

Using machine learning, the process of structuring incoming data becomes easier, making it easier for organizations to store and analyze their data. Keep in mind that large amounts of data from various sources can sometimes make data analysis even harder. This is because we now have access to heterogeneous sources of data that add different dimensions and attributes to the data, which further slows down the entire process of collecting data. To make things quicker and more accurate, it's important to consider only the most important dimensions. This process is called data dimensionality reduction (DDR). With DDR, it is important to note that the model should always convey the same information without any loss of insight or intelligence. Principal Component Analysis (PCA) is a useful machine learning method for dimensionality reduction. PCA performs feature extraction, meaning it combines all the input variables from the data, then drops the "least important" variables while making sure to retain the most valuable parts of all of the variables. Also, each of the "new" variables after PCA is independent of the others.
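To make the PCA step concrete, here is a minimal, hedged sketch of dimensionality reduction with scikit-learn; the synthetic records and the 95% variance threshold are illustrative assumptions rather than recommendations.

```python
# Hedged sketch of PCA-based dimensionality reduction (illustrative only; synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(1_000, 50))               # 1,000 records with 50 raw dimensions

X_scaled = StandardScaler().fit_transform(X)   # PCA is sensitive to feature scale
pca = PCA(n_components=0.95)                   # keep enough components to retain 95% of the variance
X_reduced = pca.fit_transform(X_scaled)

print("kept", pca.n_components_, "of 50 dimensions")
print("variance retained:", round(float(pca.explained_variance_ratio_.sum()), 3))
```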
Artificial Intelligence in data storage

Once data is collected from the data source, it then needs to be stored. AI can allow you to automate storage with machine learning, which also makes structuring the data easier. Machine learning models automatically learn to recognize patterns, regularities, and interdependencies from unstructured data and then adapt, dynamically and independently, to new situations. K-means clustering is one of the most popular unsupervised algorithms for data clustering, used when there's large-scale data without any defined categories or groups. The K-means clustering algorithm performs pre-clustering or classification of data into larger categories. Unstructured data gets stored as binary objects, annotations are stored in NoSQL databases, and raw data is ingested into data lakes. All this data acts as input to machine learning models. This approach is great as it automates the refinement of large-scale data. So, as the data keeps coming, the machine learning model will keep storing it depending on what category it fits.

Artificial Intelligence in data analysis

After the data storage layer comes the data analysis part. There are numerous machine learning algorithms that help with effective and quick data analysis in big data architecture. One algorithm that can really step up the game when it comes to data analysis is Bayes Theorem. Bayes Theorem uses stored data to 'predict' the future, which makes it a wonderful fit for big data: the more data you feed to a Bayes algorithm, the more accurate its predictive results become. Bayes Theorem determines the probability of an event based on prior knowledge of conditions that might be related to the event. Another machine learning technique that is great for performing data analysis is the decision tree. Decision trees help you reach a particular decision by presenting all possible options and their probability of occurrence, and they're extremely easy to understand and interpret. LASSO (least absolute shrinkage and selection operator) is another algorithm that helps with data analysis. LASSO is a regression analysis method capable of performing both variable selection and regularization, which enhances the prediction accuracy and interpretability of the resulting model. LASSO regression analysis can be used to determine which of your predictors are most important (a brief sketch appears after the related reading below). Once the analysis is done, the results are presented to other users or stakeholders. This is where the data utilization part comes into play. Data helps to inform decision making at various levels and in different departments within an organization.

Artificial intelligence takes big data to the next level

Heaps of data get generated every day by organizations all across the globe. Given such huge amounts of data, it can sometimes go beyond the reach of current technologies to get the right insights and results out of it. Artificial intelligence takes the big data process to another level, making it easier to manage and analyze a complex array of data sources. This doesn't mean that humans will instantly lose their jobs - it simply means we can put machines to work to do things that even the smartest and most hardworking humans would be incapable of. There's a saying that goes "big data is for machines; small data is for people", and it couldn't be any truer.

Read next:
7 AI tools mobile developers need to know
How AI is going to transform the Data Center
How Serverless computing is making AI development easier
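As promised above, here is a minimal, hedged sketch of LASSO-style variable selection; the synthetic predictors, the coefficients used to generate the target, and the regularization strength are all illustrative assumptions.

```python
# Hedged sketch of LASSO-based variable selection (illustrative only; synthetic data).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                 # ten candidate predictors
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=500)   # only two actually matter

model = Lasso(alpha=0.1).fit(X, y)
print("coefficients:", np.round(model.coef_, 2))   # irrelevant predictors are shrunk towards 0
```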


How will AI impact job roles in Cybersecurity

Melisha Dsouza
25 Sep 2018
7 min read
"If you want a job for the next few years, work in technology. If you want a job for life, work in cybersecurity." -Aaron Levie, chief executive of cloud storage vendor Box The field of cybersecurity will soon face some dire, but somewhat conflicting, views on the availability of qualified cybersecurity professionals over the next four or five years. Global Information Security Workforce Study from the Center for Cyber Safety and Education, predicts a shortfall of 1.8 million cybersecurity workers by 2022. The cybersecurity workforce gap will hit 1.8 million by 2022. On the flipside,  Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures highlight that there will be 3.5 million cybersecurity job openings by 2021. Cybercrime will feature more than triple the number of job openings over the next 5 years. Living in the midst of a digital revolution caused by AI- we can safely say that AI will be the solution to the dilemma of “what will become of human jobs in cybersecurity?”. Tech enthusiasts believe that we will see a new generation of robots that can work alongside humans and complement or, maybe replace, them in ways not envisioned previously. AI will not make jobs easier to accomplish, but also bring about new job roles for the masses. Let’s find out how. Will AI destroy or create jobs in Cybersecurity? AI-driven systems have started to replace humans in numerous industries. However, that doesn’t appear to be the case in cybersecurity. While automation can sometimes reduce operational errors and make it easier to scale tasks, using AI to spot cyberattacks isn’t completely practical because such systems yield a large number of false positives. It lacks the contextual awareness which can lead to attacks being wrongly identified or missed completely. As anyone who’s ever tried to automate something knows, automated machines aren’t great at dealing with exceptions that fall outside of the parameters to which they have been programmed. Eventually, human expertise is needed to analyze potential risks or breaches and make critical decisions. It’s also worth noting that completely relying on artificial intelligence to manage security only leads to more vulnerabilities - attacks could, for example, exploit the machine element in automation. Automation can support cybersecurity professionals - but shouldn’t replace them Supported by the right tools, humans can do more. They can focus on critical tasks where an automated machine or algorithm is inappropriate. In the context of cybersecurity, artificial intelligence can do much of the 'legwork' at scale in processing and analyzing data, to help inform human decision making. Ultimately, this isn’t a zero-sum game -  humans and AI can work hand in hand to great effects. AI2 Take, for instance, the project led by the experts at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Lab. AI2 (Artificial Intelligence + Analyst Intuition) is a system that combines the capabilities of AI with the intelligence of human analysts to create an adaptive cybersecurity solution that improves over time. The system uses the PatternEx machine learning platform, and combs through data looking for meaningful, predefined patterns. For instance, a sudden spike in postback events on a webpage might indicate an attempt at staging a SQL injection attack. The top results are then presented to a human analyst, who will separate any false positives and flags legitimate threats. 
The information is then fed into a virtual analyst that uses human input to learn and improve the system's detection rates. In future iterations, a more refined dataset is presented to the human analyst, who goes through the results and once again "teaches" the system to make better decisions. AI2 is a perfect example that shows man and machine can complement each other's strengths to create something even more effective. It's worth remembering that in any company that uses AI for cybersecurity, automated tools and techniques require significant algorithm training and data markup.

New cybersecurity job roles and the evolution of the job market

The bottom line of this discussion is that AI will not destroy cybersecurity jobs, but it will drastically change them. The primary focus of many cybersecurity jobs today is going through the hundreds of security tools available and determining which tools and techniques are most appropriate for their organization's needs. Of course, as systems move to the cloud, these decisions will already be made because cloud providers will offer built-in security solutions. This means that the number of companies that will need a full staff of cybersecurity experts will be drastically reduced. Instead, companies will need more individuals that understand issues like the potential business impact and risk of different projects and architectural decisions. This demands a very different set of skills and knowledge compared to the typical current cybersecurity role - it is less directly technical and will require more integration with other key business decision makers. AI can provide assistance, but it can't offer easy answers.

Humans and AI working together

Companies concerned with cybersecurity legal compliance and effective real-world solutions should note that cybersecurity and information technology professionals are best suited for tasks such as risk analysis, policy formulation, and cyber attack response. Human intervention can help AI systems learn and evolve. Take the example of the Spain-based antivirus company Panda Security, which once had a number of people reverse-engineering malicious code and writing signatures. Today, to keep pace with overflowing amounts of data, the company would need hundreds of thousands of engineers to deal with malicious code. Enter AI, and only a small team of engineers is required to look at more than 200,000 new malware samples per day.

Is AI going to steal cybersecurity engineers' jobs?

So what about the former employees that used to perform this job? Have they been laid off? The answer is a straight no - but they will need to upgrade their skill set. In the world of cybersecurity, AI is going to create new jobs, as it throws up new problems to be analyzed and solved. It's going to create what are being called "new collar" jobs - something that IBM's hiring strategy has already taken into account. Once graduates enter the IBM workforce, AI enters the equation to help them get a fast start. Even junior analysts can investigate new malware infecting employees' mobile phones: AI would quickly research the new malware impacting the phones, identify the characteristics reported by others, and provide a recommended course of action. This would relieve analysts from the manual work of going through reams of data and lines of code - in theory, it should make their job more interesting and more fun.
Artificial intelligence and the human workforce, then, aren't in conflict when it comes to cybersecurity. Instead, they can complement each other to create new job opportunities that will test the skills of the upcoming generation, and lead experienced professionals into new and maybe more interesting directions. It will be interesting to see how the cybersecurity workforce makes use of AI in the future.

Read next:
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
15 millions jobs in Britain at stake with AI robots set to replace humans at workforce
5 ways artificial intelligence is upgrading software engineering


The AMPed up web by Google

Amarabha Banerjee
23 Sep 2018
5 min read
Google apparently wants all web developers to adopt the AMP approach for their websites. The AMP project was announced by Google on October 7, 2015, and AMP pages first became available to web users in February 2016. Mobile search is now more popular than desktop search. It is important for web pages to appear in Google's mobile search results, and this is why AMP is not optional for web publishers: without AMP, a publisher's articles will be extremely unlikely to appear in the Top Stories carousel on mobile search in Google. This means developers have two options - either design the complete app in the AMP format, or keep two formats ready, one as per their own design considerations and the other as per the Google AMP format. Does that really work for developers? We will try to address that question in this article.

The trouble with Web content - Searchability and Indexing

The searchability of your application is heavily dependent on its structure and format. Being found on Google search depends on how easily Google can crawl and index your application. The main challenge for indexing is the vast nature of the internet and the wide variety of applications that exist. The absence of a common structure or format makes the task of checking website content and categorizing it very difficult. This was the primary reason why Google came up with the idea of Accelerated Mobile Pages. The purpose of adopting AMP is to make all web and mobile applications conform to a certain structure, so that they can be easily classified and categorized. Since the advent of the 'mobile first' approach - an approach that puts more emphasis on the mobile platform and UI considerations for mobile devices - the AMP way has slowly been becoming the preferred way of designing apps. But the real question here is: are developers adopting this particular design thinking willingly, or are they finding themselves running out of other options, with Google forcing their hand on how they design their apps?

The Ground Realities of the Web - Diversity vs Uniformity

The truth is that the internet is a diverse playing ground. It's a place for information sharing. As such, the general consensus is not exactly in line with Google's vision of a uniform web. Google started off as a search engine whose main task was to be the gateway to information - to lead people to specific web addresses. From there, it has evolved into one of the leading beneficiaries of the world wide web. The next step in Google's evolution, quite naturally, seems to be taking control over content and hosting. Google has also recently announced that it is going to lay undersea internet cable from Japan to Guam, and from Guam to Australia. It portrays this as an economic decision that will eventually save money once the cables are laid, but some see it as a step to remove external dependencies and a step closer to total control over the internet. Google's recent partnership with WordPress is proof that Google is taking steps towards owning the web hosting space. The AMP specification means that Google will have the final say over design specifications. Diversity in design will suffer, as you would not want to spend time designing a site that won't be indexed by Google.
Hence developers will have only two options: use the pre-designed template provided by Google, or make two specific website designs, one as per their own design considerations and the other as per AMP. But Google will keep showing you error signs if your AMP version doesn't match the main design. Hence the choice finally narrows down to choosing AMP.

The trouble with AMP

Content published using AMP is stored in a Google cache, and hence repeated views are loaded from the cache. This also means that the user will actually spend more time on Google's own page and will see Google's ads, not the ones the content creator had put up. This, by extension, means loss of revenue for the actual content creator. Using analytics is far more difficult on AMP-based pages. AMP pages are difficult to customize and hence difficult to design without looking similar, so the web might end up with similar-looking apps with similar buttons and UIs all across. The AMP model makes its own decisions about how it actually shows your content, so you don't get to choose which of your metadata is displayed - Google does. That means less control over your content. With Google controlling the extent to which your website data is displayed, all pages are going to look similar, with very little metadata shown; fake stories will appear alongside normal news thumbnails because there will be too little text displayed for readers to make a call on whether a story is true or false. All of this comes with the caveat that AMPed-up pages will rank higher on Google. If that's the proverbial carrot used to lure web publishers into bringing their content under Google's umbrella, then we must say it's going to be a tricky choice for developers. No one wants a slow web with irregular indexing and erroneous search results. But how far are we prepared to let go of individuality and design thinking in this process? That's a question to ponder.

Read next:
Google wants web developers to embrace AMP. Great news for users, more work for developers.
Like newspapers, Google algorithms are protected by the First Amendment
Implementing Dependency Injection in Google Guice


Rust as a Game Programming Language: Is it any good?

Amarabha Banerjee
22 Sep 2018
4 min read
We have moved light years away from the handheld gaming days. The good old Tetris and Mario games were easy to play, low on graphics and, in spite of their apparently simple appearance, super difficult to program. Although it's difficult to trace back the language in which all of these games were written, many of them were written in the C family of languages, which contributed to the difficulty in programming them. Rust has been touted as one of the successors of C, which in turn brings the question back: if C was difficult to code in, then how exactly is Rust going to be different? The answer to this question lies in Rust's approach. Rust was designed primarily as a systems programming language by the Mozilla Foundation. The primary game development languages over the past 20 years have been C and C++. Rust brings a fresh change in approach - from object oriented to data oriented. The problem with object oriented programming was summarized nicely by Catherine West from Chucklefish. According to her, treating game elements like NPCs and game worlds as objects might work well at a small scale. But when you are trying to create your own game engine, treating game elements as objects implies creating a lot of super-sized objects with complex layers of dependencies. The Rust approach, on the other hand, is data oriented. This implies that every element is treated as data, which simplifies the process of creating mid-sized game engines a lot. Chucklefish being a significant name in 2D game development, this statement from Catherine West comes as a major boost for developers who want to use Rust for developing 2D games. She has, however, expressed doubts about using Rust for 3D game development. Another important personality who has recently come out in support of Rust is Andrea Pessino, CTO of Ready at Dawn. Ready at Dawn is a well-established game studio known for games such as The Order: 1886, Daxter and various God of War titles. His public endorsement is another feather in Rust's cap for game development. The present state of game development in Rust is quite encouraging. There are quite a few low-level graphics libraries like GFX. GFX is a low-level abstraction layer over platform-specific graphics interfaces (OpenGL, Metal, Vulkan). It offers handy wrappers over windowing backends (glutin, the Rust one, or wrappers around the Vulkan system, GLFW and more). GFX is still at a very early stage of development, with the present version being 0.17. Although major game engines like Unity and Unreal are yet to support Rust for game development, there exist a few complete game engines which allow you to create complete games with Rust using their framework. The first one is Piston. It is the oldest game engine for Rust, and also the most stable, with great documentation. However, many people find Piston confusing and hard to use as it is super-modular by design. Sometimes it is even hard to understand which module to load to achieve a certain goal or build a certain component of a game. Amethyst is a more recent game engine/framework inspired by commercial monolithic game engines. It comes with all the necessary dependencies in its package. However, it is evolving quickly, and hence the present documentation is already outdated. There is, though, a vibrant community looking to bring more and more developers into the fold, which gives new developers an opportunity to get into game development with Rust and get involved with a game engine as well.
GGEZ is a simple 2D game engine inspired by the LÖVE engine. This library is more suited to creating simple 2D games for hobbyists. GGEZ is also very new and changes quickly, but its design simplicity is an incentive for indie developers and hobbyists to start creating games with it. Some other popular libraries include:

- noise-rs - a noise generator
- rlua - high-level bindings between Rust and Lua
- sfxr - a reimplementation of DrPetter's "sfxr" sound effect generator as a Rust library

The conclusion that we can draw from here is that Rust has a lot of promise when it comes to game development. With the data oriented approach, easy memory management and access to low-level performance enhancement techniques, Rust can become a full-fledged game development language in the near future.

Read next:
Best game engines for Artificial Intelligence game development
Implementing Unity game engine and assets for 2D game development
How to use arrays, lists, and dictionaries in Unity for 3D game development


6 common challenges faced by Android App developers

Guest Contributor
21 Sep 2018
5 min read
The primary target for businesses working on mobile apps is the Android platform, thanks to the massive market share the mobile operating system holds. Its popularity can be attributed to the fact that it is open source and is regularly updated with new enhancements and features. Android devices generally tend to differ based on their hardware features even when powered by the same version of the Android OS. This is why it is essential that, when developing apps for Android, developers create mobile apps capable of targeting a diverse range of mobile devices running different versions of the Android OS. During the various stages of planning, developing and testing, developers need to focus comprehensively on the app's functionality, accessibility, usability, performance, and security so that users stay engaged regardless of their choice of device. They also need to look for ways to make their apps deliver a more personalized user experience across different devices and operating system versions. Furthermore, developers need to understand and find solutions to the common challenges involved in Android app development.

Common Challenges Android App Developers Face

1. Hardware Features

The Android OS is unlike any other mobile operating system. For one thing, it is an open source system, and Alphabet gives manufacturers the leeway to customize the operating system to their specific needs. Also, there are no regulations on the devices being released by the different manufacturers. As a result, you can find various Android devices with different hardware features running on the same Android version. Two smartphones running the latest version of Android, for example, may have different screen resolutions, cameras, screen sizes, and other hardware features. During Android app development, developers need to account for all of this to ensure the application delivers a personalized experience to each user.

2. Lack of Uniform User Interface Design Rules

Since Google is yet to release any standard UI (user interface) design rules or process for mobile app developers, most developers don't follow any standard UI development rules or procedure. Because developers are creating custom UIs in their preferred way, a lot of apps tend to function or look different across different devices. This diversity and incompatibility of the UI usually affects the user experience that the Android app delivers. Smart developers prefer to go for a responsive layout that keeps the UI consistent across different devices. Moreover, developers need to test the UI of the app extensively by combining emulators and real mobile devices. Designing a UI that makes the app deliver the same user experience across varying Android devices is one of the more daunting challenges developers face.

3. API Incompatibility

A lot of developers make use of third-party APIs to enhance the functionality and interoperability of a mobile device. Unfortunately, not all third-party APIs available for Android app development are of high quality. Some APIs were created for a particular Android version and will not work on devices running a different version of the operating system. Developers usually have to come up with ways to make a single API work on all Android versions, a task they often find to be very challenging.

4. Security Flaws

As previously mentioned, Android is open source software, and because of that, manufacturers find it easy to customize Android to their desired specifications.
However, this openness and the massive market size make Android a frequent target for security attacks. There have been several instances where the security of millions of Android mobile devices has been affected by security flaws and bugs like mRST, Stagefright, FakeID, 'Certifi-gate,' TowelRoot and Installer Hijacking. Developers need to include robust security features in their applications and utilize the latest encryption mechanisms to keep user information secure and out of the hands of hackers.

5. Search Engine Visibility

The latest data from Statista shows that the Google Play Store contains a higher number of mobile apps than any other mobile app store. Additionally, a large number of Android users prefer free apps to paid apps, which is why developers need to promote their mobile applications to increase their download numbers and employ application monetization options. The best way to promote an app and reach its target audience is to use comprehensive digital marketing strategies, and most developers engage digital marketing professionals to promote their apps aggressively.

6. Patent Issues

Google doesn't implement any guidelines for evaluating the quality of new apps being submitted to the Play Store. This lack of a quality assessment guideline causes a lot of patent-related issues for developers. Some developers, to avoid patent issues, have to modify and redesign their apps in the future. Based on my personal experience, I have tried to cover the general challenges faced by Android app developers. I'm sure being wary of these challenges will help developers build successful apps in the most hassle-free way.

Author Bio

Harnil Oza is the CEO of Hyperlink InfoSystem, one of the leading app development companies in New York, USA and India, which delivers mobile solutions mainly on the Android and iOS platforms. He regularly contributes his knowledge on leading blogging sites.

Read next:
LEGO launches BrickHeadz Builder AR, a new and free Android app to bring bricks and toys to life
How Android app developers can convert iPhone apps
How to Secure and Deploy an Android App


7 AI tools mobile developers need to know

Bhagyashree R
20 Sep 2018
11 min read
Advancements in artificial intelligence (AI) and machine learning have enabled the evolution of the mobile applications we see today. With AI, apps are now capable of recognizing speech, images, and gestures, and translating voices with extraordinary success rates. With so many apps hitting the app stores, it is crucial that they stand apart from competitors by meeting the rising standards of consumers. To stay relevant, it is important that mobile developers keep up with these advancements in artificial intelligence. As AI and machine learning become increasingly popular, there is a growing selection of tools and software available for developers to build their apps with. These cloud-based and device-based artificial intelligence tools provide developers a way to power their apps with unique features. In this article, we will look at some of these tools and how app developers are using them in their apps.

Caffe2 - A flexible deep learning framework

Caffe2 is a lightweight, modular, scalable deep learning framework developed by Facebook. It is a successor to Caffe, a project started at the University of California, Berkeley. It is primarily built for production use cases and mobile development and offers developers greater flexibility for building high-performance products. Caffe2 aims to provide an easy way to experiment with deep learning and leverage community contributions of new models and algorithms. It is cross-platform and integrates with Visual Studio, Android Studio, and Xcode for mobile development. Its core C++ libraries provide speed and portability, while its Python and C++ APIs make it easy for you to prototype, train, and deploy your models. It utilizes GPUs when they are available and is fine-tuned to take full advantage of the NVIDIA GPU deep learning platform. To deliver high performance, Caffe2 uses some of NVIDIA's deep learning SDK libraries, such as cuDNN, cuBLAS, and NCCL.

Functionalities
- Enables automation
- Image processing
- Object detection
- Statistical and mathematical operations
- Supports distributed training, enabling quick scaling up or down

Applications
Facebook is using Caffe2 to help its developers and researchers train large machine learning models and deliver AI on mobile devices. Using Caffe2, it significantly improved the efficiency and quality of its machine translation systems. As a result, all machine translation models at Facebook have been transitioned from phrase-based systems to neural models for all languages.

OpenCV - Give the power of vision to your apps

OpenCV, short for Open Source Computer Vision Library, is a collection of programming functions for real-time computer vision and machine learning. It has C++, Python, and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. It also supports the deep learning frameworks TensorFlow and PyTorch. Written natively in C/C++, the library can take advantage of multi-core processing. OpenCV aims to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. The library consists of more than 2500 optimized algorithms, including both classic and state-of-the-art computer vision algorithms.
Functionalities
These algorithms can be used to:
- Detect and recognize faces
- Identify objects
- Classify human actions in videos
- Track camera movements and moving objects
- Extract 3D models of objects
- Produce 3D point clouds from stereo cameras
- Stitch images together to produce a high-resolution image of an entire scene
- Find similar images in an image database

Applications
Plickers is an assessment tool that lets you poll your class for free, without the need for student devices. It uses OpenCV as its graphics and video SDK. You just have to give each student a card called a paper clicker, and use your iPhone/iPad to scan them to do instant checks-for-understanding, exit tickets, and impromptu polls.
Also check out: FastCV, BoofCV

TensorFlow Lite and Mobile - An Open Source Machine Learning Framework for Everyone

TensorFlow is an open source software library for building machine learning models. Its flexible architecture allows easy model deployment across a variety of platforms ranging from desktops to mobile and edge devices. Currently, TensorFlow provides two solutions for deploying machine learning models on mobile devices: TensorFlow Mobile and TensorFlow Lite. TensorFlow Lite is an improved version of TensorFlow Mobile, offering better performance and smaller app size. Additionally, it has very few dependencies compared to TensorFlow Mobile, so it can be built and hosted on simpler, more constrained device scenarios. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API. The catch is that TensorFlow Lite is currently in developer preview and only covers a limited set of operators, so to develop production-ready mobile TensorFlow apps, it is recommended to use TensorFlow Mobile. TensorFlow Mobile also supports customization to add new operators that are not supported by default, which is a requirement for many models in different AI apps. Although TensorFlow Lite is in developer preview, its future releases "will greatly simplify the developer experience of targeting a model for small devices". It is also likely to replace TensorFlow Mobile, or at least overcome its current limitations.

Functionalities
- Speech recognition
- Image recognition
- Object localization
- Gesture recognition
- Optical character recognition
- Translation
- Text classification
- Voice synthesis

Applications
The Alibaba tech team is using TensorFlow Lite to implement and optimize speaker recognition on the client side. This addresses many of the common issues of the server-side model, such as poor network connectivity, extended latency, and poor user experience. Google uses TensorFlow for advanced machine learning models including Google Translate and RankBrain.
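To give a flavour of what running a converted model looks like, here is a minimal, hedged sketch of TensorFlow Lite conversion and inference from Python; the toy Keras model and the random input tensor are placeholder assumptions standing in for a real trained model and real app data, and exact behaviour may vary by TensorFlow version.

```python
# Hedged sketch of TensorFlow Lite conversion and inference (illustrative only).
import numpy as np
import tensorflow as tf

# A toy model so the example is self-contained; a real app would convert a trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# This is the part that runs on the device: load the flat buffer and invoke it.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.random.rand(1, 4).astype(np.float32))
interpreter.invoke()
print("class probabilities:", interpreter.get_tensor(out["index"]))
```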
Core ML - Integrate machine learning in your iOS apps

Core ML is a machine learning framework which can be used to integrate machine learning models into your iOS apps. It supports Vision for image analysis, Natural Language for natural language processing, and GameplayKit for evaluating learned decision trees. Core ML is built on top of the following low-level APIs, providing a simple higher-level abstraction over them:
- Accelerate optimizes large-scale mathematical computations and image calculations for high performance.
- Basic neural network subroutines (BNNS) provides a collection of functions with which you can implement and run neural networks trained with previously obtained data.
- Metal Performance Shaders is a collection of highly optimized compute and graphics shaders that are designed to integrate easily and efficiently into your Metal app.

To train and deploy custom models you can also use the Create ML framework. It is a machine learning framework in Swift which can be used to train models using native Apple technologies like Swift, Xcode, and other Apple frameworks.

Functionalities
- Face and face landmark detection
- Text detection
- Barcode recognition
- Image registration
- Language and script identification
- Design games with functional and reusable architecture

Applications
Lumina is a camera framework designed in Swift for easily integrating Core ML models - as well as image streaming, QR/barcode detection, and many other features.

ML Kit by Google - Seamlessly build machine learning into your apps

ML Kit is a cross-platform suite of machine learning tools for Google's Firebase mobile development platform. It brings together Google's ML technologies, such as the Google Cloud Vision API, TensorFlow Lite, and the Android Neural Networks API, in a single SDK, enabling you to apply ML techniques to your apps easily. You can leverage its ready-to-use APIs for common mobile use cases such as recognizing text, detecting faces, identifying landmarks, scanning barcodes, and labeling images. If these APIs don't cover your machine learning problem, you can use your own existing TensorFlow Lite models: you just have to upload your model to Firebase and ML Kit will take care of the hosting and serving. These APIs can run on-device or in the cloud. The on-device APIs process your data quickly and work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology to give you an even higher level of accuracy.

Functionalities
- Automate tedious data entry for credit cards, receipts, and business cards, or help organize photos.
- Extract text from documents, which you can use to increase accessibility or translate documents.
- Real-time face detection can be used in applications like video chat or games that respond to the player's expressions.
- Using image labeling you can add capabilities such as content moderation and automatic metadata generation.

Applications
A popular calorie counter app, Lose It!, uses the Google ML Kit Text Recognition API to quickly capture nutrition information and ensure it's easy to record and extremely accurate. PicsArt uses ML Kit custom model APIs to provide TensorFlow-powered 1000+ effects that enable millions of users to create amazing images with their mobile phones.

Dialogflow - Give users new ways to interact with your product

Dialogflow is a Natural Language Understanding (NLU) platform that makes it easy for developers to design and integrate conversational user interfaces into mobile apps, web applications, devices, and bots. You can integrate it with Alexa, Cortana, Facebook Messenger, and other platforms your users are on. With Dialogflow you can build interfaces, such as chatbots and conversational IVR, that enable natural and rich interactions between your users and your business. It provides this human-like interaction with the help of agents. Agents can understand the vast and varied nuances of human language and translate them to the standard and structured meaning that your apps and services can understand. It comes in two editions: Dialogflow Standard Edition and Dialogflow Enterprise Edition.
Dialogflow Enterprise Edition users have access to Google Cloud Support and a service level agreement (SLA) for production deployments.

Functionalities
- Provide customer support
- One-click integration on 14+ platforms
- Supports multilingual responses
- Improve NLU quality by training with negative examples
- Debug using more insights and diagnostics

Applications
Domino's simplified the process of ordering pizza using Dialogflow's conversational technology. Domino's leveraged its large customer service knowledge base and Dialogflow's NLU capabilities to build both simple customer interactions and increasingly complex ordering scenarios.
Also check out: Wit.ai, Rasa NLU

Microsoft Cognitive Services - Make your apps see, hear, speak, understand and interpret your user needs

Cognitive Services is a collection of APIs, SDKs, and services that enables developers to easily add cognitive features to their applications, such as emotion and video detection and facial, speech, and vision recognition, among others. You need not be an expert in data science to make your systems more intelligent and engaging. The pre-built services come with high-quality RESTful intelligent APIs for the following:
- Vision: Make your apps identify and analyze content within images and videos. Provides capabilities such as image classification, optical character recognition in images, face detection, person identification, and emotion identification.
- Speech: Integrate speech processing capabilities into your app or services, such as text-to-speech, speech-to-text, speaker recognition, and speech translation.
- Language: Your application or service will understand the meaning of unstructured text or the intent behind a speaker's utterances. It comes with capabilities such as text sentiment analysis, key phrase extraction, and automated and customizable text translation.
- Knowledge: Create knowledge-rich resources that can be integrated into apps and services. It provides features such as QnA extraction from unstructured text, knowledge base creation from collections of Q&As, and semantic matching for knowledge bases.
- Search: Using the Search APIs you can find exactly what you are looking for across billions of web pages. They provide features like ad-free, safe, location-aware web search, Bing visual search, custom search engine creation, and many more.

Applications
To safeguard against fraud, Uber uses the Face API, part of Microsoft Cognitive Services, to help ensure the driver using the app matches the account on file. Cardinal Blue developed PicCollage, a popular mobile app that allows users to combine photos, videos, captions, stickers, and special effects to create unique collages.
Also check out: AWS machine learning services, IBM Watson

These were some of the tools that will help you integrate intelligence into your apps. These libraries make it easier to add capabilities like speech recognition, natural language processing, computer vision, and many others, giving users the wow moment of accomplishing something that wasn't quite possible before. Along with choosing the right AI tool, you must also consider other factors that can affect your app's performance. These factors include the accuracy of your machine learning model (which can be affected by bias and variance), using correct datasets for training, seamless user interaction, and resource optimization, among others.
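One quick way to get a feel for the bias/variance point above is to compare training and validation accuracy; the following hedged sketch uses scikit-learn's digits dataset and two arbitrary tree depths purely for illustration.

```python
# Hedged sketch of a quick bias/variance check (illustrative only; placeholder dataset and depths).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in (2, None):   # a very shallow tree vs a fully grown tree
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(f"max_depth={depth}: train={tree.score(X_train, y_train):.2f}, "
          f"validation={tree.score(X_val, y_val):.2f}")

# Low accuracy on both splits suggests bias (underfitting); a large train/validation gap
# suggests variance (overfitting). Either way, more and better training data usually helps.
```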
While building any intelligent app, it is also important to keep in mind that the AI in your app should be solving a problem and shouldn't exist just because it is cool. Thinking from the user's perspective will allow you to assess the importance of a particular problem. A great AI app will not just help users do something faster, but enable them to do something they couldn't do before. With the growing popularity of intelligent apps and the need to speed up their development, many companies ranging from huge tech giants to startups are providing AI solutions. In the future we will definitely see more developer tools coming onto the market, making AI in apps the norm.

Read next:
6 most commonly used Java Machine learning libraries
5 ways artificial intelligence is upgrading software engineering
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms

Richard Gall
19 Sep 2018
9 min read
The ethics of artificial intelligence seems to have found its way into just about every corner of public life. From law enforcement to justice, through to recruitment, artificial intelligence is impacting both the work we do and the way we think. But if you really want to get into the ethics of artificial intelligence you need to go further than the public realm and move into the bedroom. Sex robots have quietly been a topic of conversation for a number of years, but with the rise of artificial intelligence they appear to have found their way into the mainstream - or at least the edges of the mainstream. There's potentially some squeamishness when thinking about sex robots, but, in fact, if we want to think seriously about the consequences of artificial intelligence - from how it is built to how it impacts the way we interact with each other and other things - sex robots are a great place to begin. Read next: Introducing Deon, a tool for data scientists to add an ethics checklist

Sexualizing artificial intelligence

It's easy to get caught up in the image of a sex doll: plastic skinned, impossible breasts and empty eyes, sad and uncanny. But sexualized artificial intelligence can come in many other forms too. Let's start with sex chatbots. These are, fundamentally, a robotic intelligence that is able to respond to and stimulate a human's desires. But what's significant is that they treat the data of sex and sexuality as primarily linguistic - the language people use to describe themselves, their wants, their needs, their feelings. The movie Her is a great example of a sexualised chatbot. Of course, the digital assistant doesn't begin sexualised, but Joaquin Phoenix ends up falling in love with his female-voiced digital assistant through conversation and intimate interaction. The physical aspect of sex is something that only comes later. https://youtu.be/3n5muEWaE_Q

Ai Furuse - the Japanese sex chatbot

Sex chatbots exist in real life too. The best example of these is Ai Furuse, a virtual girlfriend that interacts with you in an almost human-like manner. Ai Furuse is programmed with a dictionary of more than 30,000 words, and is able to respond to conversational cues. But more importantly, Ai Furuse is able to learn from conversations. She can gather information about her interlocutor and, apparently, even identify changes in their mood. The more you converse with the chatbot, the more intimate and close your relationship should become (in theory). Immediately, we can begin to see some big engineering questions. These are primarily about design, but remember - wherever you begin to think about design, you start to move towards the domain of ethics as well. The very process of learning through interaction requires the AI to be programmed in certain ways. It's a big challenge for engineers to determine what's really important in these interactions; they need to make judgements on how users behave. The information that's passed to the chatbot needs to be codified and presented in a way that can be understood and processed. That requires some work in itself. The models of desire on which Ai Furuse is built are necessarily limited. They bear the marks of the engineers that helped to create 'her'. It becomes a question of ethics once we start to ask if these models might be normative in some way. Do they limit or encourage certain ways of interacting?

Desire algorithms

In the context of one chatbot that might not seem like a big deal.
But if (or as) the trend moves into the mainstream, we start to enter a world where the very fact of engineering chatbots inadvertently engineers the desires and sexualities that are expressed towards them. In this instance, not only do we shape the algorithms (which is what's meant to happen), we also allow these 'desire algorithms' to shape our desires and wants too.

Storing sexuality on the cloud

But there's another, more practical issue as well. If the data on which sex chatbots or virtual lovers run is stored in the cloud, we're in a situation where the most private aspects of our lives are stored somewhere that could easily be accessed by malicious actors. This is a real risk with Ai Furuse, where cloud space is required for your 'virtual girlfriend' to 'evolve' further. You pay for additional cloud space. It's not hard to see how this could become a problem in the future. Thousands of sexual and romantic conversations could be easily harvested for nefarious purposes.

Sex robots, artificial intelligence and the problem of consent

Language, then, is the kernel of sexualised artificial intelligence. Algorithms, when made well, should respond to, process, adapt to and then stimulate further desire. But that's only half the picture. The physical reality of sex robots - both as literal objects, but also the physical effects of what they do - only adds a further complication into the mix. Questions about what desire is - why we have it, what we should do with it - are at the forefront of this debate. If, for example, a paedophile can use a child-like sex robot as a surrogate object of his desires, is that, in fact, an ethical use of artificial intelligence? Here the debate isn't just about the algorithm, but about how it should be deployed. Is the algorithm performing a therapeutic purpose, or is it actually encouraging a form of sexuality that fails to understand the concept of harm and consent? This is an important question in the context of sex robots, but it's also an important question for the broader ethics of AI. If we can build an AI that is able to do something (ie. automate billions of jobs), should we do it? Whose responsibility is it to deal with the consequences?

The campaign against sex robots

These are some of the considerations that inform the perspective of the Campaign Against Sex Robots. On their website, they write: "Over the last decades, an increasing effort from both academia and industry has gone into the development of sex robots – that is, machines in the form of women or children for use as sex objects, substitutes for human partners or prostituted persons. The Campaign Against Sex Robots highlights that these kinds of robots are potentially harmful and will contribute to inequalities in society. We believe that an organized approach against the development of sex robots is necessary in response the numerous articles and campaigns that now promote their development without critically examining their potentially detrimental effect on society." For the campaign, sex robots pose a risk in that they perpetuate already existing inequalities and forms of exploitation in society. They prevent us from facing up to these inequalities. The campaign argues that they will "reduce human empathy that can only be developed by an experience of mutual relationship."

Consent and context

Consent is the crucial problem when it comes to artificial intelligence. And you could say that it points to one of the limitations of artificial intelligence that we often miss - context.
Algorithms can't ever properly understand context. There will, undoubtedly, be people who disagree with this. Algorithms can, for example, understand the context of certain words and sentences, right? Well yes, that may be true, but that's not strictly understanding context. Artificial intelligence algorithms are set a context, one from which they cannot deviate. They can't, for example, decide that encouraging a paedophile to act out their fantasies is wrong; they are programmed to do just that. But the problem isn't simply with robot consent. There's also an issue with how we consent to an algorithm in this scenario. As journalist Adam Rogers writes in this article for Wired, published at the start of 2018: "It's hard to consent if you don't know to whom or what you're consenting. The corporation? The other people on the network? The programmer?" Rogers doesn't go into detail on this insight, but it gets to the crux of the matter when discussing artificial intelligence and sex robots. If sex is typically built on a relationship between people, with established forms of communication that establish both consent and desire, what happens when this becomes literally codified? What happens when these additional layers of engineering and commerce get added on top of basic sexual interaction?

Is the problem that we want artificial intelligence to be human?

Towards the end of the same piece, Rogers finds a possible solution from privacy researcher Sarah Jamie Lewis. Lewis wonders whether one of the main problems with sex robots is this need to think in humanoid terms. "We're already in the realm of devices that look like alien tech. I looked at all the vibrators I own. They're bright colors. None of them look like a penis that you'd associate with a human. They're curves and soft shapes." Of course, this isn't an immediate solution - sex robots are meant to stimulate sex in its traditional (arguably heteronormative) sense. What Lewis suggests, and Rogers seems to agree with, is really just AI-assisted masturbation. But their insight is still useful. On reflection, there is a very real and urgent question about the way in which we deploy artificial intelligence. We need to think carefully about what we want it to replicate and what we want it to encourage.

Sex robots are the starting point for thinking seriously about artificial intelligence

It's worth noting that when discussing algorithms we end up looping back onto ourselves. Sex robots, algorithms, artificial intelligence - they're a problem insofar as they pose questions about what we really value as humans. They make us ask what we want to do with our time, and how we want to interact with other people. This is perhaps a way forward for anyone that builds or interacts with algorithms, whether they help you get off or find your next purchase. Consider what your algorithm is doing - what it's encouraging, storing, processing, substituting. We can't prepare for a future with artificial intelligence without seriously considering these things.


Best Machine Learning Datasets for beginners

Natasha Mathur
19 Sep 2018
13 min read
“It’s not who has the best algorithm that wins. It’s who has the most data” ~ Andrew Ng

If you look at the way machine learning algorithms were trained five or ten years ago and compare it with today, you will notice one huge difference: training machine learning algorithms is far more effective and efficient today than it was just a few years back. All credit goes to the hefty amount of data that is available to us today. But how does machine learning make use of this data?

Let's have a look at the definition of machine learning: "Machine Learning provides computers or machines the ability to automatically learn from experience without being explicitly programmed." Machines "learn from experience" when they're trained, and this is where data comes into the picture. How are they trained? Datasets! This is why it is so crucial that you feed these machines the right data for whatever problem you want them to solve.

Why do datasets matter in Machine Learning?

The simple answer is that machines, like humans, are capable of learning once they see relevant data. Where they differ from humans is the amount of data they need to learn from. You need to feed your machines enough data for them to do anything useful for you, which is why machines are trained using massive datasets.

We can think of machine learning data like survey data: the larger and more complete your sample size is, the more reliable your conclusions will be. If the data sample isn't large enough, it won't capture all the variations, and your machine may reach inaccurate conclusions, learn patterns that don't really exist, or fail to recognize patterns that do.

Datasets bring the data to you and train the model to perform various actions. They shape the algorithms to uncover relationships, detect patterns, understand complex problems, and make decisions. Apart from using datasets, it is equally important to make sure that you are using the right dataset, one that is in a useful format and comprises all the meaningful features and variations. After all, the system will ultimately do what it learns from the data. Feeding the right data into your machines also assures that the machine will work effectively and produce accurate results without human intervention.

For instance, training a speech recognition system on a textbook-English dataset will leave your machine struggling to understand anything but textbook English; loose grammar, foreign accents, and speech disorders would all be missed. For such a system, a dataset comprising the many variations in spoken language across speakers of different genders, ages, and dialects would be the right option. So keep in mind that the quality, variety, and quantity of your training data must not be compromised, as all these factors help determine the success of your machine learning models.

Top Machine Learning Datasets for Beginners

There are a lot of datasets available today for use in your ML applications, and it can be confusing, especially for a beginner, to determine which dataset is the right one for your project. It is better to use a dataset which can be downloaded quickly and doesn't take much work to adapt to your models. Further, always use standard datasets that are well understood and widely used. This lets you compare your results with others who have used the same dataset to see if you are making progress.
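To make the quantity, variety, and quantity points concrete, here is a minimal sketch using scikit-learn (an assumed dependency; any comparable library would do). It loads a small, widely used benchmark dataset, checks how the samples are spread across classes, and holds back a test set so results can be compared fairly with others using the same data.

# A hedged sketch (assumes scikit-learn is installed) of basic dataset checks:
# is there enough data, is it varied, and is a test set held back so results
# can be compared fairly against other work on the same dataset?
from collections import Counter

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # a small, widely used benchmark

print("Quantity:", X.shape[0], "samples with", X.shape[1], "features each")
print("Variety :", Counter(y))               # a heavily skewed split here would bias training

# Keep 25% aside; models should only be judged on data they never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)
print("Training on", len(X_train), "samples, evaluating on", len(X_test))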
You can pick the dataset you want to use depending on the type of your Machine Learning application. Here’s a rundown of easy and the most commonly used datasets available for training Machine Learning applications across popular problem areas from image processing to video analysis to text recognition to autonomous systems. Image Processing There are many image datasets to choose from depending on what it is that you want your application to do. Image processing in Machine Learning is used to train the Machine to process the images to extract useful information from it. For instance, if you’re working on a basic facial recognition application then you can train it using a dataset that has thousands of images of human faces. This is how Facebook knows people in group pictures. This is also how image search works in Google and in other visual search based product sites. Dataset Name Brief Description 10k US Adult Faces Database This database consists of 10,168 natural face photographs and several measures for 2,222 of the faces, including memorability scores, computer vision, and psychological attributes. The face images are JPEGs with 72 pixels/in resolution and 256-pixel height. Google's Open Images Open Images is a dataset of 9 million URLs to images which have been annotated with labels spanning over 6000 categories. These labels cover more real-life entities and the images are listed as having a Creative Commons Attribution license. Visual Genome This is a dataset of over 100k images densely annotated with numerous region descriptions ( girl feeding elephant), objects (elephants), attributes(large), and relationships (feeding). Labeled Faces in the Wild This database comprises more than 13,000 images of faces collected from the web. Each face is labeled with the name of the person pictured.   Fun and easy ML application ideas for beginners using image datasets: Cat vs Dogs: Using Cat and Stanford Dogs dataset to classify whether an image contains a dog or a cat. Iris Flower classification: You can build an ML project using Iris flower dataset where you classify the flowers in any of the three species. What you learn from this toy project will help you learn to classify physical attributes based content to build some fun real-world projects like fraud detection, criminal identification, pain management ( eg; ePAT which detects facial hints of pain using facial recognition technology), and so on. Hot dog - Not hot dog: Use the Food 101 dataset, to distinguish different food types as a hot dog or not. Who knows, you could end up becoming the next Emmy award nominee! Sentiment Analysis As a beginner, you can create some really fun applications using Sentiment Analysis dataset. Sentiment Analysis in Machine Learning applications is used to train machines to analyze and predict the emotion or sentiment associated with a sentence, word, or a piece of text. This is used in movie or product reviews often. If you are creative enough, you could even identify topics that will generate the most discussions using sentiment analysis as a key tool. Dataset Name Brief Description Sentiment140 A popular dataset, which uses 160,000 tweets with emoticons pre-removed Yelp Reviews An open dataset released by Yelp, contains more than 5 million reviews on Restaurants, Shopping, Nightlife, Food, Entertainment, etc. Twitter US Airline Sentiment Twitter data on US airlines starting from February 2015, labeled as positive, negative, and neutral tweets. 
Amazon reviews This dataset contains over 35 million reviews from Amazon spanning 18 years. Data include information on products, user ratings, and the plaintext review.   Easy and Fun Application ideas using Sentiment Analysis Dataset: Positive or Negative: Using Sentiment140 dataset in a model to classify whether given tweets are negative or positive. Happy or unhappy: Using Yelp Reviews dataset in your project to help machine figure out whether the person posting the review is happy or unhappy.   Good or Bad: Using Amazon Reviews dataset, you can train a machine to figure out whether a given review is good or bad. Natural Language Processing Natural language processing deals with training machines to process and analyze large amounts of natural language data. This is how search engines like Google know what you are looking for when you type in your search query. Use these datasets to make a basic and fun NLP application in Machine Learning: Dataset Name Brief Description Speech Accent Archive This dataset comprises 2140 speech samples from different talkers reading the same reading passage. These Talkers come from 177 countries and have 214 different native languages. Each talker is speaking in English. Wikipedia Links data This dataset consists of almost 1.9 billion words from more than 4 million articles. Search is possible by word, phrase or part of a paragraph itself. Blogger Corpus A dataset comprising 681,288 blog posts gathered from blogger.com. Each blog consists of minimum 200 occurrences of commonly used English words.   Fun Application ideas using NLP datasets: Spam or not: Using Spambase dataset, you can enable your application to figure out whether a given email is spam or not. Video Processing Video Processing datasets are used to teach machines to analyze and detect different settings, objects, emotions, or actions and interactions in videos. You’ll have to feed your machine with a lot of data on different actions, objects, and activities. Dataset Name Brief Description UCF101 - Action Recognition Data Set This dataset comes with 13,320 videos from 101 action categories. Youtube 8M YouTube-8M is a large-scale labeled video dataset. It contains millions of YouTube video IDs, with high-quality machine-generated annotations from a diverse vocabulary of 3,800+ visual entities.   Fun Application ideas using video processing dataset: Action detection: Using UCF101 - Action Recognition DataSet, or Youtube 8M, you can train your application to detect the actions such as walking, running etc, in a video. Speech Recognition Speech recognition is the ability of a machine to analyze or identify words and phrases in a spoken language. Feed your machine with the right and good amount of data, and it will help it in the process of recognizing speech. Combine speech recognition with natural language processing, and get Alexa who knows what you need. Dataset Name Brief Description Gender Recognition by Voice and speech analysis This database identifies a voice as male or female, depending on the acoustic properties of voice and speech. The dataset contains 3,168 recorded voice samples, collected from male and female speakers. Human Activity Recognition w/Smartphone Human Activity Recognition database consists of recordings of 30 subjects performing activities of daily living (ADL) while carrying a smartphone ( Samsung Galaxy S2 ) on the waist. TIMIT TIMIT provides speech data for acoustic-phonetic studies and for the development of automatic speech recognition systems. 
It comprises broadband recordings of 630 speakers of eight major dialects of American English, each reading ten phonetically rich sentences, phonetic and word transcriptions. Speech Accent Archive This dataset contains 2140 speech samples, each from a different talker reading the same reading passage. Talkers come from 177 countries and have 214 different native languages. Each talker is speaking in English.   Fun Application ideas using Speech Recognition dataset: Accent detection: Use Speech Accent Archive dataset, to make your application identify different accents from a given sample of accents. Identify the activity: Use Human Activity Recognition w/Smartphone dataset to help your application detect the human activity. Natural Language Generation Natural Language generation refers to the ability of machines to simulate the human speech. It can be used to translate written information into aural information or assist the vision-impaired by reading out aloud the contents of a display screen. This is how Alexa or Siri respond to you. Dataset Name Brief Description Common Voice by Mozilla Common Voice dataset contains speech data read by users on the Common Voice website from a number of public sources like user-submitted blog posts, old books, movies, etc. LibriSpeech This dataset consists of nearly 500 hours of clean speech of various audiobooks read by multiple speakers, organized by chapters of the book with both the text and the speech.   Fun Application ideas using Natural Language Generation dataset: Converting text into Audio: Using Blogger Corpus dataset, you can train your application to read out loud the posts on blogger. Autonomous Driving Build some basic self-driving Machine Learning Applications. These Self-driving datasets will help you train your machine to sense its environment and navigate accordingly without any human interference. Autonomous cars, drones, warehouse robots, and others use these algorithms to navigate correctly and safely in the real world. Datasets are even more important here as the stakes are higher and the cost of a mistake could be a human life. Dataset Name Brief Description Berkeley DeepDrive BDD100k This is one of the largest datasets for self-driving AI currently. It comprises over 100,000 videos of over 1,100-hour driving experiences across different times of the day and weather conditions. Baidu Apolloscapes Large dataset consisting of 26 different semantic items such as cars, bicycles, pedestrians, buildings, street lights, etc. Comma.ai This dataset consists of more than 7 hours of highway driving. It includes details on car’s speed, acceleration, steering angle, and GPS coordinates. Cityscape Dataset This is a large dataset that contains recordings of urban street scenes in 50 different cities. nuScenes This dataset consists of more than 1000 scenes with around 1.4 million image, 400,000 sweeps of lidars (laser-based systems that detect the distance between objects), and 1.1 million 3D bounding boxes ( detects objects with a combination of RGB cameras, radar, and lidar).   Fun Application ideas using Autonomous Driving dataset: A basic self-driving application: Use any of the self-driving datasets mentioned above to train your application with different driving experiences for different times and weather conditions.   IoT Machine Learning in building IoT applications is on the rise these days. 
Now, as a beginner in Machine Learning, you may not have advanced knowledge on how to build these high-performance IoT applications using Machine Learning, but you certainly can start off with some basic datasets to explore this exciting space. Dataset Name Brief Description Wayfinding, Path Planning, and Navigation Dataset This dataset consists of samples of trajectories in an indoor building (Waldo Library at Western Michigan University) for navigation and wayfinding applications. ARAS Human Activity Dataset This dataset is a Human activity recognition Dataset collected from two real houses. It involves over 26 millions of sensor readings and over 3000 activity occurrences.   Fun Application ideas using IoT dataset: Wearable device to track human activity: Use the ARAS Human Activity Dataset to train a wearable device to identify human activity. Read Also: 25 Datasets for Deep Learning in IoT Once you’re done going through this list, it’s important to not feel restricted. These are not the only datasets which you can use in your Machine Learning Applications. You can find a lot many online which might work best for the type of Machine Learning Project that you’re working on. Some popular sources of a wide range of datasets are Kaggle,  UCI Machine Learning Repository, KDnuggets, Awesome Public Datasets, and Reddit Datasets Subreddit. With all this information, it is now time to use these datasets in your project. In case you’re completely new to Machine Learning, you will find reading, ‘A nonprogrammer’s guide to learning Machine learning’quite helpful. Regardless of whether you’re a beginner or not, always remember to pick a dataset which is widely used, and can be downloaded quickly from a reliable source. How to create and prepare your first dataset in Salesforce Einstein Google launches a Dataset Search Engine for finding Datasets on the Internet Why learn machine learning as a non-techie?
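As a concrete starting point, the Iris flower classification idea mentioned above can be prototyped in a handful of lines. The sketch below assumes scikit-learn is installed and uses the copy of the Iris dataset bundled with the library, so nothing needs to be downloaded; the k-nearest-neighbours classifier is just one reasonable baseline among many.

# Hedged sketch of the Iris classification starter project described above.
# Assumes scikit-learn is installed; the Iris data ships with the library.
from sklearn.datasets import load_iris
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0, stratify=iris.target
)

# A k-nearest-neighbours classifier is an easy-to-explain baseline for this data.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

predictions = clf.predict(X_test)
print(classification_report(y_test, predictions, target_names=iris.target_names))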

What is PyTorch and how does it work?

Sunith Shetty
18 Sep 2018
7 min read
PyTorch is a Python-based scientific computing package that uses the power of graphics processing units. It is also one of the preferred deep learning research platforms, built to provide maximum flexibility and speed. It is known for two high-level features: tensor computation with strong GPU acceleration, and the ability to build deep neural networks on a tape-based autograd system. There are many existing Python libraries with the potential to change how deep learning and artificial intelligence are performed, and this is one such library. One of the key reasons behind PyTorch's success is that it is completely Pythonic, so you can build neural network models effortlessly. It is still a young player compared to its competitors; however, it is gaining momentum fast.

A brief history of PyTorch

Since its release in January 2016, researchers have increasingly adopted PyTorch. It has quickly become a go-to library because of the ease with which it lets you build extremely complex neural networks, and it is giving TensorFlow tough competition, especially in research work. However, there is still some time before it is adopted by the masses, owing to its "new" and "under construction" tags.

PyTorch's creators envisioned the library to be highly imperative, allowing them to run numerical computations quickly. This methodology fits perfectly with the Python programming style. It allows deep learning scientists, machine learning developers, and neural network debuggers to run and test parts of their code in real time, so they don't have to wait for the entire program to execute to check whether it works. You can always use your favorite Python packages, such as NumPy, SciPy, and Cython, to extend PyTorch's functionality when required.

Now you might ask: why PyTorch? What's so special about using it to build deep learning models? The answer is quite simple: PyTorch is a dynamic library (very flexible, usable as your requirements change) which is currently adopted by many researchers, students, and artificial intelligence developers. In a recent Kaggle competition, the PyTorch library was used by nearly all of the top 10 finishers.

Some of the key highlights of PyTorch include:

Simple interface: It offers an easy-to-use API, so it is very simple to operate and run, just like Python.
Pythonic in nature: Being Pythonic, the library smoothly integrates with the Python data science stack and can leverage all the services and functionality offered by the Python environment.
Computational graphs: PyTorch provides a platform for dynamic computational graphs, which you can change during runtime. This is highly useful when you have no idea how much memory will be required for creating a neural network model.

PyTorch Community

The PyTorch community is growing in numbers on a daily basis. In just a short year and a half, it has shown a great amount of development, leading to its citation in many research papers and groups. More and more people are bringing PyTorch into their artificial intelligence research labs to build quality-driven deep learning models. The interesting fact is that PyTorch is still in an early-release beta, but the pace at which everyone is adopting this deep learning framework shows its real potential and power in the community.
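Before looking at the community numbers, here is a minimal sketch of the two headline features mentioned above: tensor computation with optional GPU acceleration, and the tape-based autograd system. It assumes PyTorch is installed; the tensor sizes are arbitrary placeholders.

# Minimal sketch (assumes PyTorch is installed) of the two headline features:
# tensors with optional GPU acceleration, and tape-based automatic differentiation.
import torch

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensor computation: a batch of 64 samples with 100 features each.
x = torch.randn(64, 100, device=device)
w = torch.randn(100, 10, device=device, requires_grad=True)

# Forward pass: an ordinary matrix multiplication, recorded on the autograd tape.
logits = x @ w
loss = logits.pow(2).mean()

# Backward pass: gradients of the loss with respect to w are filled in automatically.
loss.backward()
print(loss.item(), w.grad.shape)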
Even though it is in the beta release, there are 741 contributors on the official GitHub repository working on enhancing and providing improvements to the existing PyTorch functionalities. PyTorch doesn’t limit to specific applications because of its flexibility and modular design. It has seen heavy use by leading tech giants such as Facebook, Twitter, NVIDIA, Uber and more in multiple research domains such as NLP, machine translation, image recognition, neural networks, and other key areas. Why use PyTorch in research? Anyone who is working in the field of deep learning and artificial intelligence has likely worked with TensorFlow before, Google’s most popular open source library. However, the latest deep learning framework - PyTorch solves major problems in terms of research work. Arguably PyTorch is TensorFlow’s biggest competitor to date, and it is currently a much favored deep learning and artificial intelligence library in the research community. Dynamic Computational graphs It avoids static graphs that are used in frameworks such as TensorFlow, thus allowing the developers and researchers to change how the network behaves on the fly. The early adopters are preferring PyTorch because it is more intuitive to learn when compared to TensorFlow. Different back-end support PyTorch uses different backends for CPU, GPU and for various functional features rather than using a single back-end. It uses tensor backend TH for CPU and THC for GPU. While neural network backends such as THNN and THCUNN for CPU and GPU respectively. Using separate backends makes it very easy to deploy PyTorch on constrained systems. Imperative style PyTorch library is specially designed to be intuitive and easy to use. When you execute a line of code, it gets executed thus allowing you to perform real-time tracking of how your neural network models are built. Because of its excellent imperative architecture and fast and lean approach it has increased overall PyTorch adoption in the community. Highly extensible PyTorch is deeply integrated with the C++ code, and it shares some C++ backend with the deep learning framework, Torch. Thus allowing users to program in C/C++ by using an extension API based on cFFI for Python and compiled for CPU for GPU operation. This feature has extended the PyTorch usage for new and experimental use cases thus making them a preferable choice for research use. Python-Approach PyTorch is a native Python package by design. Its functionalities are built as Python classes, hence all its code can seamlessly integrate with Python packages and modules. Similar to NumPy, this Python-based library enables GPU-accelerated tensor computations plus provides rich options of APIs for neural network applications. PyTorch provides a complete end-to-end research framework which comes with the most common building blocks for carrying out everyday deep learning research. It allows chaining of high-level neural network modules because it supports Keras-like API in its torch.nn package. PyTorch 1.0: The path from research to production We have been discussing all the strengths PyTorch offers, and how these make it a go-to library for research work. However, one of the biggest downsides is, it has been its poor production support. But this is expected to change soon. PyTorch 1.0 is expected to be a major release which will overcome the challenges developers face in production. 
This new iteration of the framework will merge Python-based PyTorch with Caffe2 allowing machine learning developers and deep learning researchers to move from research to production in a hassle-free way without the need to deal with any migration challenges. The new version 1.0 will unify research and production capabilities in one framework thus providing the required flexibility and performance optimization for research and production. This new version promises to handle tasks one has to deal with while running the deep learning models efficiently on a massive scale. Along with the production support, PyTorch 1.0 will have more usability and optimization improvements. With PyTorch 1.0, your existing code will continue to work as-is, there won’t be any changes to the existing API. If you want to stay updated with all the progress to PyTorch library, you can visit the Pull Requests page. The beta release of this long-awaited version is expected later this year. Major vendors like Microsoft and Amazon are expected to provide complete support to the framework across their cloud products. Summing up, PyTorch is a compelling player in the field of deep learning and artificial intelligence libraries, exploiting its unique niche of being a research-first library. It overcomes all the challenges and provides the necessary performance to get the job done. If you’re a mathematician, researcher, student who is inclined to learn how deep learning is performed, PyTorch is an excellent choice as your first deep learning framework to learn. Read more Can a production-ready Pytorch 1.0 give TensorFlow a tough time? A new geometric deep learning extension library for Pytorch releases! Top 5 tools for reinforcement learning
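To round off, here is a hedged sketch of the imperative, define-by-run style and the Keras-like torch.nn API discussed earlier. It assumes PyTorch is installed; the layer sizes and the randomly generated data are placeholders rather than a real benchmark.

# Hedged sketch of the imperative style and the Keras-like torch.nn API
# discussed above. Assumes PyTorch is installed; the data is random
# placeholder data, not a real benchmark.
import torch
from torch import nn

# Chain high-level modules, much like stacking layers in Keras.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(128, 20)
targets = torch.randint(0, 2, (128,))

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)  # the graph is built on the fly, each step
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())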

Is your Enterprise Measuring the Right DevOps Metrics?

Guest Contributor
17 Sep 2018
6 min read
As of 2018, 17% of the companies worldwide have fully adopted DevOps while 14% are still in the consideration stage. Amazon, Netflix and Target are few of the companies that have attained success with DevOps. Amazon’s move to Amazon Web Services resulted in their ability to scale their capacity up or down as needed for the servers, thus allowing their engineers to deploy their own code to the server whenever they wanted to. This resulted in continuous deployment, thus reducing the duration as well as number of outages experienced by the companies using AWS. Netflix used DevOps to improve their cloud infrastructure and to ensure smooth streaming of videos online. When you say “we have adopted DevOps in your Enterprise”, what do you really mean? It means you have adopted a software philosophy that integrates software development and operations, thus reducing the time to market your end product. The questions which come next are: How do you measure the true success of DevOps in your organization? Have you been working on the right metrics all along? Let’s talk about first measuring DevOps in organizations. It is all about uptime, transactions per second, bugs fixed, the commits and other operational as well as productivity metrics. This is what most organizations tend to look at as metrics, when you talk about DevOps. But are these the Right DevOps Metrics? For a while, companies have been working on a set of metrics, discussed above, to determine the success of the DevOps. However, these are not the right metrics, and should not be considered. A metric is an indicator of the performance of the DevOps, and not every single indicator will determine the success. Your metrics might differ based on the data you collect. You would end up collecting large volumes of data; however, not every data available can be converted into a metric. Here’s how you can determine the metrics for your DevOps. Avoid using too many metrics You should, at the most, use 10 metrics. We suggest using less than 10 in fact. The fewer the metrics used, the better your judgment would be. You should broaden your perspective when choosing the metrics. It is important to choose metrics that account for the overall organizational health, and don’t just take into consideration the operational and development data. Metrics that connect with your organization What is the ultimate aim for your organization? How would you determine your organization is successful? The answer to these questions will help you determine the metrics. Most organizations determine their success based on customer experience and the overall operational efficiency. You will need to choose metrics that help you determine these two values. Tie the metrics to your goals As a businessperson, you are more concerned with customer attrition, bad feedback and non-returning customers than the lines of code that goes into creating a successful software product. You will need to tie your DevOps success metrics to these goals. While you are concerned about the failure of your website or the downtime, the true concern is the customer’s abandonment of your website. Causes that affect the DevOps While the business metrics will help you measure the success to a certain extent, there are certain things that affect the operations and development teams. You will need to check these causes, and go to the root to understand how it affects the DevOps teams  and what needs to be done to create a balance between the development and operational teams. 
Next, we will talk about the actual DevOps metrics that you should take into consideration when deriving value for your organization and measuring the success. The Velocity With most of the enterprise elements being automated, velocity is one of the most important metrics that will determine the success of your DevOps. The idea is to get the updates out to the users in the quickest and fastest way possible, without compromising on security or reliability. You stay competitive, offer new features and boost customer retention. The two variables that help measure this tangible metric include deployment frequency and deployment lead time. The former measures the frequency of releases and the latter measures the speed at which the team commits a code and pushes forth the update. Service Quality Service quality directly impacts the goals set forth by the organization, and is intangible. The idea is to maintain the service quality throughout the releases and  changes made to the application. The variables that determine this metric include change failure rate, number of support tickets and MTTR (Mean time to recovery). When you release an update, and that leads to an error or fault in the application, it is the change failure rate. In case there are bugs or performance issues in your releases, and these are being reported, then the variable number of support tickets or errors comes into existence. MTTR is the variable that measures the number of issues resolved and the time taken to solve them. The idea is to be more responsive to the problems faced by the customers. User Experience This is the final metric that impacts the success of your DevOps. You need to check if all the features and updates you have insisted upon are in sync with the user needs. The variables that are concerned with measuring this aspect include feature usage and business impact. You will need to check how many people from the target audience are using the new feature update you have released, and determine their personas. You can check the number of sessions, completed transactions and duration of the session to quantify the number of people. Check their profiles to get their personas.. Planning your DevOps strategy It is not easy to roll out DevOps in your organization, and expect agility immediately. You need to have a perfect strategy, align it to your business goals, and determine the effective DevOps metrics to determine the success of your roll out. Planning is of essence for a thorough roll out of DevOps. It is important to consider every data, when you have DevOps in your organization. Make sure you store and analyze every data, and use the data that suits the DevOps metrics you have determined for success. It is important that the DevOps metrics are aligned to your business goals and the objectives you have defined. About Author: Vishal Virani is a Founder and CEO of Coruscate Solutions, a mobile app development company. He enjoys writing about technology, mobile apps, custom web development and latest industry trends.
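The velocity and service-quality variables described above can be computed directly from deployment and incident records. Below is a hedged sketch in Python; the record format is invented for illustration and is not the schema of any particular tool.

# Hedged sketch: computing the DevOps metrics described above from simple
# deployment and incident records. The record format is illustrative only.
from datetime import datetime, timedelta

deployments = [
    # (committed_at, deployed_at, caused_failure)
    (datetime(2018, 9, 1, 9, 0), datetime(2018, 9, 1, 15, 0), False),
    (datetime(2018, 9, 3, 10, 0), datetime(2018, 9, 4, 11, 0), True),
    (datetime(2018, 9, 6, 8, 0), datetime(2018, 9, 6, 12, 0), False),
]
incidents = [
    # (detected_at, resolved_at)
    (datetime(2018, 9, 4, 11, 30), datetime(2018, 9, 4, 13, 0)),
]

period_days = 7
deployment_frequency = len(deployments) / period_days
lead_times = [deployed - committed for committed, deployed, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
change_failure_rate = sum(1 for _, _, failed in deployments if failed) / len(deployments)
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

print(f"Deployments per day: {deployment_frequency:.2f}")
print(f"Average lead time:   {avg_lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"MTTR:                {mttr}")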

What the Future Holds for IT Support Companies

Guest Contributor
16 Sep 2018
6 min read
In the technological era, many industries are finding new ways to carve out space for themselves. From healthcare to hospitality, rapid developments in tech have radically changed the way we do business, and adaptation is a must. The information technology industry holds a unique position in the world. With the changing market, IT support companies must also adapt to new technology more quickly than anyone else to ensure competitiveness. Decreased Individual Support, Increased Organizational Support Individual, discrete tech users are requiring less and less IT support than ever before. Every day, the human race produces 2.5 quintillion bytes of data – a staggering amount of information that no individual could possibly consume within a lifetime. That rate is increasing every day. With such widespread access to information, individuals are now able to find solutions with unrivaled ease and speed and the need for live, person-to-person support has decreased. Adding to the figure is the growing presence of younger generations who have grown up in a world saturated with technology. Children born in the 2000’s and later have never seen a world without smartphones, Bluetooth, and the World Wide Web. Development alongside technology not only implants a level of comfort with using technology but also with adapting to its constant changes. For the newest cohort of young adults, troubleshooting is no longer a professional task, but a household one. Alternatively, businesses require just as much support as ever. The accelerating pace of software development has opened up a new world of opportunity for organizations to optimize data management, customer support, marketing, finance, and more. But it’s also created a new, highly-competitive market where late-adopters run the risk of falling hard. Adapting to and using new information technology systems takes a highly-organized and knowledgeable support team, and that role is increasingly being outsourced outside organization walls. Companies like CITC are stepping up to provide more intensive expertise to businesses that may have, in the past, utilized in-house teams to manage IT systems.Source:Unsplash Improving Customer Service While individual tech users may need increasingly less company-provided support, they continue to expect increasingly better service. Likewise, organizations expect more from their IT support companies – faster incident response, simplified solutions, and the opportunity to troubleshoot on-the-spot. In response to these demands, IT support organizations are experimenting with modern models of support. Automation and Prediction-Based Support with Artificial Intelligence Artificial intelligence has become a part of everyday life for much of the world. From Google Assistant and Siri to self-driving cars, AI has offered a myriad of tools for streamlining daily tasks and simplifying our lives. It’s no surprise that developers are looking for ways to integrate Artificial Intelligence into IT support as well. Automated responses and prediction-based support offer a quick, cost-effective option that allows IT professionals to spend their time on tasks that require a more nuanced approach. However, automation of IT support comes with its own set of problems. First, AI lacks a human touch. Trying to discuss a potentially imminent problem with a bot can be a recipe for frustration, and the unreliability of problem self-reporting poses the additional risk of ineffective or inappropriate solutions. 
Automation also poses security risks which can be especially hazardous to industries with valuable trade secrets. Community-Based Support A second shift that has begun to take place in the IT support industry is a move towards community-based support. Crowdsourced solutions, like AI, can help carry some of the burden of smaller problems by allowing staff to focus energy on more pressing tasks. Unlike automated solutions, however, community-based support allows for human-to-human interaction and more seamless, collaborative, feedback-based troubleshooting. However, community-based support has limited applications. Crowdsourced solutions, contrary to the name, are often only the work of a few highly-qualified individuals. The turnaround for answers from a qualified source can be extensive, which is unacceptable under many circumstances. Companies offering a community-based support platform must moderate contributions and fact-check solutions, which can end up being nearly as resource intensive as traditional support services. While automation and community-based support offer new alternatives to traditional IT support for individual tech users, organizational support requires a much different approach. Organizations that hire IT support companies expect expert solutions, and dedicated staffing are a must. The Shift: Management to Adoption Software is advancing at a rapid pace. In the first quarter of 2018, there were 3,800,000 apps available for Android users and 2,000,000 for Apple. The breadth of enterprise software is just as large, with daily development in all industry sectors. Businesses are no longer able to adopt a system and stick with it - they must constantly adapt to new, better technology and seek out the most innovative solutions to compete in their industries. Source: Unsplash IT support companies must also increasingly dedicate themselves to this role. IT support is no longer a matter of technology management, but of adaptation and adoption. New IT companies bring a lot to the table here. Much like the newer generations of individual users, new companies can offer a fresh perspective on software developments and a more fluid ability to adapt to changes in the market. Older IT companies also have plenty to offer, however. Years of traditional support experience can be a priceless asset that inspires confidence from clients. All IT support organizations should focus on being able to adapt to the future of technology. This can be done by increasingly making rapid changes in software, an increasing reliance on digital technology, and transitioning to digital resource management. Additionally, support companies must be able to provide innovative solutions that strike an effective balance between automation and interaction. Ultimately, companies must realize that the future is already here and that traditional methods of service are no longer adequate for the changing landscape of tech. In Conclusion The world of technology is changing and, with it, the world of IT support. Support needs are rapidly shifting from customer service to enterprise support, and IT support companies must adapt to serve the needs of all industry sectors. Companies will need to find innovative solutions to not only provide management of technology but allow companies to adapt seamlessly into new technological advancements. This article is written by a student of New York City College of Technology. 
New York City College of Technology is a baccalaureate and associate degree-granting institution committed to providing broad access to high quality technological and professional education for a diverse urban population.

Defending your business from the next wave of cyberwar: IoT Threats

Guest Contributor
15 Sep 2018
6 min read
There’s no other word for the destabilization of another nation through state action other than war -- even if it’s done with ones and zeros. Recent indictments of thirteen Russians and three Russian companies tampering with US elections is a stark reminder. Without hyperbole it is safe to say that we are in the throes of an international cyber war and the damage is spreading massively over the corporate economy. Reports have reached a fever pitch and the costs globally are astronomical. According to Cybersecurity Ventures, damage related to cybercrime in general is projected to hit $6 trillion annually by 2021. Over the past year, journalists for many news agencies have reported credible studies regarding the epidemic of state sponsored cyber attacks. Wired and The Washington Post among many others have outlined threats that have reached the US energy grid and other elements of US infrastructure. However, the cost to businesses is just as devastating. While many attacks have been government targeted, businesses are increasingly at risk from state sponsored cyber campaigns. A recent worldwide threat assessment from the US Department of Justice discusses several examples of state-sponsored cyber attacks that affect commercial entities including diminishing trust from consumers, ransomware proliferation, IoT threats, the collateral damage from disruptions of critical infrastructure, and the disruption of shipping lanes. How Cyberwar Affects Us on a Personal Level An outcome of cyberwarfare that isn’t usually considered, but a large amount of damage is reflected in human capital. This can be found in the undermining of consumer and employee confidence in the ability of a company to protect data. According to a recent study examining how Americans feel about internet privacy in 2018, 51% of respondents said their main concern was online threats stealing their information, and over a quarter listed that they were particularly concerned about companies collecting/sharing their personal data. This kind of consumer fear is justified by a seeming lack of ability of companies to protect the data of individuals. Computing and quantitative business expert Dr. Benjamin Silverstone points out that recent cyber-attacks focus on the information of consumers (rather than other confidential documentation or state secrets which may have greater protection). Silverstone says, “Rather than blaming the faceless cyber-criminals, consumers will increasingly turn to the company that is being impersonated to ask how this sort of thing could happen in the first place. The readiness to share details online, even with legitimate companies, is being affected and this will damage their business in the long term.” So, how can businesses help restore consumer confidence? You should: Increase your budget toward better cybercrime solutions and tell your consumers about it liberally. Proven methods include investing in firewalls with intrusion prevention tools, teaching staff how to detect and avoid malware software, and enforcing strict password protocols to bolster security. Invest in two-factor authorization so that consumers feel safer when accessing your product Educate your consumer base -- it is equally important that everyone be more aware when it comes to cyber attack. Give your consumers regular updates about suspected scams and send tips and tricks on password safety. Ransomware and Malware Attacks CSO Online reports that ransomware damage costs exceeded $5 billion in 2017, 15 times the cost in 2015. 
Accordingly, Cybersecurity Ventures says that costs from ransomware attacks will rise to $11.5 billion next year. In 2019, they posit, a business will fall victim to a ransomware attack every 14 seconds. But is This International Warfare? The North Korean government’s botnet has been shown to be able to pull off DDoS attacks and is linked to the wannacry ransomware attack. In 2017, over 400,000 machines were infected by the wannacry virus, costing companies  over $4 Billion in over 150 countries. To protect yourself from ransomware attacks: Back up your data often and store in non-networked spaces or on the cloud. Ransomware only works if there is a great deal of data that is at risk. Encrypt whatever you can and keep firewalls/two-factor authorization in place wherever possible. Keep what cyber experts call the  “crown jewels” (the top 5% most important and confidential documents) on a dedicated computer with very limited access. The Next Wave of Threat - IoT IoT devices make mundane tasks like scheduling or coordination more convenient. However, proliferation of these devices create cybersecurity risk. Companies are bringing in devices like printers and coffee makers that are avenues for hackers to enter a network.   Many experts point to IoT as their primary concern. A study from shared assessment found that 97% of IT respondents felt that unsecured IoT devices could cause catastrophic levels of damage to their company. However, less than a third of the companies represented reported thorough monitoring of the risks associated with third-party technology. Here’s a list of how to protect yourself from IoT threats: Evaluate what data IoT devices are accumulating and limit raw storage. Create policies regarding anonymizing user data as much as possible. Apply security patches to any installed IoT device. This can be as simple as making sure you change the default password. Vet your devices - make sure you are buying from sources that (you believe) will be around a long time. If the business you purchase your IoT device from goes under, they will stop updating safety protocols. Make a diversified plan, just in case major components of your software set up are compromised. While we may not be soldiers, a war is currently on that affects us all and everyone must be vigilant. Ultimately, communication is key. Consumers rely on businesses to protect them from individual attack. These are individuals who are more likely to remain your customers if you can demonstrate how you are maneuvering to respond to global threats. About the author           Zach is a freelance writer who likes to cover all things tech. In particular, he enjoys writing about the influence of emerging technologies on both businesses and consumers. When he's not blogging or reading up on the latest tech trend, you can find him in a quiet corner reading a good book, or out on the track enjoying a run. New cybersecurity threats posed by artificial intelligence Top 5 cybersecurity trends you should be aware of in 2018 Top 5 cybersecurity myths debunked  
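As one small, concrete illustration of the "encrypt whatever you can" advice above, here is a hedged sketch that uses Python's third-party cryptography package (an assumed dependency; any vetted encryption library would work) to encrypt a backup before it leaves the machine. Key management is deliberately simplified.

# Hedged sketch of the "encrypt whatever you can" advice above, using the
# third-party `cryptography` package (assumed installed). Key handling is
# deliberately simplified; in practice the key must live apart from the
# backups, for example in a dedicated secrets manager.
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safe (NOT next to the backup).
key = Fernet.generate_key()
fernet = Fernet(key)

backup_bytes = b"customer_id,email\n42,jane@example.com\n"  # placeholder data

encrypted = fernet.encrypt(backup_bytes)
with open("backup.enc", "wb") as f:
    f.write(encrypted)

# Restoring requires the key; without it the file is useless to an attacker.
with open("backup.enc", "rb") as f:
    restored = fernet.decrypt(f.read())
assert restored == backup_bytes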

What makes functional programming a viable choice for artificial intelligence projects?

Prasad Ramesh
14 Sep 2018
7 min read
The most common programming languages currently used for AI and machine learning development are Python, R, Scala, and Go, among others, with the latest addition being Julia. Functional languages as old as Lisp and Haskell were used to implement machine learning algorithms decades ago, when AI was an obscure research area of interest; back then, hardware and software were simply not advanced enough for practical implementations. Some commonalities in all of the above language options are that they are simple to understand and promote clarity. They use fewer lines of code and lend themselves well to the functional programming paradigm.

What is Functional programming?

Functional programming is a programming approach that uses logical functions or procedures within its programming structure. It means that the programming is done with expressions instead of statements. In a functional programming (FP) approach, computations are treated as evaluations of mathematical functions, and the code mostly deals with immutable data. In other words, the state of the program does not change, the functions or procedures are fixed, and the output value of a function depends solely on the arguments passed to it. Let's look at the characteristics of a functional programming approach before we see why they are well suited to developing artificial intelligence programs.

Functional programming features

Before we see functional programming in a machine learning context, let's look at some of its characteristics.

Immutable: If a variable x is declared and used in the program, its value is never changed later anywhere in the program. Each time the variable x is used, it returns the value originally assigned to it. This is straightforward and eliminates the need to reason about state changes throughout the program.
Referential transparency: An expression or computation always results in the same value in any part or context of the program. Programs in a referentially transparent language can be manipulated like algebraic equations.
Lazy evaluation: Because computations are referentially transparent, they yield the same result irrespective of when they are performed. This makes it possible to postpone the computation of values until they are actually required, that is, to evaluate them lazily. Lazy evaluation helps avoid unnecessary computation and saves memory.
Parallel programming: Since there is no state change thanks to immutable variables, the functions in a functional program can run in parallel. Parallel loops can be expressed easily and with good reusability.
Higher-order functions: A higher-order function can take one or more functions as arguments and may also return a function as its result. Higher-order functions are useful for refactoring code and reducing repetition. The map function found in many programming languages is an example of a higher-order function.

What kind of programming is good for AI development?

Machine learning is a sub-domain of artificial intelligence that deals with making predictions from data, taking actions without being explicitly programmed, recommendation systems, and so on. Any programming approach that focuses on logic and mathematical functions is a good fit for artificial intelligence (AI). Once the data is collected and prepared, it is time to build your machine learning model. This typically entails choosing a model, then training and testing the model with the data.
Once the desired accuracy/results are achieved, then the model is deployed. Training on the data requires data to be consistent and the code to be able to communicate directly with the data without much abstraction for least unexpected errors. For AI programs to work well, the language needs to have a low level implementation for faster communication with the processor. This is why many machine learning libraries are created in C++ to achieve fast performance. OOP with its mutable objects and object creation is better suited for high-level production software development, not very useful in AI programs which works with algorithms and data. As AI is heavily based on math, languages like Python and R are widely used languages in AI currently. R lies more towards statistical data analysis but does support machine learning and neural network packages. Python being faster for mathematical computations and with support for numerical packages is used more commonly in machine learning and artificial intelligence. Why is functional programming good for artificial intelligence? There are some benefits of functional programming that make it suitable for AI. It is closely aligned to mathematical thinking, and the expressions are in a format close to mathematical definitions. There are few or no side-effects of using a functional approach to coding, one function does not influence the other unless explicitly passed. This proves to be great for concurrency, parallelization and even debugging. Less code and more consistency The functional approach uses fewer lines of code, without sacrificing clarity. More time is spent in thinking out the functions than actually writing the lines of code. But the end result is more productivity with the created functions and easier maintenance since there are fewer lines of code. AI programs consist of lots of matrix multiplications. Functional programming is good at this just like GPUs. You work with datasets in AI with some algorithms to make changes in the data to get modified data. A function on a value to get a new value is similar to what functional programming does. It is important for the variables/data to remain the same when working through a program. Different algorithms may need to be run on the same data and the values need to be the same. Immutability is well-suited for that kind of job. Simple approach, fast computations The characteristics/features of functional programming make it a good approach to be used in artificial intelligence applications. AI can do without objects and classes of an object oriented programming (OOP) approach, it needs fast computations and expects the variables to be the same after computations so that the operations made on the data set are consistent. Some of the popular functional programming languages are R, Lisp, and Haskell. The latter two are pretty old languages and are not used very commonly. Python can be used as both, functional and object oriented. Currently, Python is the language most commonly used for AI and machine learning because of its simplicity and available libraries. Especially the scikit-learn library provides support for a lot of AI-related projects. FP is fault tolerant and important for AI Functional programming features make programs fault tolerant and fast for critical computations and rapid decision making. As of now, there may not be many such applications but think of the future, systems for self-driving cars, security, and defense systems. Any fault in such systems would have serious effects. 
Immutability makes the system more reliable, lazy evaluation helps conserve memory, parallel programming makes the system faster. The ability to pass a function as an argument saves a lot of time and enables more functionality. These features of functional programming make it a fitting choice for artificial intelligence. To further understand why use functional programming for machine learning, read the case made for using the functional programming language Haskell for AI in the Haskell Blog. Why functional programming in Python matters: Interview with best selling author, Steven Lott Grain: A new functional programming language that compiles to Webassembly 8 ways Artificial Intelligence can improve DevOps
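To ground the discussion, here is a hedged sketch in plain Python of the functional style described above applied to a toy dataset: pure functions, a higher-order function, lazy evaluation via map, and no mutation of the original data. The data and the scaling step are invented for illustration.

# Hedged sketch of the functional style described above, applied to a toy
# dataset in plain Python: pure functions, a higher-order function, lazy
# evaluation via map, and no mutation of the input data.
from functools import reduce

samples = ((1.0, 2.0), (3.0, 4.0), (5.0, 6.0))  # immutable tuples, never changed


def scale(factor):
    """Higher-order: returns a new pure function instead of mutating state."""
    return lambda row: tuple(value * factor for value in row)


def mean_of_first_feature(rows):
    total, count = reduce(
        lambda acc, row: (acc[0] + row[0], acc[1] + 1), rows, (0.0, 0)
    )
    return total / count


# map() is lazy: nothing is computed until the values are actually consumed.
scaled = map(scale(0.5), samples)

print(mean_of_first_feature(scaled))  # 1.5
print(samples)                        # the original data is untouched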

How AI is going to transform the Data Center

Melisha Dsouza
13 Sep 2018
7 min read
According to Gartner analyst Dave Cappuccio, 80% of enterprises will have shut down their traditional data centers by 2025, compared to just 10% today. The figures are fitting considering the host of problems faced by traditional data centers, and the solution to these problems lies right in front of us: incorporating intelligence into traditional data centers. To support this claim, Gartner also predicts that by 2020 more than 30 percent of data centers that fail to implement AI and machine learning will cease to be operationally and economically viable.

Across the globe, data science and AI are influencing the design and development of modern data centers. With the amount of data surging every day, traditional data centers will eventually slow down and deliver inefficient output. By utilizing AI in ingenious ways, data center operators can drive efficiency up and costs down. A fitting example is the tier-two automated control system implemented at Google to cool its data centers autonomously. The system makes all the cooling-plant tweaks on its own, continuously and in real time, saving up to 30% of the plant's energy annually.

(Image source: Data Center Knowledge)

AI has enabled data center operators to add more workloads on the same physical silicon architecture. Operators can aggregate and analyze data quickly and generate productive output, which is especially beneficial to organizations that deal with immense amounts of data, such as hospitals, genomic systems, airports, and media companies.

How is AI facilitating data centers?

Let's look at some of the ways intelligent data centers solve the issues faced by traditionally operated data centers.

#1 Energy Efficiency

The Delta Airlines data center outage in 2016, attributed to electrical-system failure over a three-day period, cost the airline around $150 million and grounded about 2,000 flights. The situation could easily have been averted had the data center used machine learning in its operations. As data centers get ever bigger, more complex, and increasingly connected to the cloud, artificial intelligence is becoming an essential tool for keeping things from overheating and for saving power at the same time. According to the Energy Department's U.S. Data Center Energy Usage Report, the power usage of data centers in the United States has grown at about a 4 percent rate annually since 2010 and is expected to hit 73 billion kilowatt-hours by 2020, more than 1.8 percent of the country's total electricity use. Data centers also contribute about 2 percent of the world's greenhouse gas emissions. AI techniques can do a lot to make these processes more efficient, more secure, and less expensive. One of the keys to better efficiency is keeping things cool, a necessity in any area of computing. Google's use of AI from DeepMind (Alphabet Inc.'s AI division) to directly control its data centers has reduced the energy used for cooling by about 30 percent.

#2 Server optimization

Data centers have to maintain physical servers and storage equipment. AI-based predictive analysis can help data centers distribute workloads across their many servers, making data center loads more predictable and more easily manageable. The latest load-balancing tools with built-in AI capabilities are able to learn from past data and run load distribution more efficiently. Companies will be able to better track server performance, disk utilization, and network congestion.
#3 Failure prediction / troubleshooting

Unplanned downtime in a data center can lead to significant financial loss. Data center operators need to quickly identify the root cause of a failure so they can prioritize troubleshooting and get the data center up and running before any data loss or business impact takes place. Self-managing data centers use AI-based deep learning (DL) applications to predict failures ahead of time, while ML-based recommendation systems can suggest appropriate fixes in time. Take, for instance, the HPE artificial intelligence predictive engine that identifies and solves trouble in the data center. Signatures are built to identify other users that might be affected, and rules are then developed to instigate a solution, which can be automated. The AI and machine learning solution can quickly intervene across the entire system and stop others from inheriting the same issue.

#4 Intelligent Monitoring and storing of Data

By incorporating machine learning, AI can take over the mundane job of monitoring huge amounts of data and make IT professionals more efficient in terms of the quality of tasks they handle. Litbit has developed the first AI-powered data center operator, Dac. It uses a human-to-machine learning interface that combines existing human employee knowledge with real-time data. Incorporating over 11,000 pieces of innate knowledge, Dac has the potential to hear when a machine is close to failing, feel vibration patterns that are bad for HDD I/O, and spot intruders. Dac is proof of how AI can help monitor networks efficiently. Along with monitoring data, it is also necessary to store vast amounts of data securely. AI holds the potential to make more intelligent decisions on storage optimization and tiering, transforming storage management by learning I/O patterns and data lifecycles.
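As a hedged illustration of the failure-prediction and monitoring ideas above (not HPE's or Litbit's actual systems, which are not public in this form), the sketch below trains an unsupervised anomaly detector on telemetry from healthy operation and flags readings that deviate from it. The metrics, ranges, and thresholds are made up for the example.

```python
# Illustrative anomaly detection on server telemetry (hypothetical data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: temperature (C), fan speed (RPM), disk latency (ms) during healthy operation.
healthy = np.column_stack([
    rng.normal(35, 2, 500),      # temperature
    rng.normal(3000, 150, 500),  # fan speed
    rng.normal(5, 1, 500),       # disk latency
])

# Learn what "normal" telemetry looks like.
detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings: the second one shows an overheating, slow-disk pattern.
new_readings = np.array([
    [36.0, 3100, 5.2],
    [48.0, 4200, 22.0],
])
labels = detector.predict(new_readings)  # 1 = normal, -1 = anomaly
for reading, label in zip(new_readings, labels):
    status = "ANOMALY - investigate before it becomes an outage" if label == -1 else "normal"
    print(reading, status)
```

A real deployment would train on months of telemetry, use labeled failure history where available, and feed alerts into ticketing or automated-remediation pipelines, but the core loop of learning normal behaviour and flagging deviations early is the same.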
Mixed views on the future of AI in data centers

Let's face the truth: the complexity that comes with huge masses of data is often difficult to handle, and humans are not as scalable as an automated solution when it comes to handling data with precision and efficiency. Take Cisco's M5 Unified Computing or HPE's InfoSight as examples; both try to alleviate the fact that humans are increasingly unable to deal with the complexity of a modern data center.

One consequence of using automated systems is the possibility of humans losing their jobs and being replaced by machines, to varying degrees depending on the nature of the job role. AI is predicted to open the door to robots and automated machines that will soon perform repetitive tasks in data centers. On the bright side, organizations could allow employees, freed from repetitive and mundane tasks, to invest their time in the more productive and creative aspects of running a data center.

Beyond jobs, the capital involved in setting up and maintaining a data center is huge. Add AI to the data center and you may have to invest double or even triple that amount to keep everything running smoothly. Managing and storing all of the operational log data for analysis also comes with its own set of issues: the log data that feeds these ML systems can become a larger data set than the application data itself, so firms need a proper plan in place to manage it.

Despite the upfront investment, embracing AI in data centers can translate into greater financial benefits while attracting more customers. It will be interesting to see which tech companies follow Google's footsteps and implement AI in their data centers; this is definitely a space to watch for anyone looking to take data center operations up a notch.

Read also:
5 ways artificial intelligence is upgrading software engineering
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
15 millions jobs in Britain at stake with Artificial Intelligence robots set to replace humans at workforce

How Serverless computing is making AI development easier

Bhagyashree R
12 Sep 2018
5 min read
AI has been around for quite some time, enabling developers to build intelligent apps that cater to the needs of their users. It's not only app developers: businesses are also using AI to gain insights from their data, such as their customers' buying behavior, the busiest time of the year, and so on. While AI is cool and fascinating, developing an AI-powered app is not that easy. Developers and data scientists have to invest a lot of time in collecting and preparing the data, building and training the model, and finally deploying it in production.

Machine learning, a subset of AI, feels difficult because the traditional development process is complicated and slow. While creating machine learning models we need different tools for different functionalities, which means we need to know them all. This is hardly practical, and the following factors make the situation even more difficult:

Scaling the inferencing logic
Addressing continuous development
Making it highly available
Deployment
Testing
Operation

This is where serverless computing comes into the picture. Let's dive into what exactly serverless computing is and how it can help ease AI development.

What is serverless computing?

Serverless computing is the concept of building and running applications in which the computing resources are provided as scalable cloud services. It is a deployment model where applications, as bundles of functions, are uploaded to a cloud platform and then executed. Serverless computing does not mean that servers are no longer required to host and run code. Of course we need servers, but server management for the applications is taken care of by the cloud provider. Nor does it imply that operations engineers are no longer required. Rather, it means that with serverless computing, consumers no longer need to spend time and resources on server provisioning, maintenance, updates, scaling, and capacity planning. Instead, all of these tasks and capabilities are handled by a serverless platform and are completely abstracted away from the developers and IT/operations teams. This allows developers to focus on writing their business logic, and operations engineers to elevate their focus to more business-critical tasks.

Serverless computing is the union of two ideas:

Backend as a Service (BaaS): BaaS provides developers a way to link their application with third-party backend cloud storage. It includes services such as authentication, access to databases, and messaging, which are supplied through physical or virtual servers located in the cloud.

Function as a Service (FaaS): FaaS allows users to run a specific task or function remotely; after the function completes, the results are returned to the user. The applications run in stateless compute containers that are event-triggered and fully managed by a third party.

AWS Lambda, Google Cloud Functions, Azure Functions, and IBM Cloud Functions are some of the serverless computing providers that let us upload a function and have the rest taken care of automatically.

Read also: Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 2
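To make the FaaS idea concrete for AI workloads, here is a minimal sketch of a model-inference function written in the style of an AWS Lambda Python handler. It assumes a scikit-learn-style model serialized with joblib; the model file name, request fields, and feature names are hypothetical, and a real deployment would also define the function's trigger (for example, an API Gateway HTTP route) in the platform's configuration.

```python
# Sketch of a serverless inference function (hypothetical model and field names).
import json
import joblib

# Loading the model at module scope lets warm invocations reuse it,
# so the slow deserialization cost is not paid on every request.
MODEL = joblib.load("model.joblib")  # assumed to be bundled with the deployment package

def handler(event, context):
    """Entry point invoked by the serverless platform (AWS Lambda-style signature)."""
    body = json.loads(event.get("body", "{}"))

    # Build the feature vector expected by the (hypothetical) trained model.
    features = [[
        body["age"],
        body["purchases_last_month"],
        body["days_since_last_visit"],
    ]]
    prediction = MODEL.predict(features)[0]

    # Return an HTTP-style response for an API Gateway proxy integration.
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": float(prediction)}),
    }
```

Uploading a function like this and wiring it to an HTTP endpoint gives you a pay-per-invocation prediction service with no servers to provision or manage, which is exactly the property the next section builds on.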
Why serverless is a good choice for AI development

Along with the obvious advantage of hassle-free server management, let's see what else serverless has to offer for your artificial intelligence project:

Focus on core tasks: Managing servers and deploying a machine learning model is not a good skill match for a data scientist, or even for a machine learning engineer. With serverless computing, servers conveniently vanish from your development and deployment workflow.

Auto-scalability: This is one of the key benefits of serverless computing. As long as your model is correctly deployed on the serverless platform, you don't have to worry about making it scale when your workload rises. Serverless computing gives all businesses, big and small, the ability to use what they need and scale without worrying about complex and time-consuming data migrations.

Never pay for idle: In traditional application deployment models, users pay a fixed and recurring cost for compute resources, regardless of the amount of computing work the server actually performs. In a serverless deployment, you only pay for service usage: you are charged for the number of executions and their corresponding duration.

Reduces interdependence: You can think of machine learning models as functions in serverless, which can be invoked, updated, and deleted at any time without side effects on the rest of the system. Different teams can work independently to develop, deploy, and scale their microservices, which greatly simplifies the orchestration of timelines by product and dev managers.

Abstraction from the users: Your machine learning model is exposed as a service to users with the help of an API gateway. This makes it easier to decentralize your backend, isolate failures on a per-model level, and hide implementation details from the final user.

High availability: Serverless applications have built-in availability and fault tolerance. You don't need to architect for these capabilities, since the services running the application provide them by default.

Serverless computing can facilitate a simpler approach to artificial intelligence by removing the baggage of server maintenance from developers and data scientists. But nothing is perfect, right? It also comes with some drawbacks, the first being vendor lock-in: serverless features vary from one vendor to another, which makes it difficult to switch vendors. Another disadvantage is decreased transparency: your infrastructure is managed by someone else, so understanding the entire system becomes a little more difficult. Serverless is not the answer to every problem, but it is improving every day and making AI development easier.

Read also:
What's new in Google Cloud Functions serverless platform
Serverless computing wars: AWS Lambdas vs Azure Functions
Google's event-driven serverless platform, Cloud Function, is now generally available