
Tech Guides

852 Articles

Top 5 cybersecurity assessment tools for networking professionals

Savia Lobo
07 Jun 2018
6 min read
Security is one of the major concerns when setting up data centers in the cloud. Although most organizations deploy firewalls and managed networking components for their data centers, they still fear being attacked by intruders. Organizations therefore constantly look for tools that help them gauge how vulnerable their network is and how they can secure the applications running on it.

Security assessment is often confused with penetration testing, and the two terms are used interchangeably, but there is a notable difference between them. A security assessment is the process of finding the vulnerabilities within a system and prioritizing them by severity and business criticality. Penetration testing, on the other hand, simulates a real-life attack and maps out the paths a real attacker would take to carry it out. You can check out our article, Top 5 penetration testing tools for ethical hackers, to learn about some of the pentesting tools.

A plethora of tools exist in the market, and every tool claims to be the best. Here is our list of the top 5 tools to secure your organization over the network.

Wireshark

Wireshark is one of the most popular tools for packet analysis. It is open source under the GNU General Public License. Wireshark has a user-friendly GUI and also ships with a command-line interface. It is a great debugging tool for developers who wish to build network applications, and it runs on multiple platforms including Windows, Linux, Solaris, and NetBSD. The Wireshark community also hosts SharkFest, launched in 2008, a conference for Wireshark developers and users. Its main aim is to support Wireshark development and to educate current and future generations of computer science and IT professionals on how to use this tool to manage, troubleshoot, diagnose, and secure traditional and modern networks. Some benefits of using this tool:

- Live, real-time traffic analysis, with support for offline analysis.
- Depending on the platform, live data can be read from Ethernet, PPP/HDLC, USB, IEEE 802.11, Token Ring, and many other media.
- Decryption support for several protocols, such as IPsec, ISAKMP, Kerberos, SNMPv3, SSL/TLS, WEP, and WPA/WPA2.
- Captured traffic can be browsed via the GUI or via the TTY-mode TShark utility.
- Some of the most powerful display filters in the industry.
- TShark, the bundled network protocol analyzer, can capture and analyze packets on hosts without a UI.

Nmap

Network Mapper, popularly known as Nmap, is an open source tool for network discovery and security auditing. It is also used for tasks such as network inventory management and monitoring host or service uptime. Nmap works by sending raw IP packets to find out which hosts are available on the network, the services they offer, the operating systems they run, the firewalls they use, and much more. Nmap can quickly scan large networks and works just as well against single hosts. It runs on all major operating systems and provides official binary packages for Windows, Linux, and Mac OS X. The suite also includes:

- Zenmap: an advanced GUI for the scanner and a results viewer.
- Ncat: a tool for data transfer, redirection, and debugging.
- Ndiff: a utility for comparing scan results.
- Nping: a packet generation and response analysis tool.

Nmap is traditionally a command-line tool run from a Unix shell or the Windows command prompt. This makes it easy to script and makes useful commands easy to share within the user community, without having to move through different configuration panels and scattered option fields.
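To make that scripting point concrete, here is a minimal sketch of driving an Nmap scan from Python using the standard subprocess module. The target address and port range are placeholders, and this is just one illustrative way to wrap the CLI, not an official Nmap API.

```python
import subprocess
import sys

def run_nmap_scan(target: str, ports: str = "1-1024") -> str:
    """Run a basic Nmap service/version scan and return its text output."""
    # -sV probes open ports for service/version info; -p limits the port range.
    cmd = ["nmap", "-sV", "-p", ports, target]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # Placeholder target: only scan hosts you are authorized to assess.
    target_host = sys.argv[1] if len(sys.argv) > 1 else "192.168.1.10"
    print(run_nmap_scan(target_host))
```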
Nessus

Nessus, a product of Tenable, is one of the most popular vulnerability scanners, particularly for UNIX systems. The tool is constantly updated and ships with more than 70,000 plugins. Nessus is available in both free and paid versions: the paid version costs around $2,190 per year, whereas the free version, Nessus Home, offers limited usage and is licensed only for home networks. Customers choose Nessus because:

- Policy creation is simple, and scanning an entire corporate network takes just a few clicks.
- It offers vulnerability scanning at a low total cost of ownership (TCO).
- Scanning is quick and accurate, with few false positives.
- An embedded scripting language lets users write their own plugins and understand the existing ones.

QualysGuard

QualysGuard is a well-known SaaS (Software-as-a-Service) vulnerability management tool. It has a comprehensive vulnerability knowledge base that allows it to provide continuous protection against the latest worms and security threats. It proactively monitors all network access points, so security managers spend less time researching, scanning, and fixing network vulnerabilities, and organizations can address vulnerabilities before they are exploited. It provides a detailed technical analysis of threats via powerful, easy-to-read reports; a detailed report covers the security threat, the consequences if the vulnerability is exploited, and a recommended fix. QualysGuard's executive dashboard gives a summary of overall security, displaying the number of new, active, and re-opened vulnerabilities along with a graph of vulnerabilities by severity level. You can get to know more about QualysGuard on its official website.

Core Impact

Core Impact is widely used as a comprehensive tool to assess and test security vulnerabilities within an organization. It includes a large, regularly updated database of professional exploits. It can cleanly exploit one machine and then create an encrypted tunnel through it to exploit other machines. Core Impact provides a controlled environment in which to mimic real attacks, which helps you secure your network before an actual attack occurs. One interesting feature of Core Impact is that you can test your entire network, regardless of its size, quickly and efficiently.

These are five popular tools network security professionals use for assessing their networks. There are many others, such as Netsparker, OpenVAS, and Nikto. Every security assessment tool is unique in its own way, but in the end it all comes down to your own expertise and experience, and the kind of project environment the tool is used in.

Top 5 penetration testing tools for ethical hackers
Intel's Spectre variant 4 patch impacts CPU performance
Pentest tool in focus: Metasploit


IoT Forensics: Security in an always connected world where things talk

Vijin Boricha
01 May 2018
3 min read
Connected physical devices, home automation appliances, and wearable devices are all part of the Internet of Things (IoT). All of them have two major things in common: seamless connectivity and massive data transfer. That also brings plenty of opportunities for massive data breaches and related cyber security threats.

The goal of digital forensics is to identify, collect, analyse, and present digital evidence collected from various media in a cybercrime incident. The multiplication of IoT devices and the increased number of cyber security incidents have given birth to IoT forensics, a branch of digital forensics that deals with IoT-related cybercrime and includes the investigation of connected devices, sensors, and the data stored on all possible platforms.

If you look at the bigger picture, IoT forensics is a lot more complex, multifaceted, and multidisciplinary in approach than traditional forensics. Because IoT devices are so varied, there is no single method of IoT forensics that can be applied across the board, so identifying valuable sources of evidence is a major challenge. The entire investigation depends on the nature of the connected or smart device in place: evidence could be collected from fixed home automation sensors, moving automobile sensors, wearable devices, or data stored in the cloud.

Compared with standard digital forensic techniques, IoT forensics presents multiple challenges stemming from the versatility and complexity of the devices. Some challenges an investigator may face:

- Variety of IoT devices
- Proprietary hardware and software
- Data spread across multiple devices and platforms
- Data that can be updated, modified, or lost
- Jurisdiction issues when data is stored in the cloud or in a different geography

As such, IoT forensics requires a multi-faceted approach in which evidence can be collected from various sources. Sources of evidence fall into three broad groups:

- Smart devices and sensors: gadgets present at the crime scene (smartwatches, home automation appliances, weather control devices, and more)
- Hardware and software: the communication link between smart devices and the external world (computers, mobile devices, IPS, and firewalls)
- External resources: areas outside the network under investigation (the cloud, social networks, ISPs, and mobile network providers)

Once evidence is successfully collected from an IoT device, no matter the file system, operating system, or platform it is based on, it should be logged and monitored. The main reason is that IoT device data is largely stored in the cloud because of its scalability and accessibility, and there is a real possibility that data in the cloud can be altered, which would cause an investigation to fail. Cloud forensics can certainly play an important role here, but strengthening cyber security best practices should be the primary motive.
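As a small illustration of that logging step, the sketch below computes a cryptographic hash of a collected evidence file and appends it to a simple chain-of-custody log. The file paths are placeholders and this is only one illustrative approach, not a prescribed forensic procedure.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a (possibly large) evidence file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_evidence(evidence_path: str, log_path: str = "custody_log.jsonl") -> dict:
    """Record what was collected, when, and its hash, so later tampering is detectable."""
    entry = {
        "file": evidence_path,
        "sha256": sha256_of(Path(evidence_path)),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    # Placeholder path to an acquired device image or sensor data dump.
    print(log_evidence("acquisitions/sensor_dump.bin"))
```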
With ever-evolving IoT devices there will always be a need for new methods and techniques to get an investigation through. Cybercrime keeps evolving and getting bolder by the day, and forensics experts will have to develop the skill sets needed to deal with the variety and complexity of IoT devices to keep up with this evolution. No matter the challenges, there is always a solution to complex problems: there will always be a need for unique, intelligent, and adaptable techniques to investigate IoT-related crimes, and an even greater need for people with those capabilities.

To learn more about IoT security, you can get your hands on a few of our books: IoT Penetration Testing Cookbook and Practical Internet of Things Security.

Why Metadata is so important for IoT
Why the Industrial Internet of Things (IIoT) needs Architects
5 reasons to choose AWS IoT Core for your next IoT project


Types of Augmented Reality targets

Aarthi Kumaraswamy
08 Apr 2018
6 min read
The essence of Augmented Reality is that your device recognizes objects in the real world and renders computer graphics registered to the same 3D space, providing the illusion that the virtual objects are in the same physical space as you. Since augmented reality was first invented decades ago, the types of targets the software can recognize have progressed from very simple markers, through images and natural feature tracking, to full spatial map meshes. There are many AR development toolkits available; some are more capable than others of supporting a range of targets. The following is a survey of the various Augmented Reality target types. We will go into more detail in later chapters, as we use different targets in different projects.

Marker

The most basic target is a simple marker with a wide border. The advantage of marker targets is that they are readily recognized by the software with very little processing overhead, which minimizes the risk of the app not working, for example, due to inconsistent ambient lighting or other environmental conditions. The following is the Hiro marker used in example projects in ARToolkit:

Coded Markers

Taking simple markers to the next level, areas within the border can be reserved for 2D barcode patterns. This way, a single family of markers can be reused to pop up many different virtual objects by changing the encoded pattern. For example, a children's book may have an AR pop-up on each page, using the same marker shape, but the barcode directs the app to show only the objects relevant to that page in the book. The following is a set of very simple coded markers from ARToolkit:

Vuforia includes a powerful marker system called VuMark that makes it very easy to create branded markers, as illustrated in the following image. As you can see, while the marker styles vary for specific marketing purposes, they share common characteristics, including a reserved area within an outer border for the 2D code:

Images

The ability to recognize and track arbitrary images is a tremendous boost for AR applications, as it avoids the requirement of creating and distributing custom markers paired with specific apps. Image tracking falls into the category of natural feature tracking (NFT). There are characteristics that make a good target image, including a well-defined border (preferably eight percent of the image width), irregular asymmetrical patterns, and good contrast. When an image is incorporated into your AR app, it is first analyzed and a feature map (a 2D node mesh) is stored and used to match real-world image captures, say, in frames of video from your phone.
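To give a feel for what "analyzing an image into a feature map" involves, here is a rough sketch using ORB keypoints from OpenCV. This illustrates the general natural feature tracking idea, not the actual algorithm used by Vuforia or ARToolkit, and the image file names are placeholders.

```python
import cv2

# Load the reference target and a live camera frame (placeholder file names).
target = cv2.imread("target_image.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors -- roughly the "feature map" of each image.
orb = cv2.ORB_create(nfeatures=500)
target_kp, target_desc = orb.detectAndCompute(target, None)
frame_kp, frame_desc = orb.detectAndCompute(frame, None)

# Match descriptors between the stored target and the incoming frame.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(target_desc, frame_desc), key=lambda m: m.distance)

# A real tracker would use enough good matches to estimate the target's pose in the frame.
print(f"Found {len(matches)} candidate feature matches")
```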
Multi-targets

It is worth noting that apps may be set up to see not just one marker in view but multiple markers. With multi-targets, you can have virtual objects pop up for each marker in the scene simultaneously. Similarly, markers can be printed and folded or pasted onto geometric objects, such as product labels or toys. The following is an example cereal box target:

Text recognition

If a marker can include a 2D barcode, then why not just read text? Some AR SDKs allow you to configure (train) your app to read text in specified fonts. Vuforia goes further, with a word list library and the ability to add your own words.

Simple shapes

Your AR app can be configured to recognize basic shapes such as a cuboid or cylinder with specific relative dimensions. It's not just the shape but its measurements that may distinguish one target from another: a Rubik's Cube versus a shoe box, for example. A cuboid may have width, height, and length. A cylinder may have a length and different top and bottom diameters (for example, a cone). In Vuforia's implementation of basic shapes, the texture patterns on the shaped object are not considered; anything with a similar shape will match. But when you point your app at a real-world object with that shape, it should have enough textured surface for good edge detection; a solid white cube would not be easily recognized.

Object recognition

The ability to recognize and track complex 3D objects is similar to, but goes beyond, 2D image recognition. While planar images are appropriate for flat surfaces, books, or simple product packaging, you may need object recognition for toys or consumer products without their packaging. Vuforia, for example, offers Vuforia Object Scanner to create object data files that can be used in your app as targets. The following is an example of a toy car being scanned by Vuforia Object Scanner:

Spatial maps

Earlier, we introduced spatial maps and dynamic spatial location via SLAM. SDKs that support spatial maps may implement their own solutions and/or expose access to a device's own support. For example, the HoloLens SDK Unity package supports its native spatial maps. Vuforia's spatial maps (called Smart Terrain) do not use depth sensing like HoloLens; rather, they use a visible-light camera to construct the environment mesh using photogrammetry. Apple ARKit and Google ARCore also map your environment using the camera video fused with other sensor data.

Geolocation

A bit of an outlier, but worth mentioning: AR apps can also use just the device's GPS sensor to identify its location in the environment and use that information to annotate what is in view. I use the word annotate because GPS tracking is not as accurate as any of the techniques we have mentioned, so it wouldn't work for close-up views of objects. But it can work just fine, say, standing atop a mountain and holding your phone up to see the names of other peaks within the view, or walking down a street to look up Yelp! reviews of restaurants within range. You can even use it for locating and capturing Pokémon.

You read an excerpt from the book Augmented Reality for Developers, by Jonathan Linowes and Krystian Babilinski. To learn how to use these targets and to build a variety of AR apps, check out the book now!


Simple Player Health

Gareth Fouche
22 Dec 2016
8 min read
In this post, we'll create a simple script to manage player health, then use that script and Unity triggers to create health pickups and environmental danger (lava) in a level.

Before we get started on our health scripts, let's create a prototype 3D environment to test them in. Create a new project with a new scene and save it as "LavaWorld". Begin by adding two textures to the project: a tileable rock texture and a tileable lava texture. If you don't have those assets already, there are many sources of free textures online.

Create two new Materials named "LavaMaterial" and "RockMaterial" to match the new textures by right-clicking in the Project pane and selecting Create > Material. Drag the rock texture into the Albedo slot of RockMaterial. Drag the lava texture into the Emission slot of LavaMaterial to create a glowing lava effect. Now our materials are ready to use.

In the Hierarchy view, use Create > 3D Object > Cube to create a 3D cube in the scene. Drag RockMaterial into the Materials > Element 0 slot on the Mesh Renderer of your cube in order to change the cube texture from the default blue material to your rock texture. Use the scale controls to stretch and flatten the cube. We now have a simple "rock platform". Copy and paste the platform a few times, moving the new copies away to form small "islands". Create a few more copies of the rock platform, scale them so that they're long and thin, and position them as bridges between the islands. For example:

Now, create a new cube named "LavaVolume" and assign it the LavaMaterial. Scale it so that it is large enough to encompass all the islands but shallow (scale the y-axis height down). Move it so that it is lower than the islands, so they appear to float in a lava field. To make it possible for a player to fall into the lava, check the Box Collider's "Is Trigger" property on LavaVolume. The Box Collider will now act as a trigger volume: it no longer physically blocks objects that come into contact with it, but it notifies the script when an object moves through the collider volume.

This presents a problem, as objects will now fall through the lava into infinite space! To deal with it, make another copy of the rock platform and scale/position it so that it has dimensions similar to the lava volume, wide but flat, and place it just below the lava so that it forms a rock "floor" under the lava volume. To make your scene a little nicer, repeat the process to create rock walls around the lava, hiding where the lava volume ends. A few point lights (Create > Light > Point Light) scattered around the islands will also add interesting visual variety.

Now it's time to add a player! First, import the "Standard Assets" package from the Unity Asset Store (if you don't know how to do this, google the Unity Asset Store to learn about it). In the newly imported Standard Assets project folder, go to Characters > FirstPersonCharacter > Prefabs. There you will find the FPSController prefab. Drag it into your scene, rename it to "Player", and position it on one of the islands, like so:

Delete the old main camera that you had in your scene; the FPSController has its own camera. If you run the project, you should be able to walk around your scene, from island to island. You can also walk in the lava, but it doesn't harm you, yet.

To make the lava an actual threat, we start by giving our player the ability to track its health. In the Project pane, right-click and select Create > C# Script.
Name the script "Player". Drag the Player script onto the Player object in the Hierarchy view. Open the script in Visual Studio, and add code as follows:

This script exposes a variable, maxHealth, which determines how much health the Player starts with and the maximum health they can ever have. It exposes a function to alter the Player's current health, and it uses a reference to a Text object to display the Player's current health on screen.

Back in Unity, you can now see the Max Health property exposed in the Inspector. Set Max Health to 100. There is also a field for Current Health Label, but we don't currently have a GUI. To remedy this, in the Hierarchy view, select Create > UI > Canvas and then Create > UI > Label. This creates the UI root and a text label on it. Change the label's text to "Health:", the font size to 20, and the colour to white. Drag it to the bottom left corner of the screen (and make sure the Rect Transform anchor is set to bottom left). Duplicate that text label, offset it a little to the right of the previous label, and change the text to "0". Rename this new label "CurrentHealthLabel". The GUI should now look like this:

In the Hierarchy view, drag CurrentHealthLabel into your Player script's "Current Health Label" property. If we run now, we'll have a display in the bottom corner of the screen showing our Player's health of 100. By itself, this isn't particularly exciting. Time to add lava!

Create a new C# script as before and call it Lava. Add this Lava script to the LavaVolume scene object. Open the script in Visual Studio and insert the following code:

Note the TriggerEnter and TriggerExit functions. Because LavaVolume, the object we've added this script to, has a collider with Is Trigger checked, whenever another object enters LavaVolume's box collider, OnTriggerEnter will be called, with the colliding object's Collider passed as a parameter. Similarly, when an object leaves LavaVolume's collider volume, OnTriggerExit will be called. Taking advantage of this functionality, we keep a list of all players who enter the lava. Then, during the Update call, if any players are in the lava, we apply damage to them periodically. damageTickTime determines the interval between each time we apply damage (a "tick"), and damagePerTick determines how much damage we apply per tick. Both properties are exposed in the Inspector by the script, so they're customizable. Set the values to Damage Per Tick = 5 and Damage Tick Time = 0.1.

Now, if we run the game, stepping in the lava hurts! But it's a bit of an anti-climax, since nothing actually happens when our health gets down to 0. Let's make things a little more fatal. First, use a paint program to create a "You Died!" screen at 1920 x 1080 resolution. Add that image to the project. Under the Import Settings, set the Texture Type to Sprite (2D and UI). Then, from the Hierarchy, select Create > UI > Image. Make the size 1920 x 1080 and set the Source Image property to your new player-died sprite image.

Go back to your Player script and extend the code as follows:

The additions add a reference to the player-died screen, plus code in the CheckDead function that checks whether the player's health has reached 0 and displays the death screen if it has. The function also disables the FirstPersonController script if the player dies, so that the player can't continue to move Player around via keyboard/mouse input after Player has died.
Return to the Hierarchy view, and drag the player-died screen into the exposed Dead Screen property on the Player script. Now, if you run the game, stepping in lava will "kill" the player if they stay in it long enough. Better! But it's only fair to add a way for the Player to recover health, too.

To do so, use a paint program to create a new "medkit" texture. Following the same procedure used to create the LavaVolume, create a new cube called HealthKit, give it a Material that uses this new medkit texture, and enable "Is Trigger" on the cube's Box Collider. Create a new C# script called "Health Pickup", add it to the cube, and insert the following code:

Simpler than the Lava script, this adds health to a Player that collides with it, before disabling itself. Scale the HealthKit object until it looks about the right size for a health pack, then copy and paste a few of the packs across the islands. Now, when you play, if you manage to extricate yourself from the lava after falling in, you can collect a health pack to restore your health!

And that brings us to the end of the Simple Player Health tutorial. We have a deadly lava level with health pickups, just waiting for enemy characters to be added.

About the author

Gareth Fouche is a game developer. He can be found on Github at @GarethNN


5 DIY IoT projects you can build under $50

Vijin Boricha
29 Jun 2018
5 min read
Lately, IoT has begun to play an integral part in various industries, at both the consumer level and the enterprise side. With big players like Apple, Microsoft, Amazon, and Google entering this market, IoT adoption has scaled tremendously, jumping from a hobbyist pastime to an industry infrastructure in which everything runs on smart devices that can talk to each other. The steady stream of popular IoT products proves that this market is getting bigger, and many people have been amazed by home automation products such as Amazon Alexa, Apple HomePod, and Google Home. These devices are among the most sought-after gadgets for hobbyists and enthusiasts interested in simple automation with sensors.

Following are 5 IoT project ideas that you can build without burning a hole in your pocket. To learn how to build similar projects yourself, check out our books:

- Internet of Things with Raspberry Pi 3
- Smart Internet of Things Projects
- Raspberry Pi 3 Home Automation Projects

Weather control station

This project will not only help you measure the room temperature but will also help you measure the altitude and the pressure in the room. For this project you will need the Adafruit Starter Pack for Windows 10 IoT Core on the latest Raspberry Pi kit. Along with the Raspberry Pi kit, you will also use sensors that read temperature, pressure, and altitude. To make your weather station more advanced, you can connect the device to your cloud account to store the weather data.

Hardware: Raspberry Pi 2 or 3, breadboard, Adafruit BMP280 barometric pressure and altitude sensor
Software: Windows 10 IoT Core
Approximate total cost: less than $60
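If you prototype the sensor side on Raspbian with Python rather than on Windows 10 IoT Core, a reading loop might look something like the sketch below. It assumes the Adafruit CircuitPython BMP280 library and default I2C wiring, so treat the library calls and values as illustrative rather than a tested build.

```python
import time

import board
import adafruit_bmp280  # Adafruit CircuitPython driver for the BMP280 (assumed installed)

# The sensor is wired to the Pi's default I2C pins in this sketch.
i2c = board.I2C()
sensor = adafruit_bmp280.Adafruit_BMP280_I2C(i2c)

# Set local sea-level pressure (hPa) so the altitude estimate is meaningful.
sensor.sea_level_pressure = 1013.25

while True:
    print(f"Temperature: {sensor.temperature:.1f} C")
    print(f"Pressure:    {sensor.pressure:.1f} hPa")
    print(f"Altitude:    {sensor.altitude:.1f} m")
    time.sleep(2)
```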
Facial Recognition Door

Self-built home security projects are some of the most popular DIY projects because they can be cheaper and simpler than bulky professional installations. Here's a project that controls entry access using facial recognition, thanks to Microsoft Project Oxford. This project from Mazudo, based on Raspberry Pi and Windows IoT, is posted on Hackster.io. It is a handy project for DIY enthusiasts who want to build a quick security lock for their homes.

Hardware: Raspberry Pi 3, breadboard, USB camera, relay switch, speaker
Software: Windows 10 IoT Core
Approximate total cost: less than $50

Your very own Alexa Echo

The Alexa Echo has always been a handy device that can take notes, schedule reminders for your appointments, and play podcasts for you. Brilliant, isn't it? You can build a fully functional, customized Alexa Echo with all the features of Alexa, apart from access to official music servers like Amazon Prime. It will also integrate with recently added third-party apps like Todoist and Any.do. This DIY Echo can also be connected to your phone to manage notifications when a timer goes off, and so on. The only thing your DIY build will be missing is the ability to function as a Bluetooth speaker.

Hardware: Raspberry Pi 3, breadboard, USB speaker and mic
Software: Raspbian
Approximate total cost: less than $50

Pet Feeder

You surely don't want your pet to starve while you're away, do you? This customized pet feeder is controlled via the internet: set timings and it feeds your pet automatically. These pet feeders connect directly to WiFi using the ESP8266 chip. You can easily add features like controlling the device from your phone and building dashboards using Freeboard. The project can later be upgraded, or simply reprogrammed, to refill your own snack bowl at regular intervals as well.

Hardware: Arduino, PIR motion sensor, ESP8266 ESP-01
Software: Arduino IDE, ESP8266Flasher.exe
Approximate total cost: less than $40

Video Surveillance Robot

Video surveillance is the process of monitoring a scenario, person, or environment as a whole. A video surveillance robot can capture the activities happening in the surroundings where it is deployed and can be controlled using a GUI interface. As a further enhancement, you can connect the device to the cloud and save the recorded data there.

Hardware: Raspberry Pi (ARM Cortex-A7 CPU), L293 motor driver
Software: Raspbian
Approximate total cost: less than $50

These are a few economical yet highly useful Internet of Things projects that can improve your daily activities. Still not convinced? Think of it this way: buying the microchip board is a one-time investment, as it can be reused in separate projects, and the sensors and other peripherals aren't that expensive. You might say it's just easier to buy an IoT device, but buying one is not as satisfying as building one for the same purpose. In the end, there are multiple advantages to building your own: you can brag about it to your friends and, most importantly, include it in your resume to give you that edge over others in an interview.

Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT
Windows 10 IoT Core: What you need to know
5 reasons to choose AWS IoT Core for your next IoT project


Admiring the many faces of Facial Recognition with Deep Learning

Sugandha Lahoti
07 Dec 2017
7 min read
Facial recognition technology is not new; in fact, it has been around for more than a decade. However, with the recent rise of artificial intelligence and deep learning, facial technology has reached new heights. In addition to facial detection, modern facial recognition technology recognizes faces with high accuracy and under unfavorable conditions. It can also recognize expressions and analyze faces to generate insights about an individual. Deep learning has enabled powerful face recognition systems that are geared up for widespread adoption.

How has deep learning modernised facial recognition?

Traditional facial recognition algorithms recognized images and people using distinct facial features (placement of the eyes, eye color, nose shape, and so on). However, they failed to identify people correctly under different lighting or after a slight change in appearance (beard growth, aging, or pose). To recognize a dynamic, ever-changing face, deep learning is proving to be a game changer. Deep neural nets go beyond manual feature extraction: these AI-based neural networks rely on the image pixels themselves to analyze the features of a particular face, so they can scan faces regardless of lighting, aging, pose, or emotion. Deep learning algorithms also remember each time they recognize or fail to recognize a face, avoiding repeat mistakes and getting better with each attempt. Deep learning algorithms can also be helpful in converting 2D images to 3D.

Facial recognition in practice

Facial Recognition Technology in Multimedia

Deep learning enabled facial recognition technologies can be used to track audience reaction and measure different levels of emotion. Essentially, they can predict how a member of the audience will react to the rest of a film, and they help determine what percentage of users will be interested in a particular movie genre. For example, Microsoft's Azure Emotion, an emotion API, detects emotions by analysing facial expressions in image or video content over time. Caltech and Disney have collaborated to develop a neural network that can track facial expressions: their deep learning based Factorised Variational Autoencoders (FVAEs) analyze the facial expressions of an audience for about 10 minutes and then predict how they will react to the rest of the film. These techniques help estimate whether viewers are giving the expected reactions in the right places; the viewer is not expected to yawn during a comical scene, for example. With this, Disney can also predict the earning potential of a particular movie and generate insights that help producers create compelling trailers to maximize the number of footfalls.

Smart TVs are also equipped with sophisticated cameras and deep learning algorithms for facial recognition. They can recognize the face of the person watching and automatically show the channels and web applications programmed as their favorites. The British Broadcasting Corporation uses facial recognition technology built by CrowdEmotion: by tracking the faces of almost 4,500 audience members watching show trailers, they gauge exact customer emotions about a particular programme, which in turn helps them generate insights for successful commercials.
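As a concrete, if simplified, illustration of the learned face representations described above, the sketch below uses the open source face_recognition library to compare two photos. None of the products in this article publish their internal pipelines, so this is purely an illustrative stand-in, and the image paths are placeholders.

```python
import face_recognition

# Load a reference photo and a new photo to check (placeholder file names).
known_image = face_recognition.load_image_file("employee_badge_photo.jpg")
unknown_image = face_recognition.load_image_file("door_camera_frame.jpg")

# Each face is reduced to a 128-dimensional embedding learned by a deep network.
# This assumes the reference photo contains exactly one clear face.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encodings = face_recognition.face_encodings(unknown_image)

for encoding in unknown_encodings:
    # Small distances between embeddings mean the faces likely belong to the same person.
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    print(f"distance={distance:.3f} match={match}")
```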
Biometrics in Smartphones

A large number of smartphones now ship with biometric capabilities. Facial recognition in smartphones is used not only for unlocking and authorization, but also for making secure transactions and payments. There has also been a rise in chips with built-in deep learning capability, embedded directly into smartphones. With a neural net embedded in the device, crucial face biometric data never leaves the device or gets sent to the cloud, which improves privacy and reduces latency. Real-world examples include Intel's Nervana Neural Network Processor, Google's TPU, Microsoft's FPGAs, and Nvidia's Tesla V100. Deep learning models embedded in a smartphone can construct a mathematical model of the face, which is then stored in the database. Using this mathematical face model, smartphones can easily recognize users even as their faces age or are partially obstructed by wearable accessories.

Apple's recently launched iPhone X facial recognition system, FaceID, maps thousands of points on a user's face using a projector and an infrared camera (which can operate under varied lighting conditions). This map is then passed to a bionic chip embedded in the phone, whose neural network constructs a mathematical model of the user's face used for biometric face verification and recognition. Windows Hello is another facial recognition technology, used to unlock Windows smart devices equipped with infrared cameras. Qualcomm, a mobile technology company, is working on a new depth-perception technology that will include an image signal processor and high-resolution 3D depth-sensing cameras for facial recognition.

Face recognition for travel

Facial recognition technologies can smooth the departure process for customers by eliminating the need for a boarding pass: a traveller is scanned by cameras installed at various checkpoints, so they don't have to produce a boarding pass at every step. Emirates is collaborating with Dubai Customs, Police, and Airports on a facial recognition solution integrated with the UAE Wallet app. The project, known as the Together Initiative, allows travellers to register and store their biometric facial data at several kiosks placed in the check-in area, helping passengers avoid presenting their physical documents at every touchpoint.

Face recognition can also be used to detect illegal immigration: the technology compares photos of passengers taken immediately before boarding with the photos provided in their visa applications. Biometric Exit, an initiative by the US government, uses facial recognition to identify individuals leaving the country. Facial recognition can also be used at train stations to reduce the time spent buying a ticket or passing through other security barriers. Bristol Robotics Laboratory has developed software that uses infrared cameras to identify passengers as they walk onto the train platform, so they do not need to carry tickets.

Retail and shopping

In retail, smart facial recognition technologies can speed up checkout by keeping track of each customer as they shop across a store. The same technology can use machine learning and analytics to find trends in a shopper's purchasing behavior over time and devise personalized recommendations. Facial video analytics and deep learning algorithms can also identify loyal and VIP shoppers in a moving crowd, giving them a privileged VIP experience.
This gives them more reasons to come back and make repeat purchases. Facial biometrics can also accumulate rich statistics about the demographics (age, gender, shopping history) of an individual, and analyzing these statistics generates insights that help organizations develop their products and marketing strategies. FindFace is one such platform that uses sophisticated deep learning technologies to generate meaningful data about the shopper: its facial recognition system can verify faces with almost 99% accuracy, and it can route shopper data to a salesperson for personalized assistance.

Facial recognition technology can also be used to make secure payment transactions simply by analysing a person's face. Alibaba has set up a Smile to Pay face recognition system in KFC restaurants that allows customers to make secure payments by merely scanning their face.

Facial recognition has emerged as a hot topic of interest and is poised to grow. On the flip side, organizations deploying such technology should incorporate privacy policies as a standard measure. Data collected by facial recognition software can be misused to target customers with ads, or for other illegal purposes, so organizations should take a methodical and systematic approach to using facial recognition for the benefit of their customers. This will not only help businesses generate a new source of revenue, but will also usher in a new era of judicial automation.

5 key reinforcement learning principles explained by AI expert, Hadelin de Ponteves

Packt Editorial Staff
10 Dec 2019
10 min read
When people refer to artificial intelligence, some think of machine learning, while others think of deep learning or reinforcement learning. Artificial intelligence is a broad term that includes machine learning; reinforcement learning is a type of machine learning, and thereby a branch of AI. In this article we will understand 5 key reinforcement learning principles with some simple examples.

Reinforcement learning allows machines and software agents to automatically determine the ideal behavior within a specific context in order to maximize performance. It is employed by various software and machines to find the best possible behavior or path to take in a specific situation.

This article is an excerpt from the book AI Crash Course written by Hadelin de Ponteves. In this book Hadelin helps you understand what you really need to build AI systems with reinforcement learning. The book involves descriptive and practical projects to put ideas into action and show how to build intelligent software step by step.

While reinforcement learning is in some ways a form of AI, machine learning in general does not include the process of taking action and interacting with an environment the way we humans do. Indeed, as intelligent human beings, what we constantly do is the following:

- We observe some input, whether it's what we see with our eyes, what we hear with our ears, or what we remember in our memory.
- These inputs are then processed in our brain.
- Eventually, we make decisions and take actions.

This process of interacting with an environment is what we are trying to reproduce in artificial intelligence, and the branch of AI that works on it is reinforcement learning. This is the closest match to the way we think: the most advanced form of artificial intelligence, if we see AI as the science that tries to mimic (or surpass) human intelligence. Reinforcement learning also has some of the most impressive results in business applications of AI. For example, Alibaba leveraged reinforcement learning to increase its ROI in online advertising by 240% without increasing its advertising budget.

Five reinforcement learning principles

Let's begin building the first pillars of your intuition about how reinforcement learning works. These are the fundamental reinforcement learning principles, which will get you started with the right, solid basics in AI. Here are the five principles:

- Principle #1: The input and output system
- Principle #2: The reward
- Principle #3: The AI environment
- Principle #4: The Markov decision process
- Principle #5: Training and inference

Principle #1 – The input and output system

The first step is to understand that today, all AI models are based on the common principle of inputs and outputs. Every single form of artificial intelligence, including machine learning models, chatbots, recommender systems, robots, and of course reinforcement learning models, takes something as input and returns something else as output.

Figure 1: The input and output system

In reinforcement learning, this input and output have specific names: the input is called the state, or input state; the output is the action performed by the AI; and in the middle we have nothing other than a function that takes a state as input and returns an action as output. That function is called a policy. Remember the name "policy", because you will often see it in AI literature. As an example, consider a self-driving car.
Try to imagine what the input and output would be in that case. The input would be what the embedded computer vision system sees, and the output would be the next move of the car: accelerate, slow down, turn left, turn right, or brake. Note that the output at any time (t) could very well be several actions performed at the same time; for instance, the self-driving car can accelerate while at the same time turning left. In the same way, the input at each time (t) can be composed of several elements: mainly the image observed by the computer vision system, but also some parameters of the car such as the current speed, the amount of gas remaining in the tank, and so on.

That's the very first important principle in artificial intelligence: an intelligent system (a policy) takes some elements as input, does its magic in the middle, and returns some actions to perform as output. Remember that the inputs are also called the states.

Principle #2 – The reward

Every AI has its performance measured by a reward system. There's nothing confusing about this; the reward is simply a metric that tells the AI how well it does over time. The simplest example is a binary reward: 0 or 1. Imagine an AI that has to guess an outcome. If the guess is right, the reward will be 1; if the guess is wrong, the reward will be 0. This could very well be the reward system defined for an AI; it really can be as simple as that.

A reward doesn't have to be binary, however. It can be continuous. Consider the famous game of Breakout:

Figure 2: The Breakout game

Imagine an AI playing this game, and try to work out what the reward would be in that case. It could simply be the score; more precisely, the score would be the accumulated reward over time in one game, and the rewards could be defined as the derivative of that score. This is one of the many ways we could define a reward system for that game. Different AIs will have different reward structures; we will build five reward systems for five different real-world applications in this book. With that in mind, remember this as well: the ultimate goal of the AI will always be to maximize the accumulated reward over time.

Those are the first two basic, but fundamental, principles of artificial intelligence as it exists today: the input and output system, and the reward.

Principle #3 – The AI environment

The third reinforcement learning principle involves an "AI environment". It is a very simple framework in which you define three things at each time (t):

- The input (the state)
- The output (the action)
- The reward (the performance metric)

For each and every AI based on reinforcement learning that is built today, we always define an environment composed of the preceding elements. It is, however, important to understand that there is more to a given AI environment than these three elements. For example, if you are building an AI to beat a car racing game, the environment will also contain the map and the gameplay of that game; in the example of a self-driving car, the environment will also contain all the roads along which the AI is driving and the objects that surround those roads. But what you will always find in common when building any AI are the three elements of state, action, and reward.

Principle #4 – The Markov decision process

The Markov decision process, or MDP, is simply the process that models how the AI interacts with the environment over time.
The process starts at t = 0, and then, at each subsequent iteration, that is at t = 1, t = 2, ..., t = n units of time (where the unit can be anything, for example, 1 second), the AI follows the same transition format:

1. The AI observes the current state, st.
2. The AI performs the action, at.
3. The AI receives the reward, rt = R(st, at).
4. The AI enters the following state, st+1.

The goal of the AI is always the same in reinforcement learning: to maximize the accumulated reward over time, that is, the sum of all the rt = R(st, at) received at each transition. The following graphic will help you visualize and remember an MDP, the basis of reinforcement learning models:

Figure 3: The Markov decision process

Now four essential pillars are already shaping your intuition of AI. Adding a last important one completes the foundation of your understanding of AI. The last principle is training and inference: in training, the AI learns, and in inference, it predicts.

Principle #5 – Training and inference

The final principle you must understand is the difference between training and inference. When building an AI, there is a time for the training mode and a separate time for the inference mode. I'll explain what that means, starting with the training mode.

Training mode

You now understand, from the first three principles, that the very first step of building an AI is to build an environment in which the input states, the output actions, and a system of rewards are clearly defined. From the fourth principle, you also understand that inside this environment an AI will be built that interacts with it, trying to maximize the total reward accumulated over time. To put it simply, there will be a preliminary (and long) period during which the AI is trained to do that. That period is called training; we can also say that the AI is in training mode. During that time, the AI tries to accomplish a certain goal repeatedly until it succeeds. After each attempt, the parameters of the AI model are modified in order to do better at the next attempt.

Inference mode

Inference mode simply comes after your AI is fully trained and ready to perform well. It consists of interacting with the environment by performing the actions needed to accomplish the goal the AI was trained to achieve in training mode. In inference mode, no parameters are modified at the end of each episode.

For example, imagine you have an AI company that builds customized AI solutions for businesses, and one of your clients has asked you to build an AI to optimize the flows in a smart grid. First, you'd enter an R&D phase during which you would train your AI to optimize these flows (training mode), and as soon as you reached a good level of performance, you'd deliver the AI to your client and go into production. Your AI would regulate the flows in the smart grid only by observing the current states of the grid and performing the actions it has been trained to do. That's inference mode.

Sometimes the environment is subject to change, in which case you must alternate quickly between training and inference modes so that your AI can adapt to the new changes in the environment. An even better solution is to train your AI model every day and go into inference mode with the most recently trained model.

That was the last fundamental principle common to every AI.
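To tie the five principles together, here is a minimal sketch of an agent-environment loop in Python. The toy environment, its states, and the random policy are all made up for illustration and stand in for whatever real problem (a game, a smart grid, a car) you are actually modelling.

```python
import random

class CoinGuessEnv:
    """A toy environment: the state is a round counter, the reward is 1 for a correct guess."""

    def __init__(self, rounds: int = 10):
        self.rounds = rounds
        self.t = 0

    def reset(self) -> int:
        self.t = 0
        return self.t  # initial state s0

    def step(self, action: int):
        outcome = random.randint(0, 1)
        reward = 1 if action == outcome else 0   # Principle #2: the reward
        self.t += 1                              # Principle #4: move to state s(t+1)
        done = self.t >= self.rounds
        return self.t, reward, done

def policy(state: int) -> int:
    """Principle #1: a policy maps a state to an action (here, just a random guess)."""
    return random.randint(0, 1)

env = CoinGuessEnv()
state, total_reward, done = env.reset(), 0, False
while not done:                                  # the MDP transition loop
    action = policy(state)
    state, reward, done = env.step(action)
    total_reward += reward
print("Accumulated reward:", total_reward)
```

In training mode, the parameters of the policy would be adjusted after each attempt; in inference mode, the same loop runs with the policy left untouched.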
To summarize, we explored the five key reinforcement learning principles: the input and output system, the reward system, the AI environment, the Markov decision process, and the training and inference modes. Get the guide AI Crash Course by Hadelin de Ponteves today to learn about programming AI software in Python without any math or data science background. It will also help you master the key skills of deep learning, reinforcement learning, and deep reinforcement learning.

How artificial intelligence and machine learning can help us tackle the climate change emergency
DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games
OpenAI's AI robot hand learns to solve a Rubik Cube using Reinforcement learning and Automatic Domain Randomization (ADR)
DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games


Dark Web Phishing Kits: Cheap, plentiful and ready to trick you

Guest Contributor
07 Dec 2018
6 min read
Spam email is a part of daily life on the internet. Even the best junk mail filters will still let through the occasional suspicious-looking message. If an illegitimate email tries to persuade you to click a link and enter personal information, it is classified as a phishing attack.

Phishing attackers send email blasts to large groups of people, with messages designed to look like they come from a reputable company, such as Google, Apple, or a banking or credit card firm. The emails typically warn you about an error with your account and then urge you to click a link and log in with your credentials. Doing so brings you to an imitation website where the attacker attempts to steal your password, social security number, or other private data.

Phishing attacks are becoming more widespread, and one of the primary reasons is easy access to cybercrime kits on the dark web. With the hacker community growing, internet users need to take privacy seriously and remain vigilant against spam and other threats. Read on to learn more about this trend and how to protect yourself.

Dark Web Basics

The dark web, sometimes referred to as the deep web, operates as a separate environment on the internet. Normal web browsers, like Google Chrome or Mozilla Firefox, connect to the world wide web using the HTTP protocol. The dark web requires a special browser tool known as the TOR browser, which is fully encrypted and anonymous.

Image courtesy of Medium.com

Sites on the dark web cannot be indexed by search engines, so you'll never stumble on that content through Google. When you connect through the TOR browser, all of your browsing traffic is sent through a global overlay network so that your location and identity cannot be tracked. Even IP addresses are masked on the dark web.

Hacker Markets

Much of what takes place in this cyber underworld is illegal or unethical, and that includes the marketplaces that exist there. Think of these sites as black-market versions of eBay, where anonymous individuals can buy and sell illegal goods and services. Recently, dark web markets have seen a surge in demand for cybercrime tools and utilities. Entire phishing kits are sold to buyers, including spoofed pages that imitate real companies and full guides on how to launch an email phishing scam.

Image courtesy of Medium.com

When a spam email is sent out as part of a phishing scam, the messages are typically delivered through dark web servers that make them hard for junk filters to identify. In addition, the "From" address in the emails may look legitimate and use a valid domain like @gmail.com. Phishing kits can be found for as little as two dollars, meaning that inexperienced hackers can launch a cybercrime effort with little funding or training. It is also interesting to note that personal data prices on the dark web market range from a single dollar (a Social Security card) to thousands (medical records).

Cryptocurrency Scandal

You should be on the lookout for phishing scams related to any company or industry, but banking and financial attacks in particular can be the most dangerous. If a hacker gains access to your credit card numbers or online banking password, they can commit fraud or even steal your identity. The growing popularity of cryptocurrencies like Bitcoin and Ether has revolutionized the financial industry, but as a negative side effect of the trend, cybercriminals are now targeting these digital money systems. The MyEtherWallet website, which allows users to store blockchain currency in a central location, has been the victim of a number of phishing scams in recent months.

Image courtesy of MyEtherWallet.com

Because cryptocurrencies do not operate with a central bank or financial authority, you may not know what a legitimate email alert for one looks like. Phishing messages targeting MyEtherWallet will usually claim that there is an issue with your cryptocurrency account, or sometimes even suggest that you have a payment pending that needs to be verified. Clicking on the link in the phishing email will launch your web browser and navigate to a spoofed page that looks like it is part of myetherwallet.com. However, the page is actually hosted on the hacker's network and feeds directly into their illegitimate database. If you enter your private wallet address, which is a unique string of letters and numbers, the hacker can gain access to all of the funds in your account.

Preventative Measures

Phishing attacks are a type of cybercrime that targets individuals, so it's up to you to be on guard for these messages and react appropriately. The first line of defense against phishing is to be skeptical of all emails that enter your inbox. Dark web hackers are getting better and better at imitating real companies with their spam and spoofed pages, so you need to look closely when examining the content. Always check the full URL of the links in email messages before you click one.

If you do get tricked and end up navigating to a spoofed page in your web browser, you still have a chance to protect yourself. All browsers support secure sockets layer (SSL) functionality and will display a lock icon or a green status bar at the top of the window when a website has been confirmed as legitimate. If you navigate to a webpage from an email that does not have a valid SSL certificate, you should close the browser immediately and permanently delete the email message.
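To make the "check the full URL" advice a bit more tangible, here is a toy Python sketch that pulls links out of an email body and shows which host each one really points to. The sample text and the expected domain are made up for illustration, and a real mail client or security gateway would do far more than this.

```python
import re
from urllib.parse import urlparse

EMAIL_BODY = """
Your account has been locked. Verify your wallet now:
https://myetherwallet.com.account-verify.example.net/login
"""

EXPECTED_DOMAIN = "myetherwallet.com"  # the domain the email claims to be from

for url in re.findall(r"https?://\S+", EMAIL_BODY):
    host = urlparse(url).hostname or ""
    # The registered domain is what comes last, so a lookalike prefix doesn't count.
    legitimate = host == EXPECTED_DOMAIN or host.endswith("." + EXPECTED_DOMAIN)
    print(f"{url}\n  actual host: {host}  looks legitimate: {legitimate}")
```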
The Bottom Line

Keep this in mind: as prices for phishing kits drop and supply increases, the allure of engaging in this kind of bad behavior will be too much for an increasing number of people to resist. Expect incidents of phishing attempts to increase. The general internet-browsing public should stay on high alert at all times when navigating their email inbox. Think first, then click.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation, as well as an active Github contributor.

Packt has put together a new cybersecurity bundle for Humble Bundle
Malicious code in npm 'event-stream' package targets a bitcoin wallet and causes 8 million downloads in two months
Why scepticism is important in computer security: Watch James Mickens at USENIX 2018 argue for thinking over blindly shipping code


Here's how you can handle the bias variance trade-off in your ML models

Savia Lobo
22 Jan 2018
8 min read
Many organizations rely on machine learning techniques in their day-to-day workflow to cut down on the time required to do a job. The reason these techniques are robust is that they undergo various tests in order to make correct predictions about any data fed into them. During this phase, certain errors are also generated, which can lead to an inconsistent ML model. Two common errors that we are going to look at in this article are bias and variance, and how a trade-off can be achieved between the two in order to generate a successful ML model.

Let's first have a look at what creates these kinds of errors. Machine learning techniques, or more precisely supervised learning techniques, involve training, often the most important stage in the ML workflow. The machine learning model is trained using the training data. How is this training data prepared? By using a dataset for which the output of the algorithm is known. During the training stage, the algorithm analyzes the training data that is fed in and produces patterns, which are captured within an inferred function. This inferred function, derived after analysis of the training dataset, is the model that will then be used to map new examples.

An ideal model generated from this training data should generalize well. This means it should learn from the training data and correctly predict or classify data within any new problem instance. In general, the more complex the model is, the better it classifies the training data. However, if the model is too complex, it will pick up random features, i.e. noise, in the training data; this is the case of overfitting, and the model is said to overfit. On the other hand, if the model is not complex enough, or misses out on important dynamics present within the data, it is a case of underfitting. Both overfitting and underfitting are basically errors in ML models or algorithms. Also, it is generally impossible to minimize both these errors at the same time, and this leads to a condition called the bias-variance tradeoff.

Before getting into how to achieve the trade-off, let's simply understand how bias and variance errors occur.

The Bias and Variance Error

Let's understand each error with the help of an example. Suppose you have three training datasets, say T1, T2, and T3, and you pass these datasets through a supervised learning algorithm. The algorithm generates three different models, say M1, M2, and M3, one from each training dataset. Now say you have a new input A. The whole idea is to apply each model to this new input A. Here, two types of errors can occur. If the output generated by each model on the input A is different (B1, B2, B3), the algorithm is said to have a high variance error. On the other hand, if the output from all three models is the same (B) but incorrect, the algorithm is said to have a high bias error.

High variance also means that the algorithm produces a model that is too specific to the training data, which is a typical case of overfitting. On the other hand, high bias means that the algorithm has not picked up the defining patterns from the dataset; this is a case of underfitting.

Some examples of high-bias ML algorithms are linear regression, linear discriminant analysis, and logistic regression. Examples of high-variance ML algorithms are decision trees, k-nearest neighbors, and support vector machines.

How to achieve a Bias-Variance Trade-off?
For any supervised algorithm, a high bias error usually means a low variance error, and vice versa. To be more specific, parametric or linear ML algorithms often have high bias but low variance, while non-parametric or non-linear algorithms tend to have low bias but high variance. The goal of any ML model is to obtain a low-variance and low-bias state, which is often challenging because of how machine learning algorithms are parametrized. So how can we achieve a trade-off between the two? Following are some ways to achieve the bias-variance tradeoff:

By minimizing the total error: The optimum location for any model is the level of complexity at which the increase in bias is equivalent to the reduction in variance. Practically, there is no analytical method to find this optimal level. One should use an accurate measure of prediction error, explore different levels of model complexity, and then choose the complexity level that minimizes the overall error. Generally, resampling-based measures such as cross-validation should be preferred over theoretical measures such as Akaike's Information Criterion.
Source: http://scott.fortmann-roe.com/docs/BiasVariance.html
(The irreducible error is the noise that cannot be reduced by algorithms, but it can sometimes be reduced with better data cleaning.)

Using bagging and resampling techniques: These can be used to reduce the variance in model predictions. In bagging (bootstrap aggregating), several replicas of the original dataset are created using random selection with replacement. One modeling algorithm that makes use of bagging is Random Forests. In the Random Forest algorithm, the bias of the full model is equivalent to the bias of a single decision tree, which itself has high variance. By creating many of these trees, in effect a "forest", and then averaging them, the variance of the final model can be greatly reduced compared to that of a single tree.

Adjusting minor values in algorithms: Both the k-nearest neighbors and support vector machine (SVM) algorithms have low bias and high variance, but the trade-off can be changed in both cases. In the k-nearest neighbors algorithm, the value of k can be increased, which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model. In the SVM algorithm, the trade-off can be changed by adjusting the C parameter, which controls how many violations of the margin are allowed in the training data; allowing more violations increases the bias but decreases the variance.

Using a proper machine learning workflow: This means ensuring proper training by:

Maintaining separate training and test sets - Split the dataset into training (50%), testing (25%), and validation (25%) sets. The training set is used to build the model, the test set to check the accuracy of the model, and the validation set to evaluate the performance of your model's hyperparameters.

Optimizing your model by using systematic cross-validation - A cross-validation technique is a must to fine-tune model parameters, especially for unknown instances. In supervised machine learning, validation or cross-validation is used to find out the predictive accuracy of various models of varying complexity, in order to find the best model. For instance, one can use the k-fold cross-validation method. Here, the dataset is divided into k folds. For each fold, train the algorithm on the other k-1 folds, using the remaining fold (also called the 'holdout fold') as the test set.
Repeat this process until each fold has acted as the test set. The average of the k recorded errors is called the cross-validation error and can serve as the performance metric for the model.

Trying out appropriate algorithms - Before relying on any model, we need to first ensure that the model suits our assumptions. The No Free Lunch theorem is a useful reminder here: no single model works best for every problem. In optimization, for instance, the theorem implies that, averaged over all possible problems, a random search performs as well as any heuristic optimization algorithm.

Tuning the hyperparameters that can give an impactful performance - Any machine learning model requires different hyperparameters such as constraints, weights, or learning rates for generalizing to different data patterns. Tuning these hyperparameters is necessary so that the model can optimally solve machine learning problems. Grid search and randomized search are two methods commonly practiced for hyperparameter tuning (a short cross-validation sketch along these lines appears after this article).

So, we have listed some of the ways in which you can achieve a trade-off between the two. Bias and variance are related to each other: if you increase one, the other decreases, and vice versa. With a trade-off, there is an optimal balance of bias and variance that gives us a model which is neither underfit nor overfit. And finally, the ultimate goal of any supervised machine learning algorithm lies in isolating the signal from the dataset while making sure that it eliminates the noise.
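To make the cross-validation idea concrete, here is a minimal, illustrative scikit-learn sketch. It assumes scikit-learn is installed; the dataset and the candidate values of k are arbitrary examples.

```python
# A minimal sketch of using k-fold cross-validation to choose k in k-nearest
# neighbors, trading bias against variance. Assumes scikit-learn is installed;
# the dataset and candidate k values are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

for k in (1, 5, 15, 45):
    model = KNeighborsClassifier(n_neighbors=k)
    # 5-fold cross-validation: each fold serves once as the holdout set,
    # and the mean score estimates how well the model generalizes.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"k={k:>2}  mean CV accuracy = {scores.mean():.3f}")

# A very small k gives a low-bias, high-variance model; a very large k gives a
# higher-bias, lower-variance one. The k with the best cross-validated score is
# a practical choice of the trade-off point.
```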


What is coding as a service?

Antonio Cucciniello
02 Oct 2017
4 min read
What is coding as a service? If you want to know what coding as a service is, you have to start with artificial intelligence. Put simply, coding-as-a-service is using AI to build websites, using your machine to write code so you don't have to.

The challenges facing engineers and programmers today

In order to give you a solid understanding of what coding as a service is, you must understand where we are today. Typically, we have programs that are made by software developers or engineers. These programs are usually created to automate a task or make tasks easier. Think of things that typically speed up processing or automate a repetitive task. This is, and has been, extremely beneficial. The productivity gained from automated applications and tasks allows us, as humans and workers, to spend more time on creating important things and coming up with more groundbreaking ideas. This is where artificial intelligence and machine learning come into the picture.

Artificial intelligence and coding as a service

Recently, with the gains in computing power that have come with time and breakthroughs, computers have become more and more powerful, allowing AI applications to arise in common practice. Today, there are applications that allow users to detect objects in images and videos in real time, translate speech to text, and even determine the emotions in text sent by someone else. For an example of an artificial intelligence application in use today, think of an Amazon Alexa or Echo device. You talk to it, it understands your speech, and it completes a task based on what you said. Previously, understanding speech was a task given only to humans. Now, with these advances, Alexa is capable of understanding everything you say, given that it is "trained" to understand it. This ability, previously only expected of humans, is now being filtered through to technology.

How coding as a service will automate boring tasks

Today, we have programmers who write applications for many uses and make things such as websites for businesses. As things become more and more automated, programmers' efficiency will increase and the need for additional manpower will shrink. Coding as a service, otherwise known as CaaS, will result in even fewer programmers being needed. It mixes the efficiencies we already have with artificial intelligence to do programming tasks for a user. Using natural language processing to understand exactly what the user or customer is saying and means, it will be able to make edits to websites and applications on the fly. Not only will it be able to make edits, but combined with machine learning, CaaS can also come up with recommendations from past data and make edits on its own. Efficiency-wise, it is cheaper to own a computer than it is to pay a human, especially when a computer will work around the clock for you and never get tired. Imagine paying an extremely low price (one that you might already pay to get a website made) for getting your website built or your small application created.

Conclusion

Every new technology comes with pros and cons. Overall, the number of software developers may decrease; or, as a developer, this may free up your time from more menial tasks and enable you to further specialize and broaden your horizons. Artificial intelligence programs such as coding as a service could do plenty of the underlying work, leaving some of the heavier lifting to human programmers.
With every new technology comes its positives and negatives. You just need to use the positives to your advantage!

Eight Things You Need To Learn with Python

Oli Huggins
02 Jun 2016
4 min read
We say it a lot, but Python really is a versatile language that can be applied to many different purposes. Web developers, data analysts, security pros - there's an impressive range of challenges that can be solved by Python. So, what exactly should you be learning to do with this great language to really get the most out of it?

Writing Python

What's the most important thing to learn with Python? How to write it. As Python becomes the popular language of choice for most developers, there is an increasing need to learn and adopt it in different environments for different purposes. The Beginning Python video course focuses on just that. Aimed at a complete novice with no previous programming experience in Python, this course will guide you every step of the way. Starting with the absolute basics like understanding variables, arrays, and strings, the course goes on to teach the intricacies of Python. It teaches how you can build your own functions, making use of the existing functions in Python. By the end, the course ensures that you have a strong foundation in the programming concepts of Python.

Design Patterns

As Python matures from being used just as a scripting language into enterprise development and data science, the need for clean, reusable code becomes ever more vital. The modern Python developer cannot go astray with tried and true design patterns for Python when they want to write efficient, reliable Python code. The second edition of Learning Python Design Patterns is stuffed with rich examples of design pattern implementation. From OOP to more complex concepts, you'll find everything you need to improve your Python within.

Machine Learning Design

We all know how powerful Python is for machine learning - so why are your results proving sub-par and inaccurate? The issue is probably not your implementation, but rather your system design. Just knowing the relevant algorithms and tools is not enough for a really effective system - you need the right design. Designing Machine Learning Systems with Python covers various machine learning design aspects with the help of real-world data sets and examples, and will enable you to evaluate and decide on the right design for your needs.

Python for the Next Generation

Python was built to be simple, and it's the perfect language to get kids coding. With programmers getting younger and younger these days, get them learning with a language that will serve them well for life. In Python for Kids, kids will create two interesting game projects that they can play and show off to their friends and teachers, as well as learn Python syntax and how to do basic logic building.

Distributed Computing

What do you do when your Python application takes forever to give the output? Very heavy computing results in delayed response or, sometimes, even failure. For special systems that deal with a lot of data and are mission critical, the response time becomes an important factor. In order to write highly available, reliable, and fault-tolerant programs, one needs the aid of distributed computing. Distributed Computing with Python will teach you how to manage your data-intensive and resource-hungry Python applications with the aid of parallel programming, synchronous and asynchronous programming, and many more effective techniques.

Deep Learning

Python is at the forefront of the deep learning revolution - the next stage of machine learning, and maybe even a step towards AI.
As machine learning becomes a mainstream practice, deep learning has taken a front seat among data scientists. The Deep Learning with Python video course is a great stepping stone into the world of deep learning with Python - learn the basics, clear up your concepts, and start implementing efficient deep learning to make better sense of data. Get all that it takes to understand and implement Python deep learning libraries from this insightful tutorial.

Predictive Analytics

With the power of Python and predictive analytics, you can turn your data into amazing predictions of the future. It's not sorcery, just good data science. Written by Ashish Kumar, a data scientist at Tiger Analytics, Learning Predictive Analytics with Python is a comprehensive, intermediate-level book on predictive analytics and Python for aspiring data scientists.

Internet of Things

Python's rich libraries for data analytics, combined with its popularity for scripting microcontroller units such as the Raspberry Pi and Arduino, make it an exceptional choice for building IoT. Internet of Things with Python offers an exciting view of IoT from many angles, whether you're a newbie or a pro. Leverage your existing Python knowledge to build awesome IoT projects and enhance your IoT skills with this book.


What does a data science team look like?

Fatema Patrawala
21 Nov 2019
11 min read
Until a couple of years ago, people barely knew the term 'data science', which has now evolved into an extremely popular career field. The Harvard Business Review dubbed the data scientist within the data science team as the sexiest job of the 21st century, and expert professionals jumped on the "data is the new oil" bandwagon. As per the Figure Eight Report 2018, which takes the pulse of the data science community in the US, a lot has changed rapidly in the data science field over the years. For the 2018 report, they surveyed approximately 240 data scientists and found that machine learning projects have multiplied and that more and more data is required to power them. Data science and machine learning jobs are LinkedIn's fastest growing jobs. And the internet is creating 2.5 quintillion bytes of data to process and analyze each day. With all these changes, it is evident that data science teams are evolving and changing across organizations.

The data science team is responsible for delivering complex projects where system analysis, software engineering, data engineering, and data science are used to deliver the final solution. To achieve all of this, the team does not only have a data scientist or a data analyst but also includes other roles like business analyst, data engineer or architect, and chief data officer. In this post, we will differentiate and discuss various job roles within a data science team, the skill sets required, and the compensation for each one of them. For an in-depth understanding of data science teams, read the book, Managing Data Science by Kirill Dubovikov, which has interesting case studies on building successful data science teams. He also explores how the team can efficiently manage data science projects through the use of DevOps and ModelOps.

Now let's get into understanding individual data science roles and functions, but before that, let's take a look at the structure of the team. There are three basic team structures to match different stages of AI/ML adoption:

IT centric team structure

At times, hiring a data science team is not an option for companies, and they have to leverage in-house talent. In such situations, they take advantage of the fully functional in-house IT department. The IT team manages functions like data preparation, training models, creating user interfaces, and model deployment within the corporate IT infrastructure. This approach is fairly limited, but it is made practical by MLaaS solutions. Environments like Microsoft Azure or Amazon Web Services (AWS) are equipped with approachable user interfaces to clean datasets, train models, evaluate them, and deploy. Microsoft Azure, for instance, supports its users with detailed documentation for a low entry threshold. The documentation helps in fast training and early deployment of models even without an expert data scientist on board.

Integrated team structure

Within the integrated structure, companies have a data science team which focuses on dataset preparation and model training, while IT specialists take charge of the interfaces and infrastructure for model deployment. Combining machine learning expertise with IT resources is the most viable option for constant and scalable machine learning operations. Unlike the IT centric approach, the integrated method requires having an experienced data scientist within the team. This approach ensures better operational flexibility in terms of available techniques.
Additionally, the team leverages a deeper understanding of machine learning tools and libraries, like TensorFlow or Theano, which are aimed specifically at researchers and data science experts.

Specialized data science team

Companies can also have an independent data science department to build all-encompassing machine learning applications and frameworks. This approach entails the highest cost. All operations, from data cleaning and model training to building front-end interfaces, are handled by a dedicated data science team. It doesn't necessarily mean that all team members should have a data science background, but they should have a technology background with certain service management skills. A specialized structure model aids in addressing complex data science tasks that include research, use of multiple ML models tailored to various aspects of decision-making, or multiple ML-backed services. Today's most successful Silicon Valley tech companies operate with specialized data science teams. Additionally, these teams are custom-built and wired for specific tasks to achieve different business goals. For example, the team structure at Airbnb is one of the most interesting use cases. Martin Daniel, a data scientist at Airbnb, explains in this talk how the team emphasizes having an experimentation-centric culture and applies machine learning rigorously to address unique product challenges.

Job roles and responsibilities within a data science team

As discussed earlier, there are many roles within a data science team. As per Michael Hochster, Director of Data Science at Stitch Fix, there are two types of data scientists: Type A and Type B. Type A stands for analysis. Individuals involved in Type A are statisticians who make sense of data without necessarily having strong programming knowledge. Type A data scientists perform data cleaning, forecasting, modeling, visualization, etc. Type B stands for building. These individuals use data in production. They're good software engineers with strong programming knowledge and a statistics background. They build recommendation systems, personalization use cases, etc. It is rare that one expert fits neatly into a single category, but understanding these data science functions can help make sense of the roles described further.

Chief data officer / Chief analytics officer

The chief data officer (CDO) role has been taking organizations by storm. A recent NewVantage Partners' Big Data Executive Survey 2018 found that 62.5% of Fortune 1000 business and technology decision-makers said their organization appointed a chief data officer. The role of chief data officer involves overseeing a range of data-related functions that may include data management, ensuring data quality, and creating data strategy. He or she may also be responsible for data analytics and business intelligence, the process of drawing valuable insights from data. Even though chief data officer and chief analytics officer (CAO) are two distinct roles, they are often handled by the same person. Expert professionals and leaders in analytics also own the data strategy and how a company should treat its data. It does make sense, as analytics provides insights and adds value to the data. Hence, with a CDO+CAO combination, companies can take advantage of a good data strategy and proper data management without losing on quality. According to compensation analysis from PayScale, the median chief data officer salary is $177,405 per year, including bonuses and profit share, ranging from $118,427 to $313,791 annually.
Skill sets required: Data science and analytics, programming skills, domain expertise, and leadership and visionary abilities.

Data analyst

The data analyst role implies proper data collection and interpretation activities. The person in this job role will ensure that collected data is relevant and exhaustive while also interpreting the results of the data analysis. Some companies also require data analysts to have visualization skills to convert alienating numbers into tangible insights through graphics. As per Indeed, the average salary for a data analyst is $68,195 per year in the United States.

Skill sets required: Programming languages like R, Python, JavaScript, C/C++, and SQL. In addition, critical thinking, data visualization, and presentation skills are good to have.

Data scientist

Data scientists are data experts who have the technical skills to solve complex problems and the curiosity to explore what problems need to be solved. A data scientist is an individual who develops machine learning models to make predictions and is well versed in algorithm development and computer science. This person will also know the complete lifecycle of model development. A data scientist requires large amounts of data to develop hypotheses, make inferences, and analyze customer and market trends. Basic responsibilities include gathering and analyzing data, and using various types of analytics and reporting tools to detect patterns, trends, and relationships in data sets. According to Glassdoor, the current U.S. average salary for a data scientist is $118,709.

Skill sets required: A data scientist will require knowledge of big data platforms and tools like Seahorse powered by Apache Spark, JupyterLab, TensorFlow, and MapReduce; programming languages that include SQL, Python, Scala, and Perl; and statistical computing languages such as R. They should also have cloud computing capabilities and knowledge of various cloud platforms like AWS, Microsoft Azure, etc. You can also read this post on how to ace a data science interview to know more.

Machine learning engineer

At times a data scientist is confused with a machine learning engineer, but the machine learning engineer is a distinct role with different responsibilities. A machine learning engineer is someone who is responsible for combining software engineering and machine modeling skills. This person determines which model to use and what data should be used for each model. Probability and statistics are also their forte. Everything that goes into training, monitoring, and maintaining a model is the ML engineer's job. The average machine learning engineer's salary is $146,085 in the US, and the role is ranked No. 1 on Indeed's Best Jobs in 2019 list.

Skill sets required: Machine learning engineers are required to have expertise in computer science and programming languages like R, Python, Scala, Java, etc. They are also required to know probability techniques, data modelling, and evaluation techniques.

Data architects and data engineers

The data architects and data engineers work in tandem to conceptualize, visualize, and build an enterprise data management framework. The data architect visualizes the complete framework to create a blueprint, which the data engineer can use to build a digital framework. The data engineering role has recently evolved from the traditional software-engineering field.
Recent enterprise data management experiments indicate that data-focused software engineers are needed to work along with the data architects to build a strong data architecture. The average salary for a data architect in the US ranges from $122,000 to $129,000 annually, as per a recent LinkedIn survey.

Skill sets required: A data architect or an engineer should have a keen interest in and experience with programming languages and frameworks like HTML5, RESTful services, Spark, Python, Hive, Kafka, and CSS. They should have the required knowledge and experience to handle database technologies such as PostgreSQL, MapReduce, and MongoDB, and visualization platforms such as Tableau, Spotfire, etc.

Business analyst

A business analyst (BA) basically handles the chief analytics officer's role but on the operational level. This implies converting business expectations into data analysis. If your core data scientist lacks domain expertise, a business analyst can bridge the gap. They are responsible for using data analytics to assess processes, determine requirements, and deliver data-driven recommendations and reports to executives and stakeholders. BAs engage with business leaders and users to understand how data-driven changes will be implemented to processes, products, services, software, and hardware. They further articulate these ideas and balance them against what is technologically feasible and financially reasonable. The average salary for a business analyst is $75,078 per year in the United States, as per Indeed.

Skill sets required: Excellent domain and industry expertise. In addition, good communication skills, data visualization skills, and knowledge of business intelligence tools are good to have.

Data visualization engineer

This specific role is not present in every data science team, as some of its responsibilities are covered by either a data analyst or a data architect. Hence, this role is only necessary for a specialized data science model. The role of a data visualization engineer involves having a solid understanding of UI development to create custom data visualization elements for your stakeholders. Regardless of the technology, successful data visualization engineers have to understand principles of design, both graphical and, more generally, user-centered design. As per PayScale, the average salary for a data visualization engineer is $98,264.

Skill sets required: A data visualization engineer needs to have rigorous knowledge of data visualization methods and be able to produce various charts and graphs to represent data. Additionally, they must understand the fundamentals of design principles and the visual display of information.

To sum it up, a data science team has evolved to create a number of job roles and opportunities, but companies still face challenges in building up the team from scratch and find it hard to figure out where to start. If you are facing a similar dilemma, check out the book, Managing Data Science, written by Kirill Dubovikov. It covers concepts and methodologies to manage and deliver top-notch data science solutions, while also providing guidance on hiring, growing, and sustaining a successful data science team.

How to learn data science: from data mining to machine learning

How to ace a data science interview

Data science vs. machine learning: understanding the difference and what it means today

30 common data science terms explained

9 Data Science Myths Debunked


The best backend tools in web development

Sugandha Lahoti
06 Jun 2018
5 min read
If you're a backend developer, it's easy to feel overwhelmed by the range of backend development tools available. It goes without saying that you should use what works for you, but sometimes it's not that easy to even work that out. With this in mind, this year's Skill Up report offers a useful insight into some of the most popular backend tools being used today. Let's take a look at what tools came out on top. That should help you make decisions about what you're going to use, or maybe even just learn. Read the Skill Up report in full. Sign up to our weekly newsletter and download the PDF for free.

Node.js

More than 50% of respondents said they prefer Node.js, the popular server-side JavaScript runtime. Node.js is built on the V8 JavaScript engine and adds capabilities to JavaScript (traditionally a front-end language) to let it do more than just create interactive websites. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. The latest stable release of Node, Node 10, will be the next candidate in line for Long Term Support (LTS) in October 2018. Node.js 10.0 comes with plenty of new features like the OpenSSL 1.1.0 security toolkit, an upgraded npm, N-API, and much more. Get started with learning Node.js with the following books:
Learning Node.js Development
Learn Node.js by Building 6 Projects
RESTful Web API Design with Node.js 10 - Third Edition

ASP.NET Core

The next popular alternative was ASP.NET Core, with over 25% of developers approving it as their choice of backend framework. ASP.NET Core is the open-source, cross-platform framework for building backends, web apps and services, and IoT apps. According to the Skill Up survey, it was also one of the most popular frameworks used by developers. It provides a cloud-ready, environment-based configuration system. It seamlessly integrates with popular client-side frameworks and libraries, including Angular, React, and Bootstrap. Get started with ASP.NET Core by reading:
Learning ASP.NET Core 2.0
Mastering ASP.NET Core 2.0
ASP.NET Core 2 High Performance - Second Edition

Express.js

Developers and tech pros also like to work with Express.js, and hence it ranked No. 3 on our list. Express.js is a pre-built Node.js framework that can help developers build faster and smarter websites and web apps. Express basically extends Node.js to build complete web apps, and it is the perfect framework to learn for developers who are fluent in Node.js but want to move from plain server-side technologies to creating full apps. Express is lightweight and comes with extra, built-in web application features and the Express API to support the already robust, feature-packed Node.js platform. Express also works seamlessly with other Node.js modules and offers HTTP utilities and middleware for creating APIs. It can help developers master single-page and multiple-page websites, as well as some complex web apps. You can go through Projects in ExpressJS [Video], a complete course to learn professional web development using Express.js.

Laravel

Next was Laravel, a prominent member of a new generation of web frameworks. It is one of the most popular PHP frameworks and is also free and open source.
It features:
A simple, fast routing engine
A powerful dependency injection container
Multiple back-ends for session and cache storage
Database agnostic schema migrations
Robust background job processing
Real-time event broadcasting

The latest stable release, Laravel 5, is a substantial upgrade with a lot of new toys, while retaining the features that made Laravel wildly successful. It comes with plenty of architectural as well as design-based changes. Start building with Laravel with these videos:
Beginning Laravel [Video]
Laravel Foundations: Basics to Every App [Video]

Java EE

The fifth most popular choice of backend tool is Java EE. The Enterprise Java standard, or Java EE, is a collection of technologies and APIs for the Java platform designed to support enterprise applications. By enterprise, we mean applications classified as large-scale, distributed, transactional, and highly available, designed to support mission-critical business requirements. Applications written to comply with the Java EE specification do not tie developers to a specific vendor; instead, they can be deployed to any Java EE compliant application server. The Java EE server application implements the Java EE platform APIs and provides the standard Java EE services. The latest stable release, Java EE 8, brings with it a load of features, mainly targeting newer architectures such as microservices, modernized security APIs, and cloud deployments. Our best picks for learning Java EE:
Java EE 8 Application Development
Architecting Modern Java EE Applications
Java EE 8 High Performance

The other backend tools which were among the top picks by developers included:

Spring, a programming and configuration model for building modern Java-based enterprise applications on any kind of deployment platform.

Django, a powerful Python web framework for creating RESTful web services. It reduces the amount of trivial code, which simplifies the creation of web applications and results in faster development.

Flask, a framework for building web servers in Python. It is a micro framework, meaning it's not a full-stack web application development framework. It just gives developers the very basics to get a web server running (a minimal sketch of what that looks like follows this article).

Firebase, Google's mobile platform that helps developers run mobile backend code without managing servers and develop high-quality apps.

Ruby on Rails, one of the oldest backend technologies. A certain percentage of people still prefer using Ruby on Rails for their backend code. Rails is a flexible and IDE-friendly framework with easy functions and manipulations and the support of the powerful Ruby language.

The entire Skill Up survey report can be read on the Packt website, which details what developers think about the changing tech landscape and the parameters that are driving that change. This survey report is launched at the start of the Skill Up campaign, where every eBook and video will be available for $10. Go grab your free content now!
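To make the "just the basics to get a web server running" point about micro frameworks concrete, here is a minimal, illustrative Flask sketch. It assumes Flask is installed (pip install flask); the route and port are arbitrary examples.

```python
# A minimal Flask backend: one route returning JSON.
# Assumes Flask is installed; route name and port are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/")
def index():
    # Return a small JSON payload, as a typical API endpoint would.
    return jsonify(message="Hello from a minimal Flask backend")


if __name__ == "__main__":
    # Development server only; a production deployment would normally sit
    # behind a WSGI server such as gunicorn.
    app.run(port=5000, debug=True)
```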

How to develop a game concept

Raka Mahesa
18 Sep 2017
5 min read
You may have an idea or a concept for a game, and you may like to make a full game based on that concept. Congratulations, you're now taking the first step in the game development process. But you may be unsure of what to do next with your game concept. Fortunately, that's what we're here to discuss.

How to find inspiration for a game idea

A game idea or concept can come from a variety of places. You may be inspired by another medium, such as a film or a book, you may have had an exciting experience and want to share it with others, you may be playing another game and think you can do better, or you may just have a sudden flash of inspiration out of nowhere. Because ideas can come from a variety of sources, they can take on a number of different forms and levels of robustness. So it's important to take a step back and have another look at this idea of yours.

How to create a game prototype

What should you do after your game concept has been fleshed out? Well, the next step is to create a simple prototype based on your game concept to see if it is viable and actually fun to play. Wait, what if this is your first foray into game development and you barely have any programming skill? Well, fortunately, developing a game prototype is a good entry to the world of programming. There are many game development tools out there like GameMaker, Stencyl, and Construct 2 that can help you quickly create a prototype without having to write too many lines of code. These tools are so useful that even seasoned programmers use them to quickly build a prototype.

Should I use a game engine to prototype?

Should you use full-featured, professional game engines for making a prototype? Well, it's completely up to you, but one of the purposes of making a prototype is to be able to test out your ideas easily, so that when an idea doesn't work out, you can tweak it quickly. With a full-featured game engine, even though it's powerful, it may take longer to complete simple tasks, and you end up not being able to iterate on your game quickly enough.

That's also why most game prototypes are made with just simple shapes or very simple graphics. Creating those kinds of graphics doesn't take a lot of time and allows you to iterate on your game concept quickly. Imagine you're testing out a game concept and find out that enemies that just randomly hop around aren't fun, so you decide to make those enemies simply run on the ground. If you're just using a red square for your hopping enemies, you can use the same square for running enemies. But if you're using, say, frog images for those enemies, you will have to switch to a different image when you want the enemies to run.

Why is prototyping so important in game development?

You may wonder why the emphasis is on creating a prototype instead of building the actual game. After all, isn't fleshing out a game concept supposed to make sure the game is fun to play? Well, unfortunately, what seems fun in theory may not actually be fun in practice. Maybe you thought that having a jump stamina would make things more exciting for a player, but after prototyping such a system, you may discover that it actually slows things down and makes the game less fun.

Also, prototyping is not just useful for measuring a game's fun, it's also useful for making sure the player has the kinds of experiences that the game concept wants to deliver. Maybe you have an idea for a game where the hero fights many enemies at once so the player can experience an epic battle.
But after you prototyped it, you found out that the game felt chaotic instead of epic. Fortunately, with a prototype you can quickly tweak the variables of your enemies to make the game feel more epic and less chaotic.

Using simple graphics

Using simple graphics is important for a game prototype. If players can have a good experience with a prototype that uses simple graphics, imagine the fun they'll have with the final graphics. Simple graphics are good because the experience the player feels comes from the game's functions, not from how the game looks.

Next steps

After you're done building the prototype and have proven that your game concept is fun to play, you can move on to the next step in the game development process. Your next step depends on the sort of game you want to make. If it's a massive game with many systems, you might want to create a proper game design document that includes how you want to expand the mechanics of your game. But if the game is on the small side with simple mechanics, you can start building the final product and assets. Good luck on your game development journey!

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


React Native Vs Ionic : Which one is the better mobile app development framework?

Guest Contributor
01 Mar 2019
6 min read
Today, mobile app development has come a long way; it isn't the same as it used to be. In earlier days, the development process included only simple decisions such as design, features, and the cost of creating the app. But this scenario has changed now. Nowadays, mobile application development starts with the selection of the right app development framework. There are lots of options to choose from, like Flutter, AngularJS, Ionic, React Native, etc. In this post, we are going to compare two powerful mobile app development frameworks, Ionic and React Native, to figure out the best option for your app development needs.

React Native - An introduction

React Native is developed by Facebook using JavaScript, which is one of the most popular languages used by mobile developers. React Native allows creating high-end applications for specific operating systems. Developers can reuse the code from this framework and don't need to build an application from scratch. This is a helpful tool to create applications for Android and iOS operating systems.

Features and benefits of React Native

As it is reusable across Android and iOS, it saves development time and cost.
With virtual-DOM support, it allows viewing changes in real time.
There is a huge community of React Native developers.
Code written by one developer can be read, studied, understood, and extended easily by other developers.
Once the code is developed, it can be used on iOS and Android.
Issues with React Native apps for Android or iOS can be resolved quickly.
It's consistently improving, and with every new release app development becomes more interesting and convenient.

Ionic - An introduction

Ionic is developed by Drifty using TypeScript. It's an open-source platform for developing hybrid mobile applications using HTML5, JavaScript, and CSS technologies. Apps built with the Ionic framework are mainly focused on the UI, appearance, and feel. As it utilizes a combination of Apache Cordova and Angular, Ionic is, for many developers, the first choice for app development. It provides tools such as HTML5, CSS, Sass, etc. to develop top-notch hybrid mobile apps to be run on Windows, Android, and iOS.

Features and benefits of Ionic

Ionic is an open source framework used for developing hybrid mobile applications. It is built on top of AngularJS and Apache Cordova.
The Ionic Framework comes with a command line interface (CLI) that empowers developers to build and test apps on any platform.
It offers all the functionalities that are available with native app development SDKs, allowing developers to build apps, customize them for the different operating systems, and then deploy through Cordova.
Apps require one-time development with Ionic and can be deployed on Android, iOS, and Windows platforms.
It provides the facility to build apps using HTML5, CSS, and JavaScript technologies.
The apps developed with Ionic are majorly focused on the UI to provide a better user experience.
It offers a multitude of exciting elements to choose from for development.

Ionic 4 is the newest release of Ionic so far. The release is a complete rebuild of the popular JavaScript framework for developing mobile and desktop apps. Although Ionic has, up until now, been using Angular components, this new version has instead been built using Web Components. This is significant, as it changes the whole ball game for the project. It means the Ionic Framework is now an app development framework that can be used alongside any front end framework, not just Angular.
React Native Vs Ionic: A comparison

The following comparison shows the difference between the two frameworks on different bases:

Ease of learning - React Native: due to a few pre-developed elements, learning takes time. Ionic: with plenty of pre-developed and pre-designed elements, learning is easier and shorter.
Code language - React Native: JSX (a syntax extension to JavaScript used to optimize code before compilation into JS). Ionic: TypeScript (a typed superset of JavaScript for compiling clean and simple JS code on any browser).
Code reusability - React Native: allows using the same code to develop Windows, Android, and iOS mobile apps. Ionic: the same code can be utilized for creating apps for iOS, Android, and Windows, as well as web and PWA.
Performance - React Native: excellent performance, as it doesn't use WebView. Ionic: average performance, because it uses WebView.
Community support - React Native: strong. Ionic: strong.
Ease of development - React Native: follows the approach "learn once, write anywhere". Ionic: written only once, the code can be executed on any platform.
Phone hardware accessibility - React Native: Apache Cordova is used to access phone hardware. Ionic: no third-party tool is required to access phone hardware.
Code testing - React Native: an emulator or a real mobile device is needed for testing. Ionic: apps can be tested on any web browser.
Documentation - React Native: very basic documentation. Ionic: quite simple, clear, and consistent documentation.
Developer - React Native: Facebook. Ionic: Drifty.co.

By now, you must have obtained knowledge about the basic differences between Ionic and React Native. Both these frameworks are different from each other and provide distinguishing features. Let us now further investigate both frameworks based on some broad parameters.

Performance

Android apps developed with React Native usually have a better performance score than ones developed with Ionic. This is because Ionic uses WebView in mobile app development, which is not the case with the React Native framework.

Design

Ionic comes with plenty of pre-developed elements that allow creating elegant apps with an excellent UI. This is what makes Ionic beat React Native when it comes to design. React Native offers fewer pre-developed elements as compared to Ionic.

Cost

Developing apps with Ionic is cheaper than developing with React Native. This is because, in Ionic, the same code can be utilized across different platforms.

Final words

So which technology should you use? Well, this is not easy to tell. There are several factors you can consider, like cost, features, requirements, platforms, and team size, when deciding on the best app development framework. Both serve different purposes, and choosing either of them may be easy. If you have a low budget, then Ionic can be your choice to build an appealing application with good performance. On the other hand, React Native lets you build native-like apps, but the cost of development may be higher than with Ionic. Depending on your requirements and preferences, you can decide to choose either of the frameworks.

Author Bio

David Meyer is a senior web developer at CSSChopper, a front end and custom web development company catering to customers across the globe. David has a passion for web development and likes to share his knowledge through informative blogs and articles.