Tech Guides

Exploring Language Improvements in C# 7.2 and 7.3

Mark J.
28 Nov 2017
9 min read

With the C# 7 generation, Microsoft has decided to increase the cadence of language releases, shipping minor version numbers, aka point releases, for the first time since C# 1.1. This allows new features to reach programmers faster than ever before, but the policy poses a challenge for writers of books about C#.

Introduction

One of the hardest parts of writing about technology is deciding when to stop chasing the latest changes and adding new content. Back in March 2017, I was reviewing the final drafts of the second edition of my book, C# 7 and .NET Core – Modern Cross-Platform Development. In Chapter 2, Speaking C#, I got to the topic of representing number literals. One of the improvements in C# 7 is the ability to use the underscore character as a digit separator. For example, when writing large numbers in decimal you can improve the readability of number literals using underscores, and you can express binary or hexadecimal number literals by prefixing the number literal with 0b or 0x, as shown in the following code:

```csharp
// C# 6 and earlier
int decimalNotation = 2000000; // 2 million

// C# 7 and 7.1
int decimalNotation = 2_000_000; // 2 million
int binaryNotation = 0b0001_1110_1000_0100_1000_0000; // 2 million
int hexadecimalNotation = 0x001E_8480; // 2 million
```

But in the final draft I hadn't included code examples of using underscores in number literals. At the last minute, I decided to add the preceding examples to the book. Unfortunately, I assumed that the underscore could be used to separate the prefixes 0b and 0x from the digits, and did not check that the code examples would compile until the following day, after the book had gone to print. I had to release an erratum on the book's web page before it even reached the shelves. I felt so embarrassed.

In the third edition, C# 7.1 and .NET Core 2.0 – Modern Cross-Platform Development, I fixed the code examples by removing the underscores after the prefixes, since they are not supported in C# 7 or C# 7.1. Ironically, just as the third edition was due to go to print, Microsoft released C# 7.2, which adds support for using an underscore after the prefixes, as shown in the following code:

```csharp
// C# 7.2 and later
int binaryNotation = 0b_0001_1110_1000_0100_1000_0000; // 2 million
int hexadecimalNotation = 0x_001E_8480; // 2 million
```

Gah! Clearly, I wasn't the only programmer who thought it natural to be able to use underscores after the 0b or 0x prefixes. For the third edition, I decided not to make any last-minute changes to the book. This was partly because I didn't want to risk making a mistake again, and partly because the code examples do work; they just don't show the latest improvement. Maybe in the fourth edition I will finally get the whole book perfect! But, of course, in the programming world that's impossible.

Since the third edition covers C# 7.1, I have written this article to cover the improvements in C# 7.2 that are available today, and to preview the improvements coming early in 2018 with C# 7.3.

Enabling C# 7 point releases

Developer tools like Visual Studio 2017, Visual Studio Code, and the dotnet command-line interface assume that you want to use the C# 7.0 language compiler by default.

To use the improvements in a C# point release like 7.1 or 7.2, you must add a configuration element to the project file, as shown in the following markup:

```xml
<LangVersion>7.2</LangVersion>
```

Potential values for the <LangVersion> element are shown in the following table:

| LangVersion | Description |
| --- | --- |
| 7, 7.1, 7.2, 7.3, 8 | Entering a specific version number will use that compiler if it has been installed. |
| default | Uses the highest major number without a minor number, for example, 7 in 2017 and 8 later in 2018. |
| latest | Uses the highest major and highest minor number, for example, 7.2 in 2017, 7.3 early in 2018, and 8 later in 2018. |

To be able to use C# 7.2, either install Visual Studio 2017 version 15.5 on Windows, or install .NET Core SDK 2.1.2 on Windows, macOS, or Linux from the following link: https://www.microsoft.com/net/download/

Run the .NET Core SDK installer.

Setting up a project for exploring C# 7.2 improvements

In Visual Studio 2017 version 15.5 or later, create a new Console App (.NET Core) project named ExploringCS72 in a solution named Bonus.

You can download the projects created in this article from the Packt website or from the following GitHub repository: https://github.com/PacktPublishing/CSharp-7.1-and-.NET-Core-2.0-Modern-Cross-Platform-Development-Third-Edition/tree/master/BonusSectionCode/Bonus

In Visual Studio Code, create a new folder named Bonus with a subfolder named ExploringCS72. Open the ExploringCS72 folder. Navigate to View | Integrated Terminal, and enter the following command:

```
dotnet new console
```

In either Visual Studio 2017 or Visual Studio Code, edit the ExploringCS72.csproj file, and add the <LangVersion> element, as shown in the following markup:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <LangVersion>7.2</LangVersion>
  </PropertyGroup>
</Project>
```

Edit the Program.cs file, as shown in the following code:

```csharp
using static System.Console;

namespace ExploringCS72
{
    class Program
    {
        static void Main(string[] args)
        {
            int year = 0b_0000_0111_1011_0100;
            WriteLine($"I was born in {year}.");
        }
    }
}
```

In Visual Studio 2017, navigate to Debug | Start Without Debugging, or press Ctrl + F5. In Visual Studio Code, in Integrated Terminal, enter the following command:

```
dotnet run
```

You should see the following output, which confirms that you have successfully enabled C# 7.2 for this project:

```
I was born in 1972.
```

In Visual Studio Code, note that the C# extension version 1.13.1 (released on November 13, 2017) has not been updated to recognize the improvements in C# 7.2, so you will see red squiggle compile errors in the editor even though the code will compile and run without problems.

Controlling access to type members with modifiers

When you define a type like a class with members like fields, you control where those members can be accessed from by applying modifiers like public and private. Until C# 7.2, there were five combinations of access modifier keywords. C# 7.2 adds a sixth combination, as shown in the last row of the following table:

| Access modifier | Description |
| --- | --- |
| private | Member is accessible inside the type only. This is the default if no keyword is applied to a member. |
| internal | Member is accessible inside the type, or any type that is in the same assembly. |
| protected | Member is accessible inside the type, or any type that inherits from the type. |
| public | Member is accessible everywhere. |
| internal protected | Member is accessible inside the type, any type that is in the same assembly, or any type that inherits from the type. Equivalent to internal OR protected. |
| private protected | Member is accessible inside the type, or any type that inherits from the type and is in the same assembly. Equivalent to internal AND protected. |

Setting up a .NET Standard class library to explore access modifiers

In Visual Studio 2017 version 15.5 or later, add a new Class Library (.NET Standard) project named ExploringCS72Lib to the current solution.

In Visual Studio Code, create a new subfolder in the Bonus folder named ExploringCS72Lib. Open the ExploringCS72Lib folder. Navigate to View | Integrated Terminal, and enter the following command:

```
dotnet new classlib
```

Open the Bonus folder so that you can work with both projects. In either Visual Studio 2017 or Visual Studio Code, edit the ExploringCS72Lib.csproj file, and add the <LangVersion> element, as shown in the following markup:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <LangVersion>7.2</LangVersion>
  </PropertyGroup>
</Project>
```

In the class library, rename the class file from Class1 to AccessModifiers, and edit the class, as shown in the following code:

```csharp
using static System.Console;

namespace ExploringCS72
{
    public class AccessModifiers
    {
        private int InTypeOnly;
        internal int InSameAssembly;
        protected int InDerivedType;
        internal protected int InSameAssemblyOrDerivedType;
        private protected int InSameAssemblyAndDerivedType; // C# 7.2
        public int Everywhere;

        public void ReadFields()
        {
            WriteLine("Inside the same type:");
            WriteLine(InTypeOnly);
            WriteLine(InSameAssembly);
            WriteLine(InDerivedType);
            WriteLine(InSameAssemblyOrDerivedType);
            WriteLine(InSameAssemblyAndDerivedType);
            WriteLine(Everywhere);
        }
    }

    public class DerivedInSameAssembly : AccessModifiers
    {
        public void ReadFieldsInDerivedType()
        {
            WriteLine("Inside a derived type in same assembly:");
            //WriteLine(InTypeOnly); // is not visible
            WriteLine(InSameAssembly);
            WriteLine(InDerivedType);
            WriteLine(InSameAssemblyOrDerivedType);
            WriteLine(InSameAssemblyAndDerivedType);
            WriteLine(Everywhere);
        }
    }
}
```

Edit the ExploringCS72.csproj file, and add the <ItemGroup> element to reference the class library in the console app, as shown in the following markup:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <LangVersion>7.2</LangVersion>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\ExploringCS72Lib\ExploringCS72Lib.csproj" />
  </ItemGroup>
</Project>
```

Edit the Program.cs file, as shown in the following code:

```csharp
using static System.Console;

namespace ExploringCS72
{
    class Program
    {
        static void Main(string[] args)
        {
            int year = 0b_0000_0111_1011_0100;
            WriteLine($"I was born in {year}.");
        }

        public void ReadFieldsInType()
        {
            WriteLine("Inside a type in different assembly:");
            var am = new AccessModifiers();
            WriteLine(am.Everywhere);
        }
    }

    public class DerivedInDifferentAssembly : AccessModifiers
    {
        public void ReadFieldsInDerivedType()
        {
            WriteLine("Inside a derived type in different assembly:");
            WriteLine(InDerivedType);
            WriteLine(InSameAssemblyOrDerivedType);
            WriteLine(Everywhere);
        }
    }
}
```

When entering code that accesses the am variable, note that IntelliSense only shows members that are visible due to access control.

Passing parameters to methods

In the original C# language, parameters had to be passed in the order in which they were declared in the method. In C# 4, Microsoft introduced named parameters so that values could be passed in a custom order and even made optional. But if a developer chose to name parameters, all of them had to be named. In C# 7.2, you can mix named and unnamed parameters, as long as they are passed in the correct position.

In Program.cs, add a static method, as shown in the following code:

```csharp
public static void PassingParameters(string name, int year)
{
    WriteLine($"{name} was born in {year}.");
}
```

In the Main method, add the following statement:

```csharp
PassingParameters(name: "Bob", 1945);
```

Visual Studio Code will show an error, but the code will compile and execute.

Optimizing performance with value types

The fourth and final feature of C# 7.2 is working with value types while using reference semantics. This can improve performance in very specialized scenarios. You are unlikely to use these features much in your own code, unless, like Microsoft themselves, you create frameworks for other programmers to build upon that need to do a lot of memory management. You can learn more about these features at the following link: https://docs.microsoft.com/en-gb/dotnet/csharp/reference-semantics-with-value-types

Conclusion

I plan to refresh this bonus article when C# 7.3 is released to update it with the new features in that point release. Good luck with all your C# adventures!


What is Core ML?

Savia Lobo
28 Sep 2018
5 min read

Introduced by Apple, Core ML is a machine learning framework that empowers iOS app developers to integrate machine learning technology into their apps. It supports natural language processing (NLP), image analysis, and various other conventional models to provide top-notch on-device performance with a minimal memory footprint and power consumption.

This article is an extract taken from the book Machine Learning with Core ML written by Joshua Newnham. In this article, you will get to know the basics of what Core ML is and its typical workflow.

With the release of iOS 11 and Core ML, performing inference is just a matter of a few lines of code. Prior to iOS 11, inference was possible, but it required some work to take a pre-trained model and port it across using an existing framework such as Accelerate or Metal Performance Shaders (MPS). Accelerate and MPS are still used under the hood by Core ML, but Core ML takes care of deciding which underlying framework your model should use (Accelerate using the CPU for memory-heavy tasks and MPS using the GPU for compute-heavy tasks). It also takes care of abstracting a lot of the details away.

There are additional layers, too; iOS 11 has introduced and extended domain-specific layers that further abstract a lot of the common tasks you may encounter when working with image and text data, such as face detection, object tracking, language translation, and named entity recognition (NER). These domain-specific layers are encapsulated in the Vision and natural language processing (NLP) frameworks; we won't be going into any details of these frameworks here, but you will get a chance to use them in later chapters.

It's worth noting that these layers are not mutually exclusive, and it is common to find yourself using them together, especially the domain-specific frameworks, which provide useful preprocessing methods we can use to prepare our data before sending it to a Core ML model.

So what exactly is Core ML? You can think of Core ML as a suite of tools used to facilitate the process of bringing ML models to iOS and wrapping them in a standard interface so that you can easily access and make use of them in your code. Let's now take a closer look at the typical workflow when working with Core ML.

Core ML workflow

As described previously, the two main tasks of an ML workflow consist of training and inference. Training involves obtaining and preparing the data, defining the model, and then the real training. Once your model has achieved satisfactory results during training and is able to perform adequate predictions (including on data it hasn't seen before), your model can then be deployed and used for inference using data outside of the training set.

Core ML provides a suite of tools to facilitate getting a trained model into iOS, one being the Python package called Core ML Tools; it is used to take a model (consisting of the architecture and weights) from one of the many popular packages and export a .mlmodel file, which can then be imported into your Xcode project. Once imported, Xcode will generate an interface for the model, making it easily accessible via code you are familiar with. Finally, when you build your app, the model is further optimized and packaged up within your application.
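
To make this step concrete, here is a minimal Python sketch of converting a trained Keras model with the coremltools package and sanity-checking the result before importing it into Xcode. This sketch is not from the book extract; the file names, class labels, and input size are hypothetical placeholders, and the exact converter API varies between coremltools versions:

```python
import coremltools
from PIL import Image

# Convert a trained Keras model (architecture + weights) to Core ML.
# 'PetClassifier.h5' and the class labels are hypothetical placeholders.
coreml_model = coremltools.converters.keras.convert(
    'PetClassifier.h5',
    input_names='image',
    image_input_names='image',      # expose the input as an image in Xcode
    class_labels=['cat', 'dog'])

# This metadata appears in Xcode's generated interface for the model.
coreml_model.author = 'Example Author'
coreml_model.short_description = 'Classifies a photo as a cat or a dog.'

# Save the .mlmodel file, ready to be dragged into an Xcode project.
coreml_model.save('PetClassifier.mlmodel')

# Optional sanity check (prediction runs on macOS): load the saved model
# and predict, confirming the conversion preserved inputs and outputs.
model = coremltools.models.MLModel('PetClassifier.mlmodel')
image = Image.open('cat_photo.jpg').resize((224, 224))  # hypothetical size
print(model.predict({'image': image}))  # e.g. {'classLabel': 'cat', ...}
```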

Creating the .mlmodel starts from either an existing model built with one of the supported frameworks, or a model trained from scratch. Core ML Tools supports most of the popular frameworks, either internally or through third-party plugins, including Keras, Turi, Caffe, scikit-learn, LibSVM, and XGBoost. Apple has also made this package open source and modular, so it can easily be adapted to other frameworks, whether by third parties or by yourself.

In addition, there are frameworks with tighter integration with Core ML that handle generating the Core ML model, such as Turi Create, IBM Watson Services for Core ML, and Create ML. We will be introducing Create ML in chapter 10; for those interested in learning more about Turi Create and IBM Watson Services for Core ML, please refer to the official webpages via the following links:

Turi Create: https://github.com/apple/turicreate
IBM Watson Services for Core ML: https://developer.apple.com/ibm/

Once the model is imported, as mentioned previously, Xcode generates an interface that wraps the model, model inputs, and outputs.

Thus, in this post, we learned about the workflow of training and how to import a model. If you've enjoyed this post, head over to the book Machine Learning with Core ML to delve into the details of what this model is and what Core ML currently supports.

Emotional AI: Detecting facial expressions and emotions using CoreML [Tutorial]
Build intelligent interfaces with CoreML using a CNN [Tutorial]
Watson-CoreML: IBM and Apple's new machine learning collaboration project


Jakarta EE: Past, Present, and Future

David Heffelfinger
16 Aug 2018
10 min read

You may have heard some talk about a new Java framework called Jakarta EE. In this article we will cover what Jakarta EE actually is, how we got here, and what to expect when it's actually released.

History and Background

In September of 2017, Oracle announced it was donating Java EE to the Eclipse Foundation.

Isn't Eclipse a Java IDE? Most Java developers are familiar with the hugely popular Eclipse IDE, so for many, when they hear the word "Eclipse", the Eclipse IDE comes to mind. Not everybody knows that the Eclipse IDE is developed by the Eclipse Foundation, an open source foundation similar to the Apache Foundation and the Linux Foundation. In addition to the Eclipse IDE, the Eclipse Foundation develops several other Java tools and APIs, such as Eclipse Vert.x, Eclipse Yasson, and EclipseLink.

Java EE was the successor to J2EE, which was a wildly popular set of specifications for implementing enterprise software. In spite of its popularity, many J2EE APIs were cumbersome to use and required lots of boilerplate code. Sun Microsystems, together with the Java community as part of the Java Community Process (JCP), replaced J2EE with Java EE in 2006. Java EE introduced a much nicer, lightweight programming model, making enterprise Java development much easier than what could be accomplished with J2EE. J2EE was so popular that, to this day, it is incorrectly used as a generic term for all server-side Java technologies. Many still refer to Java EE as J2EE, and incorrectly assume Java EE is a bloated, convoluted technology. In short, J2EE was so popular that even Java EE can't shake its predecessor's reputation for being a "heavyweight" technology.

In 2010 Oracle purchased Sun Microsystems and became the steward of Java technology, including Java EE. Java EE 7 was released in 2013, after the Sun Microsystems acquisition by Oracle, simplifying enterprise software development even further and adding additional APIs to meet the new demands of enterprise software systems.

Work on Java EE 8, the latest version of the Java EE specification, began shortly after Java EE 7 was released. In the beginning everything seemed to be going well, but in early 2016 the Java EE community started noticing a lack of progress in Java EE 8, particularly in the Java Specification Requests (JSRs) led by Oracle. The perceived lack of Java EE 8 progress became a big concern for many in the Java EE community. Since the specifications were owned by Oracle, there was no legal way for any other entity to continue making progress on Java EE 8.

In response to the perceived lack of progress, several Java EE vendors, including big names such as IBM and Red Hat, got together and started the MicroProfile initiative, which aimed to introduce new APIs to Java EE, with a focus on optimizing Java EE for developing systems based on a microservices architecture. The idea wasn't to compete with Java EE per se, but to develop new specifications in the hope that they would eventually be added to Java EE proper.

In addition to big vendors reacting to the perceived lack of Java EE progress, a grassroots organization called the Java EE Guardians was formed, led largely by prominent Java EE advocate Reza Rahman. The Java EE Guardians provided a way for Java EE developers and advocates to have a united, collective voice that could urge Oracle to either keep working on Java EE 8, or to allow the community to continue the work themselves.

Nobody can say for sure how much influence the MicroProfile initiative and the Java EE Guardians had, but many speculate that Java EE would never have been donated to the Eclipse Foundation had it not been for these two initiatives.

One Standard, Multiple Implementations

It is worth mentioning that Java EE is not a framework per se, but a set of specifications for various APIs. Some examples of Java EE specifications include the Java API for RESTful Web Services (JAX-RS), Contexts and Dependency Injection (CDI), and the Java Persistence API (JPA).

There are several implementations of Java EE, commonly known as application servers or runtimes; examples include WebLogic, JBoss, WebSphere, Apache TomEE, GlassFish, and Payara. Since all of these implement the Java EE specifications, code written against one of these servers can easily be migrated to another, with minimal or no modifications. Coding against the Java EE standard provides protection against vendor lock-in. Once Jakarta EE is completely migrated to the Eclipse Foundation, it will continue being a specification with multiple implementations, keeping one of the biggest benefits of Java EE.

To become Java EE certified, application server vendors had to pay Oracle a fee to obtain a Technology Compatibility Kit (TCK), which is a set of tests vendors can use to make sure their products comply 100% with the Java EE specification. The fact that the TCK is closed source and not publicly available has been a source of controversy in the Java EE community. It is expected that the TCK will be made publicly available once the transition to the Eclipse Foundation is complete.

From Java EE to Jakarta EE

Once the announcement of the donation was made, it became clear that, for legal reasons, Java EE would have to be renamed, as Oracle owns the "Java" trademark. The Eclipse Foundation requested input from the community, and hundreds of suggestions were submitted. The Foundation made it clear that naming such a big project is no easy task; there are several constraints that may not be obvious to the casual observer, such as: the name must not be trademarked in any country, it must be catchy, and it must not spell profanity in any language. Out of hundreds of suggestions, the Eclipse Foundation narrowed them down to two choices, "Enterprise Profile" and "Jakarta EE", and had the community vote for their favorite. "Jakarta EE" won by a fairly large margin.

It is worth mentioning that the name "Jakarta" carries a bit of history in the Java world, as it used to be an umbrella project under the Apache Foundation. Several very popular Java tools and libraries used to fall under the Jakarta umbrella, such as the Ant build tool, the Struts MVC framework, and many others.

Where we are in the transition

Ever since the announcement, the Eclipse Foundation, along with the Java EE community at large, has been furiously working on transitioning Java EE to the Eclipse Foundation. Transitioning such a huge and far-reaching project to an open source foundation is a massive undertaking, and as such it takes some time. Progress so far includes relicensing all Oracle-led Java EE technologies, including reference implementations (RIs), Technology Compatibility Kits (TCKs), and project documentation. 39 projects have been created under the Jakarta EE umbrella, corresponding to the 39 Java EE specifications being donated to the Eclipse Foundation.

Reference Implementations

Each Java EE specification must include a reference implementation, which proves that the requirements of the specification can be met by actual code. For example, the reference implementation for JSF is called Mojarra, the CDI reference implementation is called Weld, and the JPA reference implementation is called EclipseLink. Similarly, all other Java EE specifications have a corresponding reference implementation.

These 39 projects are in different stages of completion: a small minority are still in the proposal stage; some have provisioned committers and other resources, but code and other artifacts haven't been transitioned yet; the majority of the projects have had the initial contribution (code and related content) committed to the Eclipse Foundation's Git repository; and a few have had their first Release Review, which is a formal announcement of the project's release to the Eclipse Foundation and a request for feedback. The current status of all 39 projects can be found at https://www.eclipse.org/ee4j/status.php.

Additionally, the Jakarta EE working group was established, which includes Java EE implementation vendors, companies that either rely on Java EE or provide products or services complementary to Java EE, as well as individuals interested in advancing Jakarta EE. It is worth noting that Pivotal, the company behind the popular Spring Framework, has joined the Jakarta EE Working Group. This is worth pointing out because the Spring Framework and Java EE have traditionally been perceived as competing technologies. With Pivotal joining the Jakarta EE Working Group, some are speculating that "the feud may soon be over", with Jakarta EE and Spring cooperating with each other instead of competing.

At the time of writing, it has been almost a year since the announcement that Java EE is moving to the Eclipse Foundation, and some may be wondering what is holding up the process. Transitioning a project of such a massive scale as Java EE involves several tasks that may not be obvious to the casual observer, both tasks related to legal compliance and technical tasks. For example, each individual source code file needs to be inspected to make sure it has the correct license header. Project dependencies for each API need to be analyzed. For legal reasons, some of the Java EE technologies need to be renamed, and appropriate names need to be found. Additionally, build environments need to be created for each project under the Eclipse Foundation infrastructure. In short, there is more work than meets the eye.

What to expect when the transition is complete

The first release of Jakarta EE will be 100% compatible with Java EE. Existing Java EE applications, application servers, and runtimes will also be Jakarta EE compliant.

Sometime after the announcement, the Eclipse Foundation surveyed the Java EE community as to the direction Jakarta EE should take under the Foundation's guidance. The community overwhelmingly stated that they want better support for cloud deployment, as well as better support for microservices. As such, expect Jakarta EE to evolve to better support these technologies. Representatives from the Eclipse Foundation have stated that the release cadence for Jakarta EE will be more frequent than it was for Java EE under Oracle.

In summary, the first version of Jakarta EE will be an open version of Java EE 8; after that, we can expect better support for cloud and microservices development, as well as a faster release cadence.

Help Create the Future of Jakarta EE

Anyone, from large corporations to individual contributors, can contribute to Jakarta EE. I would like to invite interested readers to contribute! Here are a few ways to do so:

- Subscribe to the Jakarta EE community mailing list: jakarta.ee-community@eclipse.org
- Contribute to EE4J projects: https://github.com/eclipse-ee4j

You can also keep up to date with the latest Jakarta EE happenings by following Jakarta EE on Twitter at @JakartaEE or by visiting the Jakarta EE web site at https://jakarta.ee

About the Author

David R. Heffelfinger is an independent consultant based in the Washington D.C. area. He is a Java Champion, an Apache NetBeans committer, and a former member of the JavaOne content committee. He has written several books on Java EE, application servers, NetBeans, and JasperReports. David is a frequent speaker at software development conferences such as JavaOne, Oracle Code, and NetBeans Day. You can follow him on Twitter at @ensode.


What is distributed computing and what's driving its adoption?

Melisha Dsouza
07 Nov 2018
8 min read

Distributed computing is having a real impact on the way companies look at the cloud. The "Most Promising Jobs 2018" report published by LinkedIn pointed out that distributed and cloud computing rank among the top 10 most in-demand skills.

What are the problems with centralized computing systems?

Distributed computing solves many of the challenges that centralized computing systems pose today. These centralized systems, like IBM mainframes, have been around for decades, but they're beginning to lose favor. This is because centralized computing is ineffective and expensive in the context of increasing data and workloads. When you have a single central computer controlling a massive number of computations at the same time, it puts a massive strain on the system, even one that's particularly powerful. Centralized systems simply aren't capable of processing huge volumes of transactional data and supporting tons of online users concurrently. There's also a big issue with reliability: if your centralized server fails, all data could be permanently lost if you have no disaster recovery strategy. Fortunately, distributed computing offers solutions to many of these issues.

How does distributed computing work?

Distributed computing comprises a group of systems located at different places, all connected over a network, that work on a single problem or a common goal. Each of these systems is autonomous, programmable, asynchronous, and failure-prone. These systems provide a better price/performance ratio when compared to a centralized system, because it's more economical to add microprocessors than mainframes to your network, and together they have more computational power than a centralized (mainframe) computing system.

Distributed computing and agility

Another major plus point of distributed computing systems is that they provide much greater agility than centralized computing systems. Without centralization, organizations can add and change software and computational power according to the demands and needs of the business. With the reduction in the price of computing power and storage, thanks to the rise of public cloud services like AWS, organizations all over the world have begun using distributed systems and service-oriented architectures, like microservices.

Distributed computing in action: Google search

A perfect example of distributed computing in action is Google search. When a user submits a query, Google uses data from a number of different servers to deliver results, based on things like location, past searches, semantic keywords, and much, much more. These servers are located all around the world and are able to provide search results in seconds, or at times milliseconds.

How cloud is driving the adoption of distributed computing

Central to this adoption is the cloud. Today, cloud is mainstream and opens up the possibility of distributed systems to organizations in a number of different ways. Arguably, you're not really seeing the full potential of cloud until you've moved to a distributed system. Let's take a look at the different ways cloud services are helping companies feel confident enough to successfully leverage distributed computing.

Infrastructure as a Service (IaaS)

IaaS makes distributed systems accessible for many organizations by allowing them to host their infrastructure on either a private or public cloud.

Essentially, IaaS gives an organization control over the operating system and platform that form the foundation of their software infrastructure, while giving an external cloud provider control over the servers and virtualization technologies that make it possible to deploy that infrastructure. In the context of a distributed system, this means organizations have less to worry about. As you can imagine, without IaaS, the process of developing and deploying a distributed system becomes much more complex and costly.

Platform as a Service: Custom Software on another Platform

If IaaS effectively splits responsibilities between the organization and the cloud provider (the 'service'), Platform as a Service (PaaS) 'outsources' even more to the cloud provider. Essentially, an organization simply has to handle the applications and data, leaving every other aspect of their infrastructure to the platform. This brings many benefits and, in theory, should allow even relatively small engineering teams to take advantage of a distributed system. The underlying complexity and heavy lifting that a distributed system brings rest with the cloud provider, allowing an organization's engineers to focus on what matters most: shipping code. If you're thinking about speed and innovation, then a PaaS opens that right up, provided you're happy to let your cloud provider manage the bulk of your infrastructure.

Software as a Service

SaaS solutions are perhaps the clearest example of a distributed system. Arguably, given the way we use SaaS today, it's easy to forget that it can be part of a distributed system. The concept is simple: it's a complete software solution delivered to the end user. If you're trying to accomplish something particularly complex, something which you simply do not have the resources to do yourself, a SaaS solution could be effective. Users don't need to worry about installing and maintaining software; they can simply access it via the internet.

The biggest advantages of adopting a distributed computing system

#1 Complete control on the system architecture

Distributed computing opens up your options when it comes to system architecture. Although you might rely on an external cloud service for some resources (like compute or storage), the architectural decisions are ultimately yours. This means that you can make decisions based on exactly what your organization needs and how it works. In a sense, this is why distributed computing can bring you agility. But it's not just about being agile in the strict sense of the word; it also allows you to prioritize according to your own needs and demands.

#2 Improve the "absolute performance" of the computing system

Tasks can be partitioned into sub-computations that can run concurrently, which in turn speeds up total task completion, as the sketch below illustrates. What's more, if a particular site is currently overloaded with jobs, some of them can be moved to lightly loaded sites. This technique of 'load sharing' can boost the performance of your system. Essentially, distributed systems minimize latency and response time while increasing throughput.
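
As a toy illustration of that idea (not from the original article), the following Python sketch partitions a computation into slices and runs them concurrently using the standard library's process pool; with a framework such as Dask or Celery, the same slices could run on different machines entirely:

```python
from concurrent.futures import ProcessPoolExecutor

def analyze(chunk):
    # Stand-in for a real sub-computation, e.g. analyzing one data slice.
    return sum(x * x for x in chunk)

def split(data, parts):
    # Partition the work into roughly equal slices.
    size = (len(data) + parts - 1) // parts
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Each slice runs in its own worker process, so the sub-computations
    # proceed concurrently instead of one after another.
    with ProcessPoolExecutor() as pool:
        partial_results = pool.map(analyze, split(data, parts=8))
    print(sum(partial_results))
```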

#3 The price-to-performance ratio of the system

Distributed networks offer a better price/performance ratio than centralized mainframe computers, because decentralized and modular applications can share expensive peripherals, such as high-capacity file servers and high-resolution printers. Similarly, multiple components can be run on nodes with specialized processing, which further reduces the cost of multiple specialized processing systems.

#4 Disaster recovery

Distributed systems involve services communicating across different machines, which is where message integrity, confidentiality, and authentication come into play. In such a case, distributed computing gives organizations the flexibility to deploy a four-part mechanism to keep operations secure: encryption, authentication, authorization, and auditing.

Another aspect of disaster recovery is reliability. If computation and the associated data are effectively tied to a single machine, and that machine goes down, the entire service goes with it. With a distributed system, specific services might go down, but the system as a whole should, in theory at least, stay standing.

#5 Resilience through replication

So, if specific services can go down within a distributed system, you still need to do something to increase resilience. You do this by replicating services across multiple nodes, minimizing potential points of failure. This is what's known as fault tolerance: it improves system reliability without affecting the system as a whole. It's also worth pointing out that the hardware on which a distributed system is built is replaceable; this is better than depending on centralized hardware which, if it fails, will take everything with it.

Another distributed computing example: SETI

A good example of a distributed system is SETI. SETI collects massive amounts of data from observatories around the world on activity in the sky, in a bid to identify possible signs of extraterrestrial life. This information is then sliced into smaller pieces of data for easy analysis through distributed computing applications running as a screensaver on individual user PCs all around the world. A PC running the SETI screensaver downloads a small data slice from SETI, runs the analytics application while the PC is idle, and, when the analysis is complete, uploads the analyzed data slice back to SETI. This massive data analysis is possible only because of distributed computing.

So, although distributed computing has become a bit of a buzzword, the technology is gaining traction in the minds of customers and service providers. Beyond the hype and debate, these services will ultimately help companies to be more responsive to market conditions while restraining IT costs.

Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Oath's distributed network telemetry collector 'Panoptes' is now open source!
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018


Is serverless architecture a good choice for app development?

Mehul Rajput
11 Oct 2019
6 min read

App development has evolved rapidly in recent years. With new demands and expectations from businesses and users, trends like cloud have helped developers to be more productive and to build faster, more reliable, and more secure applications. But there's no end to evolution, and serverless is arguably the next step for application development. But is a serverless architecture the right choice?

What is a Serverless Architecture?

When you hear the word serverless, you might assume that it means no servers. In actual fact, it refers to the elimination of the need to manage the servers; that responsibility shifts to your cloud provider. Simply put, the constituent parts of an application are divided between multiple servers, with no need for the application owner/manager to create or manage the infrastructure that supports it.

Instead of running off a server, with a serverless architecture, an application runs off functions. These are essentially actions that are fired off to ensure things happen within the application. This is where the phrase 'function-as-a-service', or FaaS (another way of describing serverless), comes from. A recent report claims that the FaaS market is projected to grow at up to 32.7% annually, reaching 7.72 billion US dollars by 2021.

Is Serverless Architecture a Good Choice for App Development?

Now that we've established what serverless actually means, we must get down to business: is serverless architecture the right choice for app development? Well, it can work either way; it can be positive as well as negative. Here are some reasons.

Using serverless for app development: the positives

There are several reasons why a serverless architecture can be good for app development:

- Decreasing costs
- Easier to service
- Scalability
- Third-party services

Decreasing costs

The most effective benefit of a serverless architecture in an app development process is that it reduces the cost of the work. It's typically less expensive than a 'traditional' server architecture, because with hardware servers you have to pay for many different things that might not be required, such as regular maintenance, the premises, electricity, and staff. Hence, you can save a considerable amount of money and put it toward app quality instead.

Easier to service

It stands to reason that when the owner or the app manager does not have to manage the server themselves, and a machine can do the job, keeping the service accessible becomes less challenging. First, it does not require supervision. Second, you will not have to spend time on it; instead, you can use this time for productive work such as product development. Third, the service this technology provides is reliable, and hence you can use it without much fear.

Scalability

Another interestingly useful advantage of a serverless architecture in app development is scalability. So, what is scalability? It is the capability of a system to handle an extra amount of work by adding resources, so that an app or product continues to work appropriately without disturbance when it is reformed in size or volume to meet users' needs. A serverless architecture acts as the resource that is added to the system to handle any work that has piled up.

Third-party services

Another essential and useful feature of a serverless architecture is that your app can use any third-party service it requires beyond what you already have. This reduces the effort needed to create the app's backend architecture, and the third party might provide better services than you could build yourself. Hence, a serverless architecture eventually proves to be better by providing the reach of a third party. A minimal sketch of what a serverless function looks like follows below.
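
To make the function-as-a-service idea concrete before turning to the drawbacks, here is a minimal sketch of a handler in the style of AWS Lambda's Python runtime. This example is not from the original article; the event fields are hypothetical, and other FaaS platforms use slightly different signatures:

```python
import json

def handler(event, context):
    # A FaaS platform invokes this function on demand, for example when
    # an HTTP request arrives, and bills only for the execution time.
    # There is no server process for you to provision or manage.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```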

Serverless for app development: the negatives

Now that we know the advantages of a serverless architecture, it's important to note that it can also bring some limitations and disadvantages:

- Time restrictions
- Vendor lock-in
- Multi-tenancy
- Debugging is not possible

Time restrictions

As mentioned before, a serverless architecture works on FaaS rules and has a time limit for running a function, commonly 300 seconds. When this limit is reached, the function is stopped. Therefore, for more complex functions that require more time to execute, the FaaS approach may not be a good choice. The problem can sometimes be tackled by splitting a task into several simpler functions, if the task allows it. Otherwise, time restrictions like these can cause great difficulty.

Vendor lock-in

We have discussed that a serverless architecture lets you use third-party services. This can also go the wrong way and cause vendor lock-in. If, for any reason, you decide to shift to a new service provider, in most cases services will be fulfilled in a different way. That means the productivity gains you expected from serverless will be lost, as you will have to adjust and reconfigure the infrastructure to accept the new service.

Multi-tenancy

Multi-tenancy is an increasing problem in serverless architecture. The data of many tenants is kept quite near to each other, which can create confusion: some data might be exchanged, distributed, or probably lost. In turn, this can cause security and reliability issues. A customer could, for example, suddenly produce an extraordinarily high load that affects other customers' applications.

Debugging is not possible

Conventional debugging isn't possible with serverless. Because the code runs on infrastructure you don't control, there is no facility for attaching a debugger to the uploaded code. If you want to know what a function does, you run it and wait for the result; the function can crash, and you cannot step through it to find out why. There is a way to mitigate this problem, however: extensive logging. With every step being logged, the chances of errors that cause debugging issues decrease, as sketched below.
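
As a rough sketch of that logging workaround (again not from the original article, and with hypothetical event fields), each step of a handler can be written to the platform's log service, which then stands in for an interactive debugger:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("order-processor")

def handler(event, context):
    # Log every step so the platform's log service can be searched later,
    # since attaching an interactive debugger to a deployed function
    # is not an option.
    log.info("received event: %s", json.dumps(event))
    try:
        total = sum(item["price"] for item in event["items"])
        log.info("computed total: %s", total)
        return {"statusCode": 200, "body": json.dumps({"total": total})}
    except Exception:
        log.exception("failed to process order")  # full traceback to logs
        raise
```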

Conclusion

Serverless architecture certainly seems impressive in spite of its limitations. There is no doubt that the viability and success of an architecture depend on the business requirements, and of course on the technology used. In the same way, serverless can shine if used in the appropriate case. I hope this blog has helped you understand serverless architecture for mobile apps, and shown both its bright and dark sides.

Author Bio

Mehul Rajput is a CEO and co-founder of Mindinventory, which specializes in Android and iOS app development and provides web and mobile app solutions from startup to enterprise level businesses. He is an avid blogger and writes on mobile technologies, mobile apps, app marketing, app development, startups, and business.

What is serverless architecture and why should I be interested?
Introducing numpywren, a system for linear algebra built on a serverless architecture
Serverless Computing 101
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 2


How has ethical hacking benefited the software industry

Fatema Patrawala
27 Sep 2019
8 min read

In an online world infested with hackers, we need more ethical hackers. But all around the world, hackers have long been portrayed by the media and pop culture as the bad guys. Society is taught to see them as cyber-criminals and outliers who seek to destroy systems, steal data, and take down anything that gets in their way. There is no shortage of news, stories, movies, and television shows that outright villainize the hacker. From the 1995 movie Hackers to the more recent Blackhat, hackers are often portrayed as outsiders who use their computer skills to inflict harm and commit crime.

Read this: Did you know hackers could hijack aeroplane systems by spoofing radio signals?

While there have been real-world, damaging events created by cyber-criminals that serve as the inspiration for this negative messaging, it is important to understand that this is only one side of the story. The truth is that while there are plenty of criminals with top-notch hacking and coding skills, there is also a growing, and largely overlooked, community of ethical (commonly known as white-hat) hackers who work endlessly to help make the online world a better and safer place. To put it lightly, these folks use their cyber superpowers for good, not evil. For example, Linus Torvalds, the creator of Linux, was a hacker, as was Tim Berners-Lee, the man behind the World Wide Web. The list is long for the same reason the list of hackers turned coders is long: they all saw better ways of doing things.

What is ethical hacking?

According to the EC-Council, an ethical hacker is "an individual who is usually employed with an organization and who can be trusted to undertake an attempt to penetrate networks and/or computer systems using the same methods and techniques as a malicious hacker."

Listen: We discuss what it means to be a hacker with Adrian Pruteanu [Podcast]

The role of an ethical hacker is important since the bad guys will always be there, trying to find cracks, backdoors, and other secret ways to access data they shouldn't. Ethical hackers not only help expose flaws in systems, but they assist in repairing them before criminals even have a shot at exploiting said vulnerabilities. They are an essential part of the cybersecurity ecosystem and can often unearth serious unknown vulnerabilities in systems better than any security solution ever could. Certified ethical hackers make an average annual income of $99,000, according to Indeed.com. The average starting salary for a certified ethical hacker is $95,000, according to EC-Council senior director Steven Graham.

Ways ethical hacking benefits the software industry

Nowadays, ethical hacking has become increasingly mainstream, and multinational tech giants like Google, Facebook, Microsoft, Mozilla, and IBM employ hackers or teams of hackers in order to keep their systems secure. As a result of the success hackers have shown at discovering critical vulnerabilities, in the last year alone there has been a 26% increase in organizations running bug bounty programs, where they bolster their security defenses with hackers. Beyond this, there are a number of benefits that ethical hacking has provided to organizations, majorly in the software industry.

Carry out adequate preventive measures to avoid systems security breach

An ethical hacker takes preventive measures to avoid security breaches. For example, they use port scanning tools like Nmap or Nessus to scan their own systems and find open ports; the vulnerabilities associated with each open port are studied, and remedial measures are taken.
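
A simple TCP connect scan of the kind such tools automate can be sketched in a few lines of Python using only the standard library. This sketch is not from the original article and, in the spirit of the topic, should only ever be pointed at hosts you own or are authorized to test:

```python
import socket

def scan_ports(host, ports):
    """Report which of the given TCP ports accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)                    # don't hang on filtered ports
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scan a few well-known ports on the local machine only.
    print(scan_ports("127.0.0.1", [22, 80, 443, 3306, 8080]))
```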

An ethical hacker will examine patch installations and make sure that they cannot be exploited. They also engage in social engineering concepts like dumpster diving: rummaging through trash bins for passwords, charts, sticky notes, or anything with crucial information that can be used to generate an attack. They also attempt to evade IDS (intrusion detection systems), IPS (intrusion prevention systems), honeypots, and firewalls, and carry out actions like bypassing and cracking wireless encryption and hijacking web servers and web applications.

Perform penetration tests on networks at regular intervals

One of the best ways to prevent illegal hacking is to test the network for weak links on a regular basis. Ethical hackers help clean and update systems by discovering new vulnerabilities on an ongoing basis. Going a step further, ethical hackers also explore the scope of the damage that could occur due to an identified vulnerability. This process is known as pen testing, which is used to identify network vulnerabilities that an attacker can target. There are many methods of pen testing, and an organization may use different methods depending on its requirements. Any of the below can be carried out by an ethical hacker:

- Targeted testing involves the organization's people and the hacker; the organization's staff are aware that hacking is being performed.
- External testing penetrates all externally exposed systems, such as web servers and DNS.
- Internal testing uncovers vulnerabilities open to internal users with access privileges.
- Blind testing simulates real attacks from hackers; testers are given limited information about the target, which requires them to perform reconnaissance prior to the attack.

Pen testing is the strongest case for hiring ethical hackers.

Ethical hackers have built computers and programs for the software industry

Going back to the early days of the personal computer, many of the members of the Silicon Valley scene would have been considered hackers in modern terms, in that they pulled things apart and put them back together in new and interesting ways. This desire to explore systems and networks to find out how they worked made many of the proto-hackers more knowledgeable about the different technologies and how they can be safeguarded from malicious attacks. Just as many of the early computer enthusiasts turned out to be great at designing new computers and programs, many people who identify themselves as hackers are also amazing programmers. This trend of the hacker as the innovator has continued with the open-source software movement. Much open-source code is produced, tested, and improved by hackers, usually during collaborative computer programming events which are affectionately referred to as "hackathons." Even if you never touch a piece of open-source software, you still benefit from the elegant solutions that hackers come up with, which inspire, or are outright copied by, proprietary software companies.

Ethical hackers help safeguard customer information by preventing data breaches

The personal information of consumers is the new oil of the digital world. Everything runs on data. But while businesses that collect and process consumer data have become increasingly valuable and powerful, recent events prove that even the world's biggest brands are vulnerable when they violate their customers' trust.

Hence, it is of the utmost importance for software businesses to earn the trust of customers by ensuring the security of their data. With high-profile data breaches seemingly in the news every day, "protecting businesses from hackers" has traditionally dominated the data privacy conversation.

Read this: StockX confirms a data breach impacting 6.8 million customers

In such a scenario, ethical hackers will prepare you for the worst. They will work in conjunction with the IT response plan to ensure data security and to patch breaches when they do happen. Otherwise, you risk a disjointed, inconsistent, and delayed response to issues or crises. It is also imperative to align how your organization will communicate with stakeholders; this reduces the need for real-time decision-making in an actual crisis, as well as helping limit inappropriate responses. Ethical hackers may also help in running a cybersecurity crisis simulation to identify flaws and gaps in your process, and to better prepare your teams for such a pressure-cooker situation when it hits.

Information security plan to create security awareness at all levels

No matter how large or small your company is, you need a plan to ensure the security of your information assets. Such a plan is called a security program, and it is framed by information security professionals. Primarily, the IT security team devises the security program, but if this is done in coordination with ethical hackers, they can provide the framework for keeping the company at the desired security level. Additionally, by assessing the risks the company faces, they can decide how to mitigate them and plan how to keep the program and security practices up to date.

To summarize...

Many white-hat hackers, gray-hat hackers, and reformed black-hat hackers have made significant contributions to the advancement of technology and the internet. In truth, hackers are almost in the same situation as motorcycle enthusiasts, in that the existence of a few motorcycle gangs with real criminal operations tarnishes the image of the entire subculture. You don't need to go out and hug the next hacker you meet, but it might be worth remembering that the word hacker doesn't equal criminal, at least not all the time. Our online ecosystem is made safer, better, and more robust by ethical hackers. As Keren Elazari, an ethical hacker herself, put it: "We need hackers, and in fact, they just might be the immune system for the information age. Sometimes they make us sick, but they also find those hidden threats in our world, and they make us fix it."

3 cybersecurity lessons for e-commerce website administrators
Hackers steal bitcoins worth $41M from Binance exchange in a single go!
A security issue in the net/http library of the Go language affects all versions and all components of Kubernetes

How is Artificial Intelligence changing the mobile developer role?
Bhagyashree R
15 Oct 2018
10 min read
Last year at Google I/O, Sundar Pichai, the CEO of Google, said: "We are moving from a mobile-first world to an AI-first world." Is this only applicable to Google? Not really. In the recent past, we have seen several advancements in Artificial Intelligence and, in parallel, a plethora of intelligent apps coming onto the market. These advancements are enabling developers to take their apps to the next level by integrating recommendation services, image recognition, speech recognition, voice translation, and many more cool capabilities.

Artificial Intelligence is becoming a potent tool for mobile developers to experiment and innovate with. The Artificial Intelligence components that are integral to mobile experiences, such as voice-based assistants and location-based services, increasingly require mobile developers to have a basic understanding of Artificial Intelligence to be effective. Of course, you don't have to be an Artificial Intelligence expert to include intelligent components in your app. But you should definitely understand something about what you're building into your app and why. After all, AI in mobile is not just about calling an API, is it? There's more to it, and in this article we will explore how Artificial Intelligence will shape the mobile developer role in the immediate future.

Read also: AI on mobile: How AI is taking over the mobile devices marketspace

What is changing in the mobile developer role?
Focus shifting to data
With Artificial Intelligence becoming more and more accessible, intelligent apps are becoming the new norm for businesses. Artificial Intelligence strengthens the relationship between brands and customers, inspiring developers to build smart apps that increase user retention. This also means that developers have to direct their focus to data. They have to answer questions like: How will the data be collected? How will the data be fed to the machines, and how often will new input be needed? When nearly 1 in 4 people abandon an app after its first use, as a mobile app developer you need to rethink how you drive in-app personalization and engagement.

Explore a "humanized" way of user-app interaction
With so many assistants such as Siri and Google Assistant coming onto the market, we can see that "humanizing" the interaction between the user and the app is becoming mainstream. "Humanizing" is the process by which the app becomes relatable to the user, and the more effectively it is done, the more the end user will interact with the app. Users now want easy navigation and search, and Artificial Intelligence fits perfectly into this scenario. The advances in technologies like text-to-speech, speech-to-text, Natural Language Processing, and cloud services in general have contributed to the mass adoption of these types of interfaces.

Companies are increasingly expecting mobile developers to be comfortable working with AI functionalities
Artificial Intelligence is the future. Companies are now expecting their mobile developers to know how to handle the huge amount of data generated every day and how to use it.
Here's an example of what Google wants its engineers to do: "We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day." This open-ended requirement list shows that now is the right time to learn and embrace Artificial Intelligence.

What skills do you need to build intelligent apps?
Ideally, data scientists are the ones who conceptualize mathematical models, and machine learning engineers are the ones who translate them into code and train the models. But when you are working in a resource-tight environment, for example in a start-up, you will be responsible for doing the end-to-end job. It is not as scary as it sounds, because you have several resources to get started with!

Taking your first steps with machine learning as a service
Learning anything starts with motivating yourself. Diving directly into the maths and coding of machine learning might exhaust and bore you. That's why it's a good idea to know what the end goal of your learning process is going to be and what types of solutions are possible using machine learning. There are many products available that you can try to get started quickly, such as Google Cloud AutoML (Beta), Firebase ML Kit (Beta), and the Fritz Mobile SDK, among others.

Read also: Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

Getting your hands dirty
After this "warm-up", the next step involves creating and training your own model. This is where you'll be introduced to TensorFlow Lite, which is going to be your best friend throughout your journey as a machine learning mobile developer. There are many other machine learning tools coming onto the market that you can make use of; these tools make building AI into mobile apps easier. For instance, you can use Dialogflow, a Natural Language Understanding (NLU) platform that makes it easy for developers to design and integrate conversational user interfaces into mobile apps, web applications, devices, and bots. You can then integrate it with Alexa, Cortana, Facebook Messenger, and the other platforms your users are on.

Read also: 7 Artificial Intelligence tools mobile developers need to know

For practice, you can leverage an excellent codelab by Google, TensorFlow For Poets, which guides you through creating and training a custom image classification model. Through this codelab you will learn the basics of data collection, model optimization, and other key components involved in creating your own model. The codelab is divided into two parts: the first part covers creating and training the model, and the second part focuses on TensorFlow Lite, the mobile version of TensorFlow that allows you to run the same model on a mobile device.
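To give you a feel for that last step, here is a minimal sketch of converting a trained Keras model to the TensorFlow Lite format so it can ship inside a mobile app. It uses the tf.lite converter API from recent TensorFlow releases; the toy model is a stand-in for whatever classifier you actually trained.

import tensorflow as tf

# Stand-in model: in practice this would be the image classifier
# you trained on your own labeled data.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(224, 224, 3)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 flower classes
])

# Convert the model to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink it for mobile
tflite_model = converter.convert()

# The resulting .tflite file is what gets bundled into the app.
with open("classifier.tflite", "wb") as f:
    f.write(tflite_model)

The on-device interpreter then loads classifier.tflite and runs inference locally, which is exactly the hand-off the codelab walks you through.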
Mathematics is the foundation of machine learning
Love it or hate it, machine learning and Artificial Intelligence are built on mathematical principles like calculus, linear algebra, probability, statistics, and optimization. You need to learn some essential foundational concepts and the notation used to express them. There are many reasons why learning mathematics for machine learning is important. It will help you in the process of selecting the right algorithm, which includes giving consideration to accuracy, training time, model complexity, the number of parameters, and the number of features. Maths is also needed when choosing parameter settings and validation strategies, and when identifying underfitting and overfitting by understanding the bias-variance tradeoff.

Read also: Bias-Variance tradeoff: How to choose between bias and variance for your machine learning model [Tutorial]
Read also: What is Statistical Analysis and why does it matter?

What are the key aspects of Artificial Intelligence for mobile to keep in mind?
Understanding the problem
Your number one priority should be the user problem you are trying to solve. Instead of randomly integrating a machine learning model into an application, developers should understand how the model applies to the particular application or use case. This is important because you might end up building a great machine learning model with an excellent accuracy rate, but if it does not solve any problem, it will end up being redundant. You must also understand that while there are many business problems that require machine learning approaches, not all of them do; most business problems can be solved through simple analytics or a baseline approach.

Data is your best friend
Machine learning is dependent on data; the data that you use, and how you use it, will define the success of your machine learning model. You can also make use of the thousands of open source datasets available online. Google recently launched a tool for dataset search named Google Dataset Search, which will make it easier for you to find the right dataset for your problem. Typically, there's no shortage of data; however, the abundance of data does not mean that the data is clean, reliable, or can be used as intended. Data cleanliness is a huge issue. For example, a typical company will have multiple customer records for a single individual, all of which differ slightly. If the data isn't clean, it isn't reliable. The bottom line is, it's bad practice to just grab the data and use it without considering its origin.

Read also: Best Machine Learning Datasets for beginners

Decide which model to choose
A machine learning algorithm is trained, and the artifact it creates after the training process is called the machine learning model. An ML model is used to find patterns in data without the developer having to explicitly program those patterns. We cannot look through such a huge amount of data and understand the patterns ourselves. Think of the model as your helper, one that will look through all those terabytes of data and extract knowledge and insights from it. You have two choices here: either create your own model or use a pre-built one. While there are several pre-built models available, your business-specific use cases may require specialized models to yield the desired results. These off-the-shelf models may also need some fine-tuning or modification to deliver the value the app is intended to provide.

Read also: 10 machine learning algorithms every engineer needs to know

Thinking about resource utilization is important
Artificial Intelligence-powered apps, and apps in general, should be developed with resource utilization in mind. Though companies are working towards improving mobile hardware, it is currently not on par with what we can accomplish with GPU clusters in the cloud.
Therefore, developers need to consider how the models they intend to use will affect resources, including battery power and memory usage. In terms of computational resources, inferencing (making predictions) is less costly than training. Inferencing on the device means that the model needs to be loaded into RAM, and it also requires significant computational time on the GPU or CPU. In scenarios that involve continuous inferencing, such as audio and image data, which can chew up bandwidth quickly, on-device inferencing is a good choice.

Learning never stops
Maintenance is important, and to do it you need to establish a feedback loop and have a process and culture of continuous evaluation and improvement. A change in consumer behavior or a market trend can have a negative impact on the model. Eventually, something will break or no longer work as intended, which is another reason why developers need to understand the basics of what it is they're adding to an app. You need to have some knowledge of how the Artificial Intelligence component you just put together is working, or how it could be made to run faster.

Wrapping up
Before falling for the Artificial Intelligence and machine learning hype, it's important to understand and analyze the problem you are trying to solve. You should examine whether applying machine learning can improve the quality of the service, and decide whether this improvement justifies the effort of deploying a machine learning model. If you just want a simple API endpoint and don't want to dedicate much time to deploying a model, cloud-based web services are the best option for you. Tools like ML Kit for Firebase look promising and seem like a good choice for startups or developers just starting out. TensorFlow Lite and Core ML are good options if you have mobile developers on your team or if you're willing to get your hands a little dirty. Artificial Intelligence is influencing the app development process by providing a data-driven approach to solving user problems. It wouldn't be surprising if, in the near future, Artificial Intelligence becomes a defining factor in app developers' expertise and creativity.

10 useful Google Cloud Artificial Intelligence services for your next machine learning project [Tutorial]
How Artificial Intelligence is going to transform the Data Center
How Serverless computing is making Artificial Intelligence development easier

A five-level learning roadmap for Functional Programmers
Sugandha Lahoti
12 Apr 2019
4 min read
The following guide serves as an excellent learning roadmap for functional programming. It can be used to track your level of knowledge regarding functional programming. This guide was developed for the Fantasyland Institute of Learning for the LambdaConf conference. It was designed for statically-typed functional programming languages that implement category theory.

This post is extracted from the book Hands-On Functional Programming with TypeScript by Remo H. Jansen. In this book, you will understand the pros, cons, and core principles of functional programming in TypeScript.

This roadmap talks about five levels of difficulty: Beginner, Advanced Beginner, Intermediate, Proficient, and Expert. Languages such as Haskell support category theory natively, but we can take advantage of category theory in TypeScript by implementing it or by using third-party libraries. Not all the items in the list are 100% applicable to TypeScript due to language differences, but most of them are.

Beginner
To reach the beginner level, you will need to master the following concepts and skills.
Concepts: Immutable data; Second-order functions; Constructing and destructuring; Function composition; First-class functions and lambdas.
Skills: Use second-order functions (map, filter, fold) on immutable data structures; Destructure values to access their components; Use data types to represent optionality; Read basic type signatures; Pass lambdas to second-order functions.
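These beginner-level skills are largely language-agnostic. As a rough illustration (sketched here in Python for brevity, even though the book itself works in TypeScript), they look like this:

from functools import reduce

# Immutable data: a tuple cannot be modified in place.
prices = (120, 45, 990, 305)

# Second-order functions take other functions as arguments.
discounted = tuple(map(lambda p: p * 0.9, prices))        # map
affordable = tuple(filter(lambda p: p < 500, discounted)) # filter
total = reduce(lambda acc, p: acc + p, affordable, 0)     # fold

# Function composition: build a new function out of two others.
def compose(f, g):
    return lambda x: f(g(x))

half_then_round = compose(round, lambda x: x / 2)

print(total, half_then_round(33.4))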
Advanced beginner
To reach the advanced beginner level, you will need to master the following concepts and skills.
Concepts: Algebraic data types; Pattern matching; Parametric polymorphism; General recursion; Type classes, instances, and laws; Lower-order abstractions (equal, semigroup, monoid, and so on); Referential transparency and totality; Higher-order functions; Partial application, currying, and point-free style.
Skills: Solve problems without nulls, exceptions, or type casts; Process and transform recursive data structures using recursion; Use functional programming in the small; Write basic monadic code for a concrete monad; Create type class instances for custom data types; Model a business domain with abstract data types (ADTs); Write functions that take and return functions; Reliably identify and isolate pure code from impure code; Avoid introducing unnecessary lambdas and named parameters.

Intermediate
To reach the intermediate level, you will need to master the following concepts and skills.
Concepts: Generalized algebraic data types; Higher-kinded types; Rank-N types; Folds and unfolds; Higher-order abstractions (category, functor, monad); Basic optics; Implement efficient persistent data structures; Existential types; Embedded DSLs using combinators.
Skills: Implement large functional programming applications; Test code using generators and properties; Write imperative code in a purely functional way through monads; Use popular purely functional libraries to solve business problems; Separate decisions from effects; Write a simple custom lawful monad; Write production medium-sized projects; Use lenses and prisms to manipulate data; Simplify types by hiding irrelevant data with existentials.

Proficient
To reach the proficient level, you will need to master the following concepts and skills.
Concepts: Codata; (Co)recursion schemes; Advanced optics; Dual abstractions (comonad); Monad transformers; Free monads and extensible effects; Functional architecture; Advanced functors (exponential, profunctor, contravariant); Embedded domain-specific languages (DSLs) using generalized algebraic datatypes (GADTs); Advanced monads (continuation, logic); Type families, functional dependencies (FDs).
Skills: Design a minimally powerful monad transformer stack; Write concurrent and streaming programs; Use purely functional mocking in tests; Use type classes to modularly model different effects; Recognize type patterns and abstract over them; Use functional libraries in novel ways; Use optics to manipulate state; Write custom lawful monad transformers; Use free monads/extensible effects to separate concerns; Encode invariants at the type level; Effectively use FDs/type families to create safer code.

Expert
To reach the expert level, you will need to master the following concepts and skills.
Concepts: High performance; Kind polymorphism; Generic programming; Type-level programming; Dependent types, singleton types; Category theory; Graph reduction; Higher-order abstract syntax; Compiler design for functional languages; Profunctor optics.
Skills: Design a generic, lawful library with broad appeal; Prove properties manually using equational reasoning; Design and implement a new functional programming language; Create novel abstractions with laws; Write distributed systems with certain guarantees; Use proof systems to formally prove properties of code; Create libraries that do not permit invalid states; Use dependent typing to prove more properties at compile time; Understand deep relationships between different concepts; Profile, debug, and optimize purely functional code with minimal sacrifices.

Summary
This guide should be a good resource to guide you in your future functional-programming learning efforts. Read more on this in the book Hands-On Functional Programming with TypeScript.

What makes functional programming a viable choice for artificial intelligence projects?
Why functional programming in Python matters: Interview with best selling author, Steven Lott
Introducing Coconut for making functional programming in Python simpler

Do you need to be a polyglot to be a great programmer?
Amit Kothari
19 Jan 2018
6 min read
Recently, I was talking to someone who has been working as a developer for over a year. They asked me which programming languages they should learn in order to improve their employability and to grow as a developer. This made me think: do we really need to be polyglots to be good programmers?

A polyglot programmer is someone who can write code in multiple languages. Most of us already use multiple programming languages. Someone working on web apps uses HTML, CSS, and JavaScript. Similarly, backend services might be written in a specific language, but the developer might still use SQL for database queries or YAML for configuration files. As developers, we like to try and learn new programming languages and frameworks. We do this for many reasons: to solve specific problems, to find a better alternative, or simply to keep ourselves up to date with what's new and trending.

The benefits of being a polyglot programmer
There are obvious benefits to being a polyglot developer.
It increases your employability. Being proficient in multiple languages looks very good on your resume. It shows your experience as a developer and also indicates that you are flexible, able to work with different tools in different situations.
It provides you with more opportunities and greater variety. Whether you're looking for a new job or growing in your current role, if you are able to write code in multiple languages, many more opportunities open up to you. When you're a polyglot, you become much more in control of your career destiny!
Developer happiness. Many developers simply feel more productive when they are using a specific language. But to know what you enjoy, you need to be open-minded and willing to explore lots of different languages. Polyglots get to try out different syntaxes and get to know different communities, and this exploration is surely one of the best things about being a developer.
Along with all these benefits, working with different languages gives us a chance to learn about different programming paradigms. We can learn different ways of solving a problem and different ways of thinking. We can then bring all this learning together to write better code.

The challenges
While there are many benefits to learning and knowing multiple programming languages, this constant learning comes with its own challenges.
Lack of proficiency. In his book "JavaScript: The Good Parts," Douglas Crockford talks about the good and bad parts of JavaScript. Similarly, other languages also have certain aspects that should be approached with caution. If you frequently change programming languages without spending enough time learning any one of them properly, you might run into issues around things like performance and security.
Maintenance becomes a nightmare. Having too many languages in a tech stack will likely become a maintenance nightmare for both the development and the operations side. This will take you somewhere that is the opposite of agile and efficient.
Developer fatigue. Constantly learning and adapting to new languages and technology may result in developer fatigue. It's a fact of tech today that developers feel stressed and under pressure, and this is bound to affect not only their productivity but their health as well.
From an organization's perspective, there are tradeoffs when adding a new language to its tech stack. There may be operational costs and costs to up-skill the team. On the upside, code quality and productivity may improve.
Companies that avoid investing in up-skilling their teams and upgrading their tech stacks may end up with systems that are difficult to maintain. Even small changes may take weeks to deliver, and finding skilled developers can become challenging. On the other hand, constantly changing programming languages and technology may result in features not getting delivered for months, or in some cases years. There are many cases where a project started in one programming language and, after years of development, the team decided to rewrite the whole system in a newer language or framework. While architectures like microservices solve some of these problems by allowing us to write different parts of a given system in different languages without needing to rewrite the whole system, it is important to understand the cost of introducing a new language. The benefits we get out of it should always outweigh the cost.

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." - Martin Fowler

How to become a better developer
Learning different programming languages is one way to grow as a developer, but there are other things we can do to improve.
Write clean code. As developers, we spend more time reading code than writing it. Writing code that is easy to read and understand is one of the key traits of a good developer.
Write easy-to-maintain code. A good programmer puts in extra effort to make sure that the code is easy to maintain. Use design principles and test-driven development to make sure that the code can be modified with ease, and with confidence that changes are not going to affect existing functionality.
Understand the problem. A good developer will try to understand the problem and then pick the appropriate tool to solve it, instead of starting with a technology just because it's trending.
There are lots of obvious advantages to learning multiple programming languages. Not only does it look good on a resume, it also helps you improve as a developer. However, it is just as important to understand the business problems you're trying to solve. Whether you're a polyglot or not, the most important thing any developer can do is focus on the problems instead of the tools.
I hope you enjoyed this post; please let us know what you think! Are you a polyglot? Do you think trying to become one is important today?
Amit Kothari is a full stack software developer based in Melbourne, Australia. He has 10+ years' experience in designing and implementing software, mainly in Java/JEE. His recent experience is in building web applications using JavaScript frameworks like React and AngularJS, and backend microservices/REST APIs in Java. He is passionate about lean software development and continuous delivery.

Why is triple-A game development unsustainable?
Raka Mahesa
12 Jun 2017
5 min read
The video game industry is huge, bringing in over $91 billion in revenue during 2016 alone. And it's not only big, it's also growing, with a projected yearly growth rate of 3.6%. So it was quite surprising when Cliff Bleszinski, a prominent figure in the game industry, remarked that the business of modern triple-A games is unsustainable.

While the statement may sound "click-bait-y", he's not the only person from the industry to voice concern about the business model. Back in 2012, a game director at Ubisoft, one of the biggest game publishers in the world, made a similar remark about how the development of triple-A games could be harmful. Seeing another person voice a similar concern, maybe there's some truth to what they are saying. And if it is true, what makes triple-A game development unsustainable? Let's take a look.

Before we go further, let's first clear up one thing: what are triple-A games? Triple-A games (or AAA games) are the tier of video games with the highest development budgets. It's not a formal classification, so there isn't an exact budget threshold that must be passed for a game to be categorized as triple-A. Additionally, even though this classification makes it seem like triple-A games are super premium games of the highest quality, in reality most games you find in a video game store are triple-A games, sold at $60.

So that's the triple-A tier, but what other tiers of video games are there, and where are they sold? Well, there are indie games and double-A (AA) games. Indie games are made by small teams with small budgets and are sold at prices of $20 and lower. Double-A games are made with bigger budgets than indie games and sold at a higher price of $40. Both tiers are sold digitally at digital storefronts like Steam and usually are not sold on physical media like DVDs. Do keep in mind that this classification is for PC and console video games and isn't really applicable to mobile games.

It is also important to note that this classification doesn't determine which game has the better quality or the better sales. After all, Minecraft is an indie game with a really small initial development team that has sold over 100 million copies. In comparison, Grand Theft Auto V, a triple-A game with a $250 million development budget, has "only" sold 75 million copies.

And yes, you read that right: Grand Theft Auto V had a development cost of $250 million, with half of that cost being marketing. Most triple-A games don't have as big a budget, but they're still pretty expensive. Call of Duty: Modern Warfare 2 had a development cost of $200 million, The Witcher 3 had a development cost of $80 million, and the production cost (which means marketing cost is excluded) of Final Fantasy XIII was $65 million.

So, with that kind of budget, how do those games fare? Well, fortunately for Grand Theft Auto V, it made $1 billion in sales in just three days after release, making it the fastest-selling entertainment product of all time. Final Fantasy XIII has a different story, though. Unlike Grand Theft Auto V with its 75 million sales, the lifetime sales number of Final Fantasy XIII is only 6.6 million, which translates to roughly $350 million in sales. That may sound comfortable next to its $65 million production cost, but remember that the production figure excludes marketing, and that retailers and platform holders take a cut of every copy sold. And this is why triple-A game development is unsustainable.
The development costs of these games are getting so high that the only way for a developer to turn a profit is to sell millions and millions of copies. Meanwhile, more video games are released every day, making it harder for each game to find sales. Grand Theft Auto V is the exception and not the rule here, since there aren't many video games that can even reach 10 million in sales.

With that kind of budget, the development of every triple-A game has become very risky. After all, if a game doesn't sell well, the developer could lose tens of millions of dollars, enough to bankrupt a small company that doesn't have much funding. And even for a big company with plenty of funding, how many projects can it fail on before it's forced to shut down?

And with risky projects comes risk mitigation. With so much money at stake, developers are forced to play it safe and only work on games with mainstream appeal. Oh, the science fiction theme doesn't have an audience as big as the military theme? Let's only make games with a military theme, then. But if all game developers think the same way, the video game market could end up with only a handful of genres, with all those developers competing for the same audience.

It's a vicious cycle, really. High-budget games need a high volume of sales to recoup their production costs. But for a game to achieve a high volume of sales, it needs a high development budget to compete with the other games on the market.

So, if triple-A game development is truly unsustainable, would that mean high-budget games will disappear from the market in the future? Well, it's possible. But as we've seen with Minecraft, you don't need hundreds of millions of dollars in development budget to create a good game that sells well. So even though the number of video games with high budgets may diminish, high-quality video games will still exist.

About the author
Raka Mahesa is a game developer at Chocoarts: http://chocoarts.com/, who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

"Microservices require a high-level vision to shape the direction of the system in the long term," says Jaime Buelta
Bhagyashree R
25 Nov 2019
9 min read
Looking back four or five years, the sentiment around microservices architecture has changed quite a bit. First came the hype phase, when, after seeing the success stories of companies like Netflix, Amazon, and Gilt.com, developers thought that microservices were the de facto standard of application development. Cut to now: we have realized that microservices are yet another architectural style which, when applied to the right problem in the right way, works amazingly well, but which comes with its own pros and cons. To get an understanding of what exactly microservices are, when we should use them, and when not to use them, we sat down with Jaime Buelta, the author of Hands-On Docker for Microservices with Python. Along with explaining microservices and their benefits, Buelta shared some best practices developers should keep in mind if they decide to migrate their monoliths to microservices.

Further Learning: Before jumping to microservices, Buelta recommends building solid foundations in general software architecture and web services. "They'll be very useful when dealing with microservices and afterward," he says. Buelta's book, Hands-On Docker for Microservices with Python, aims to guide you on your journey of building microservices. In this book, you'll learn how to structure big systems, encapsulate them using Docker, and deploy them using Kubernetes.

Microservices: The benefits and risks
A traditional monolith application encloses all its capabilities in a single unit. In the microservices architecture, by contrast, the application is divided into smaller standalone services that are independently deployable, upgradeable, and replaceable. Each microservice is built for a single business purpose and communicates with other microservices through lightweight mechanisms. Buelta explains: "Microservice architecture is a way of structuring a system, where several independent services communicate with each other in a well-defined way (typically through web RESTful services). The key element is that each microservice can be updated and deployed independently."

Microservices architecture doesn't just dictate how you build your application, but also how your team is organized. "Though [it] is normally described in terms of the involved technologies, it's also an organizational structure. Each independent team can take full ownership of a microservice. This allows organizations to grow without developers clashing with each other," he adds.

One of the key benefits of microservices is that they enable innovation without much impact on the system as a whole. With microservices, you can do horizontal scaling, have strong module boundaries, use diverse technologies, and develop in parallel.

Coming to the risks associated with microservices, Buelta said: "The main risk in its adoption, especially when coming from a monolith, is to make a design where the services are not truly independent. This generates an overhead and complexity increase in inter-service communication." He adds: "Microservices require a high-level vision to shape the direction of the system in the long term. My recommendation to organizations moving towards this kind of structure is to put someone in charge of the 'big picture'. You don't want to lose sight of the forest for the trees."
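To ground Buelta's definition, here is a minimal sketch of a single-purpose service with a well-defined RESTful interface. Flask is my choice here purely for brevity (the book builds its examples in Python); the endpoint and data are illustrative only.

from flask import Flask, jsonify

app = Flask(__name__)

# In a real system this would live in the service's own data store;
# each microservice owns its data and hides it behind the API.
_PRODUCTS = {1: {"id": 1, "name": "Widget", "price": 9.99}}

@app.route("/products/<int:product_id>", methods=["GET"])
def get_product(product_id):
    """The service's entire contract: look up one product by id."""
    product = _PRODUCTS.get(product_id)
    if product is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(product)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

Because the contract is just HTTP and JSON, the service can be rewritten, scaled, or redeployed on its own, which is exactly the independence Buelta highlights.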
Migrating from monoliths to microservices
Martin Fowler, a renowned author and software consultant, advises going for a "monolith-first" approach. This is because using the microservices architecture from the get-go can be risky, as it is mostly suitable for large systems and large teams. Buelta shared his perspective: "The main metric to start thinking about this kind of migration is raw team size. For small teams, it is not worth it, as developers understand everything that is going on and can ask the person sitting right across the room about any question. A monolith works great in these situations, and that's why virtually every system starts like this." This echoes the "two-pizza team" rule at Amazon, which says that if the team responsible for one microservice couldn't be fed with two pizzas, it is too big.

"As business and teams grow, they need better coordination. Developers start stepping on each other's toes often. Knowing the intent of a particular piece of code is trickier. Migrating then makes sense to give some separation of function and clarity. Each team can set its own objectives and work mostly on their own, presenting a clear external interface. But for this to make sense, there should be a critical mass of developers," he adds.

Best practices to follow when migrating to microservices
When asked what best practices developers can follow when migrating to microservices, Buelta said: "The key to a successful microservice architecture is that each service is as independent as possible."

A question that arises here is: how can you make the services independent? "The best way to discover the interdependence of a system is to think in terms of new features: 'If there's a new feature, can it be implemented by changing a single service? What kind of features are the ones that will require coordination of several microservices? Are they common requests, or are they rare?' No design will be perfect, but at least it will help make informed decisions," explains Buelta.

Buelta advises doing it right instead of doing it twice: "Once the migration is done, making changes on the boundaries of the microservices is difficult. It's worth investing time in the initial phase of the project."

Migrating from one architectural pattern to another is a big change. We asked what challenges he and his team faced during the process, to which he said: "The most difficult challenge is actually people. They tend to be underestimated, but moving to microservices actually changes the way people work. Not an easy task!"

He adds: "I've faced some of these problems, like having to give enough training and support for developers. Especially, explaining the rationale behind some of the changes. This helps developers understand the whys of the change they find so frustrating. For example, a common complaint moving from a monolith is having to coordinate deployments that used to be a single monolith release. This needs more thought to ensure backward compatibility and minimize risk. This sometimes is not immediately obvious, and needs to be explained."

On choosing Docker, Kubernetes, and Python as his technology stack
We asked Buelta what technologies he prefers for implementing microservices. For the language, his answer was simple: "Python is a very natural choice for me. It's my favorite programming language!" He adds: "It's very well suited for the task. Not only is it readable and easy to use, but it also has ample support for web development.
On top of that, it has a vibrant ecosystem of third-party modules for any conceivable demand. These demands include connecting to other systems like databases, external APIs, etc."

Docker is often touted as one of the most important tools when it comes to microservices, and Buelta explained why: "Docker allows you to encapsulate and replicate the application in a convenient standard package. This reduces uncertainty and environment complexity. It greatly simplifies the move from development to production for applications. It also helps in reducing hardware utilization. You can fit multiple containers with different environments, even different operating systems, in the same physical box or virtual machine."

On Kubernetes, he said: "Finally, Kubernetes allows us to deploy multiple Docker containers working in a coordinated fashion. It forces you to think in a clustered way, keeping the production environment in mind. It also allows us to define the cluster using code, so new deployments or configuration changes are defined in files. All this enables techniques like GitOps, which I described in the book, storing the full configuration in source control. This makes any change specific and reversible, as they are regular git merges. It also makes recovering or duplicating infrastructure from scratch easy."

"There is a bit of a learning curve involved in Docker and Kubernetes, but it's totally worth it. Both are very powerful tools. And they encourage you to work in a way that's suited for avoiding downfalls in production," he shared.

On multilingual microservices
Microservices allow you to use diverse technologies, as each microservice is ideally handled by an independent team. Buelta shared his opinion regarding multilingual microservices: "Multilingual microservices are great! That's one of their greatest advantages. A typical example of this is migrating legacy code written in one language to another. A microservice can replace another that exposes the same external interface, all while being completely different internally. I've done migrations from old PHP apps, replacing them with Python apps, for example." He adds: "As an organization, working with two or more frameworks at the same time can help you understand both of them better, and when to use one or the other."

Though using multilingual microservices is a great advantage, they can also increase the operational overhead. Buelta advises: "A balance needs to be struck, though. It doesn't make sense to use a different tool each time and not be able to share knowledge across teams. The specific numbers may depend on company size, but in general, more than two or three should require a good explanation of why there's a new tool that needs to be introduced in the stack. Keeping tools at a reasonable level also helps to share knowledge and how to use them most effectively."

About the author
Jaime Buelta has been a professional programmer and a full-time Python developer who has been exposed to a lot of different technologies over his career. He has developed software for a variety of fields and industries, including aerospace, networking and communications, industrial SCADA systems, video game online services, and financial services. As part of these companies, he worked closely with various functional areas, such as marketing, management, sales, and game design, helping the companies achieve their goals.
He is a strong proponent of automating everything and making computers do most of the heavy lifting, so users can focus on the important stuff. He is currently living in Dublin, Ireland, and has been a regular speaker at PyCon Ireland.

Check out Buelta's book, Hands-On Docker for Microservices with Python, on PacktPub. In this book, you will learn how to build production-grade microservices and orchestrate a complex system of services using containers. Follow Jaime Buelta on Twitter: @jaimebuelta.

Microsoft launches Open Application Model (OAM) and Dapr to ease development in Kubernetes and microservices
Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]
Yuri Shkuro on Observability challenges in microservices and cloud-native applications

Uber's kepler.gl, an open source toolbox for GeoSpatial Analysis
Pravin Dhandre
28 Jun 2018
4 min read
Geographic visualization, also called geovisualization, plays a pivotal role in areas like cartography, geographic information systems, remote sensing, and global positioning systems. Uber, a peer-to-peer transportation network company headquartered in California, believes in data-driven decision making, and it keeps developing smart frameworks, like deck.gl, for exploring and visualizing advanced geospatial data at scale. Uber strives to make data web-based and shareable in real time across its teams and customers. Early this month, Uber surprised the geospatial market with its newly open-sourced toolbox, kepler.gl, a geoanalytics tool for gaining quick insights from geospatial data through intuitive visualizations.

What exactly is kepler.gl?
kepler.gl is a visualization-rich web platform, developed on top of deck.gl, a WebGL-powered data visualization library providing real-time visual analytics of millions of geolocation points. The platform provides visual exploration of geographical datasets along with spatial aggregation of all the data points collected. The platform is said to be data-agnostic, with a single interface to convert your data into insightful visualizations.

https://www.youtube.com/watch?v=i2fRN4e2s0A

The platform is very user-friendly: you can simply drag CSV or GeoJSON files and drop them into the browser to visualize the dataset intuitively. The platform offers different map layers, filtering, and aggregation, through which you can produce the final visualization in an animated format or as a video. The features are so usable that you can apply all the available metrics to your data points without much hassle. The platform also performs well: you can get insights from your spatial data in less than 10 minutes, all in a single window. Another advantage of this framework is that it does not involve any sort of coding, so non-technical users can also reap the benefits by churning valuable insights from their data points.

The platform is also equipped with some advanced, complex features, such as a 2D cartographic plane, a separate dimension for altitude, and visible heights for hexagons and grids. Users seem happy with the new height feature, which helps them detect abnormalities and anomalous patterns in an aggregated map. With the filtering menu, analysts and engineers can compare their data and take a granular look at their data points. This option also helps in reading the histogram well, so one can easily detect outliers and make the dataset more reliable. It also has a feature to add playback to time-series data points, which makes extracting useful information from real-time location systems easy.

The team at Uber looks at this toolbox with a long-term vision: they are planning to keep adding new features and enhancements to make it highly functional and a single-click visualization dashboard.
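As a quick illustration of the drag-and-drop workflow described above, here is a minimal sketch that prepares a CSV of trip points for kepler.gl using pandas. The column names and coordinates are made up; the only real requirement is latitude/longitude-style columns that kepler.gl can detect as a point layer.

import pandas as pd

# Fabricated sample data; replace with your own geolocation points.
trips = pd.DataFrame({
    "trip_id":   [1, 2, 3],
    "latitude":  [37.7749, 37.7793, 37.7680],
    "longitude": [-122.4194, -122.4193, -122.4312],
    "timestamp": ["2018-06-01 08:00", "2018-06-01 08:05", "2018-06-01 08:11"],
})

# kepler.gl picks up the lat/lng columns when the CSV is dropped
# into the browser, so a plain file is all it needs.
trips.to_csv("trips.csv", index=False)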
The team has already announced two major enhancements to the current functionality, coming in the next couple of months. They plan to add support for:
More robust exploration: There will be interlinkage between charts and maps, and support for custom charts, maps, and widgets like the renowned BI tool Tableau, which will help analytics teams unveil deeper insights.
Addition of newer geo-analytical capabilities: To support massive datasets, there will be added features for data operations such as polygon aggregation, union of data points, and operations like joining and buffering.

Companies across different verticals, such as Airbnb, Atkins Global, Cityswifter, and Mapbox, have found great value in kepler.gl's offerings and are looking to engineer their products to leverage this framework. The visualization specialists at these companies have already praised Uber for building such a simple yet fast platform with remarkable capabilities.

To get started with kepler.gl, read the documentation available on GitHub and start creating visualizations to enhance your geospatial data analysis.

Top 7 libraries for geospatial analysis
Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data
Data Visualization with ggplot2

How to succeed in the gaming industry: 10 tips
Raka Mahesa
12 Jun 2017
5 min read
The gaming industry is a crowded field. After all, it's one of those industries where you can work on something you actually love, so a lot of people are trying to get into it. And with so many rivals, being successful in the industry is a difficult thing to accomplish. Here are 10 tips to help you succeed in the gaming industry. Do note that these are general tips, so they should be applicable to you regardless of your position in the industry, whether you're an indie developer working on your own games or a programmer working for a big gaming company.

Tip 1: Be creative
The gaming industry is a creative one, so it makes perfect sense that you need to be creative to succeed. And you don't have to be an artist or a writer to apply creative thinking; there are many challenges that need creative solutions. For example, a particular system in the game may need some heavy computing, and you can come up with a creative solution where, instead of fully computing the problem, you use a simpler formula and estimate the result.

Tip 2: Be capable of receiving criticism
Video games are a passion for many people, probably including you. That's why it's easy to fall in love with your own idea, whether it's a gameplay idea like how an enemy should behave, or a technical one like how a save file should be written. Your idea might not be perfect, though, so it's important to be able to step back and see whether another person's criticism of your idea has merit. After all, that other person may be capable of seeing a flaw that you have missed.

Tip 3: Be able to see the big picture
A video game's software is full of complex, interlocking systems. Being able to see the big picture, that is, seeing how changes in one system could affect another system, is a really nice skill to have when developing a video game.

Tip 4: Keep up with technology
Technology moves at a blisteringly fast pace. Technology that is relevant today may be rendered useless tomorrow, so it is very important to keep up. Using the latest equipment may help your game project, and the newest technology may provide opportunities for your games too. For example, newer platforms like VR and AR don't have many games yet, so it's easier to gain visibility there.

Tip 5: Keep up with industry trends
It's not just technology that moves fast, but also the world. Just 10 years ago, it was unthinkable that millions of people would watch other people play games, or that mobile gaming would be bigger than console gaming. By keeping up with industry trends, we can understand the market for our games and, more importantly, understand our players' behavior.

Tip 6: Put yourself in your player's shoes
Being able to see your game from the viewpoint of your player is a really useful skill to have. For example, as a developer you may feel fine looking at a black screen while your game loads its resources, because you know the game is working fine as long as it doesn't throw an error dialog. Your player probably doesn't feel the same way, and may think the game has simply hung when it shows a black screen without a resource-loading indicator.

Tip 7: Understand your platform and your audience
This is a bit similar to the previous tip, but on a more general level. Each platform has different strengths, and the audience of each platform also has different expectations.
For example, games on mobile platforms are expected to be played in short bursts instead of hour-long sessions, so mobile gamers expect their games to automatically save progress whenever they stop playing. Understanding this behavior is really important for developing games that satisfy players.

Tip 8: Be a team player
Unless you're a one-man army, games usually are not developed alone. Since game development is a team effort, it's pretty important to get along with your teammates, whether that means dividing tasks fairly with your programmer buddy or explaining to the artist the format of the art assets your game needs.

Tip 9: Show your creation to other people
When you are deep in the process of working on your latest creation, it's sometimes hard to take a step back and assess your creation fairly. Occasionally you may even feel like your creations aren't up to scratch. Fortunately, showing your work to other people is a relatively easy way to get good and honest feedback. And if you're lucky, your new audience may just show you that your creation is actually up to standard.

Tip 10: Networking
This is probably the most generic tip ever, but that doesn't mean it's not true. In any industry, and no matter what your position is, networking is really important. If you're an indie developer, you may connect with a development partner who shares the same vision as you. Alternatively, if you're a programmer, maybe you will connect with someone who's looking for a senior developer to lead a new game project. Networking will open the door of opportunities for you.

About the author
Raka Mahesa is a game developer at Chocoarts: http://chocoarts.com/, who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

How can a data scientist get into game development?
Graham Annett
07 Aug 2017
5 min read
One of the most interesting uses for data science lies in and around the game development process. While it's not immediately obvious that data science is applicable to game development, it is increasingly becoming an enticing area, both from a user engagement perspective and as a source of data collection for deep learning and data science related tasks.

Games and data collection
With the increase in reinforcement learning oriented deep learning tasks in the past few years, the appeal of using games as a method for collecting data (somewhat in parallel to collecting data on Mechanical Turk or various other crowdsourcing platforms) has never been greater. The main idea behind data collection for these types of tasks is capturing the graphical display at a point in time and recording the user input for that image frame. From this data, it's possible to connect these inputs to some end result (such as the final score) that can later be optimized and used as an objective cost function to be minimized or maximized. With this, it's possible to collect a large corpus of user data for deep learning algorithms to initially train on, which can then be used to have the computer play against itself (something akin to this was done for AlphaGo and various other game-related reinforcement learning bots). With the incredible influx of processing power now available, it's possible for computers to play themselves thousands and millions of times, learning from their own shortcomings.

Deep learning uses
Practical uses of this type of deep learning that a data scientist may find interesting range from creating smart AI systems that are more engaging to the player, to finding transferable algorithms and data sources that can be used elsewhere. For example, many of the OpenAI algorithms are intended to be trained in one game with the hope that they will transfer to another game and still do well (albeit with new parameters and a newly learned cost function). This type of deep learning is incredibly interesting from a data science perspective because it frees the data scientist from highly optimizing for each individual game or task, and instead encourages finding commonalities and generalizable methodologies that translate across systems and games.

Technical skills
Many of the technical skills needed for creating data collection pipelines around game development are much more development oriented than a traditional data scientist may be used to, and acquiring them may require learning new skills. These skills are much broader than traditional data science roles, ranging from data collection and data pipelining from the games, to scaling deep learning training and implementing new algorithms during training. They are becoming more vital to data scientists, as the need to both provide insight and create integrations into a product is becoming an increasingly valuable skill set.

Exploring projects and tools
A data scientist may go about getting into this area by exploring projects and tools such as OpenAI's Gym and Facebook's MazeBase. These projects are very deep learning oriented, though, and may not be what a traditional data scientist thinks of when they think of game development.
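As a first taste of OpenAI's Gym, here is a minimal sketch of the interaction loop described above: observe the game state, choose an action, and record the reward. The random policy is just a placeholder for a trained agent.

import gym

# CartPole is Gym's "hello world" environment.
env = gym.make("CartPole-v1")
observation = env.reset()

episode_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # placeholder: a random policy
    # step() returns the next state, the reward for the action, and
    # whether the episode has ended: exactly the (frame, input, end
    # result) data described above.
    observation, reward, done, info = env.step(action)
    episode_reward += reward

print("Episode finished with total reward", episode_reward)
env.close()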
Data oriented/driven game design
Another approach is data oriented (or data driven) game design. While this is not a new concept by any means, it has become increasingly ubiquitous as in-app purchasing and subscription-based gaming plans have become a common theme on mobile and other gaming platforms. These types of data science tasks are not unlike normal data science projects, in that they seek to understand, from a statistical perspective, what is happening to users at specific points in a game. There is a pretty big overlap between projects like this and projects that aim to understand, for instance, when a user abandons a cart during an online order. The data for games may be oriented around when the gamer gave up on a quest, or at what point users are willing to make an in-app purchase to achieve a goal more quickly. Since these are quantifiable and objective goals, they are an incredibly good fit for traditional supervised learning tasks and can be approached with traditional supervised learning baselines and algorithms.

The end result of these tasks may include things such as making a quest or goal easier, or making an in-app purchase cheaper during some specific interval when the user would be more inclined to buy (much like offering a user a coupon after a cart is abandoned during checkout often entices the user to come back and finish the purchase).

While both of these paths are game development oriented, they differ quite a lot, in that one is much more traditionally data-analytical and the other is much more deep learning and engineering oriented. Both are highly interesting areas to explore from a professional standpoint, but data-driven game development may be somewhat limited from a hobbyist standpoint outside of Kaggle competitions (a quick search didn't turn up any previous competitions with this sort of data), since many companies would be quite hesitant to provide this sort of data if their entire business model is based around in-app purchases and recurring revenue from players.

Overall, these are both incredibly enticing areas, great avenues to pursue, and they provide plenty of interesting problems that you may not encounter outside of game development.

About the author
Graham Annett is an NLP engineer at Kip (Kipthis.com). He has been interested in deep learning for a bit over a year and has worked with and contributed to Keras (https://github.com/fchollet/keras). He can be found on GitHub at http://github.com/grahamannett or via http://grahamannett.me.
What your organisation needs to know about GDPR

Aaron Lazar
16 Apr 2018
5 min read
GDPR is an acronym that has been doing the rounds for a couple of years now. It has become even more visible in the last few weeks, thanks to the Facebook and Cambridge Analytica data hijacking scandal. And with the deadline looming - 25 May 2018 - every organisation on the planet needs to make sure they're on top of things. But what is GDPR exactly? And how is it going to affect you?

What is GDPR?

Before April 2016, a data protection directive from 1995 was in place. This governed all organisations that collected, stored and processed data. The directive became outdated as technology rapidly evolved, which meant a revised framework was needed. In April 2016, the European Union drew up the General Data Protection Regulation, created specifically to protect the personal data and privacy of European citizens. It's important to note at this point that the regulation doesn't just apply to EU organisations - it applies to anyone who deals with data on EU citizens.

A relatively new genre of crime involving stealing data has cropped up over the past decade, and data is so powerful that its misuse could be devastating. GDPR aims to set a new benchmark for the protection of consumer data rights by making organisations more accountable. Governed by GDPR, organisations will now be responsible for guarding every quantum of information that is connected to an individual, including IP addresses and web cookies!

Read more: Why GDPR is good for everyone.

Why should organisations bother with GDPR?

In December 2017, RSA, the security company named after one of the first public-key cryptosystems, surveyed 7,500 consumers in France, Italy, Germany, the UK and the US, and the results were interesting. When asked what their main concern was, respondents said that lost passwords, banking information, passports and other important documents worried them most. More interestingly, over 60% of respondents said that in the event of a breach they would blame the organisation that lost their data rather than the hacker.

If you work for or own a company that deals with the data of EU citizens, you'll probably have GDPR on your radar. If you don't comply, you'll face a hefty fine - more on that below.

What kind of data are we talking about?

The GDPR aims to protect personal data in a broad sense, including:

- Identity information such as name and physical address
- ID numbers
- IP addresses, cookies and RFID tags
- Genetic data and any data related to health
- Biometric data such as fingerprints and retina scans
- Racial or ethnic data
- Political opinions
- Sexual orientation

Who must comply with GDPR?

You'll be governed by GDPR if:

- You're a company located in the EU
- You're not located in the EU but you still process data of EU citizens
- You have more than 250 employees
- You have fewer than 250 employees but process data that could impact the rights and freedom of EU citizens

When does GDPR come into force?

In case you missed it in the first paragraph, GDPR comes into effect on 25 May 2018. If you're not ready yet, now is the time to scramble to get things right and make sure you comply with GDPR regulations.

What if you don't make the date?

Unlike an invitation to a birthday party, if you miss the date to comply with GDPR, you're likely to be fined to the tune of €20 million or 4% of the worldwide turnover of your company, whichever is greater. A lower tier of €10 million or 2% of worldwide turnover applies to failures such as not reporting a data breach, not incorporating privacy by design, and not ensuring that data protection is applied at the initial stage of a project. It also covers the failure to hire a Data Protection Officer/Chief Data Officer with professional experience and knowledge of data protection laws proportionate to what the organisation carries out.
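To make the two fine tiers concrete, here's a quick back-of-the-envelope sketch in Python; the max_gdpr_fine helper is purely illustrative and assumes the "whichever is greater" reading of the tiers described above:

# Illustrative only: the two GDPR fine tiers described above, in euros,
# taking the greater of the fixed cap and the turnover percentage.
def max_gdpr_fine(annual_turnover, severe=True):
    if severe:
        return max(20_000_000, 0.04 * annual_turnover)  # higher tier
    return max(10_000_000, 0.02 * annual_turnover)      # lower tier

print(max_gdpr_fine(2_000_000_000))            # 80000000.0 - the 4% dominates
print(max_gdpr_fine(5_000_000, severe=False))  # 10000000 - the fixed floor dominates

So a company with €2 billion in worldwide turnover faces exposure of up to €80 million, while even a small firm still faces the €20 million or €10 million floor.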
If it makes you feel any better, you're not the only one: a report from Ovum states that more than 50% of companies feel they're likely to be fined for non-compliance.

How do you prepare for GDPR?

Well, here are a few honest steps you can take to ensure successful compliance:

- Prepare to shell out between $1 million and $10 million to meet GDPR requirements
- Hire a DPO or a CDO capable of handling all your data policies and migration
- Fully understand GDPR and its requirements
- Perform a risk assessment: understand what kind of data you store and what implications it might have
- Strategize to mitigate that risk
- Review or create your data protection plan
- Plan for a 72-hour incident response system
- Implement internal plans and policies to ensure employees follow them

For the third time, then - time is running out! It's imperative that you ensure your organisation complies with GDPR before 25 May 2018. We'll follow up with some more thoughts to help you make the shift, as well as give you more insight into this game-changing regulation. If you own or are part of an organisation that has already made the move to comply with GDPR, please share some tips in the comments section below to help others still in the midst of the transition.