Tech Guides - Programming

81 Articles
How to become an exceptional Performance Engineer

Guest Contributor
14 Dec 2019
8 min read
Whenever I think of performance engineering, I am reminded of Amazon CEO Jeff Bezos' statement, "Focusing on the customer makes a company more resilient." Any company that follows this customer-focused approach has a performance engineering function, though in varying capacity and form. The connection is simple: more and more businesses are becoming web-based and interacting with their customers digitally. If they are to provide an exceptional customer experience, they have to build resilient, stable, user-centric, and high-performing web systems and applications. And to do that, they need performance engineering.

What is Performance Engineering?

Let me explain performance engineering with an example. Suppose your team is building an online shopping portal. The developers will build a system that allows people to access products and buy them. They will ensure that the entire transaction is smooth, uncomplicated for the user, and quick to complete.

Now imagine that to promote the portal, you run a flash sale, and 1,000 users come onto the platform and start doing transactions simultaneously. Under this load, your system starts performing slower, a lot of transactions fail, and your users are dejected. This directly affects your brand image, customer loyalty, and revenue. How about we fix this before such a situation occurs? That is exactly what performance engineering entails.

A performance engineer takes such scenarios into account, conducts load tests, and checks the system's performance during the development phase itself. Load tests check the behavior of your system in particular situations. A "load" is a possible scenario that can affect the system, for instance, sale offers or peak times. If the system handles the load, the engineer checks whether it is scalable. If the system cannot handle it, they analyze the results, find the possible bottleneck by checking the code, and try to rectify it. So, for the above example, a performance engineer would have tested the system for 100 transactions at a time, then 500, then 1,000, and may even have gone up to one hundred thousand.
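To make this concrete, here is a minimal sketch of such a stepped load test in Python, using only the standard library. The target URL, the GET-only transaction, and the thread-per-user model are illustrative assumptions; a real engagement would use a dedicated tool such as JMeter or LoadRunner, discussed later in this article.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8000/checkout"  # hypothetical endpoint under test

def one_transaction(_):
    """Issue a single request and time it; return (latency_seconds, succeeded)."""
    start = time.perf_counter()
    try:
        with urlopen(TARGET_URL, timeout=10) as response:
            succeeded = response.status == 200
    except Exception:
        succeeded = False
    return time.perf_counter() - start, succeeded

def run_load_test(concurrent_users):
    """Fire `concurrent_users` simultaneous transactions and summarize the run."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(one_transaction, range(concurrent_users)))
    latencies = sorted(latency for latency, _ in results)
    failures = sum(1 for _, ok in results if not ok)
    p95 = latencies[int(len(latencies) * 0.95) - 1]  # rough 95th percentile
    print(f"{concurrent_users:>6} users | failures: {failures:>5} | p95 latency: {p95:.3f}s")

if __name__ == "__main__":
    # Step up the load as described above: 100, then 500, then 1,000 users.
    for load in (100, 500, 1000):
        run_load_test(load)
```

Watching where the failure count or the p95 latency starts to climb as the load steps up is what reveals the system's breaking point.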
Hence, performance engineering ensures the crash-free operation of a system, software, or application. Using systematic processes, techniques, practices, and activities, a performance engineer ensures that the performance requirements are met during the development cycle.

However, this is not a blanket role. It varies with your field of operation. The work of a performance engineer working on a web application will be quite different from that of a database performance engineer or a streaming performance engineer. For each of these, your "load" will vary, but your goal is the same: ensuring that your system is resilient enough to shoulder that load.

Before I dive deeper into the role of a performance engineer, I'd like to clarify the difference between a performance tester and a performance engineer. (Yes, they are not the same!)

Performance Tester versus Performance Engineer

Many people think that 2-3 years of experience as a performance tester can easily land you a performance engineering job. Well, no. It is a long journey, which requires much more knowledge than a tester has. A performance tester has testing knowledge and knows about performance analysis and performance monitoring concepts across different applications. They essentially conduct a "load test" to check the performance, stability, and scalability of a system, and produce reports to share with the developers to work on. Their work ends here.

But this is not the case for a performance engineer. A performance engineer looks for the root cause of a performance issue, works towards finding a possible solution for it, and then tunes and optimizes the system until the performance parameters are met. Simply put, performance testing can be considered a part of performance engineering, but not the same thing.

Roles and Responsibilities of a Performance Engineer

Designing Effective Tests

As a performance engineer, your first task is to design an effective test to check the system. I found this checklist on DZone that is really helpful for designing tests:

- Identify your goals, requirements, desires, workload model, and your stakeholders.
- Understand how to test concurrency, arrival rates, and scheduling.
- Understand the roles of scalability, capacity, and reliability as quality attributes and requirements.
- Understand how to set up and create test data, and how to manage that data.

Scripting, Running Tests and Interpreting Results

There are several performance testing tools available in the market, and you will work with different languages based on the tool you use. For instance, you'd build your tests in C and JavaScript while working with Micro Focus LoadRunner, and you'd script in Java and JavaScript for Apache JMeter.

Once your test is ready, you run it on your system. Make sure you use consistent metrics while running these tests, or else your results will be inaccurate. Finally, you interpret the results: you figure out what the bottlenecks are and where they are occurring. For that, you read the results, analyze the graphs that your performance testing tool has produced, and draw conclusions.

Fine Tuning and Performance Optimisation

Once you know what the bottleneck is and where it is occurring, you have to find a solution to overcome it and enhance the performance of the system you are testing. (Something a performance tester won't do!) Your task is to ensure that the system or application is optimized to the level where it works optimally at the maximum load possible. Of course, you can seek aid from a developer (backend, frontend, or full-stack) working on the project to figure this out. But as a performance engineer, you have to be actively involved in this fine-tuning and optimization process.

There are four major skills/attributes that differentiate an exceptional performance engineer from an average one.

Proves that their load results are scalable

If you are a good performance engineer, you will not serve a half-cooked meal. First of all, take all possibilities into account. Take the example of the same online shopping portal. If you are considering a load test for 1,000 simultaneous transactions, consider both the scenario where the transactions happen across different products and the one where they all happen on the same product. If your portal runs a launch sale for an exclusive product that is available for a limited period, you may have too many people trying to buy it at the same time. Ask yourself whether your system could withstand that load.

Proves that their load results are sustainable

Not just this, you should also consider whether your results are sustainable over a defined period of time.
The system should operate without crashing. It is often recommended that a load test runs for 30 minutes. While thirty minutes is enough to detect most new performance changes as they are introduced, to make these tests legitimate it is necessary to prove they can run for at least two hours at the same load. These durations will vary across programs, systems, and applications.

Uses Benchmarks

A benchmark is essentially a point of reference against which you can compare and assess the performance of your system. It is a set standard against which you can check the quality of your product, application, or system. For some systems, like databases, standard benchmarks are readily available for you to test against. As a performance engineer, you must be aware of the performance benchmarks in your field or domain. For example, you'd find benchmarks for testing firewalls, databases, and end-to-end IT systems. The most commonly used benchmarking frameworks are Benchmark Framework 2.0 and TechEmpower.

Understands User Behavior

If you don't understand how users react in different situations, you cannot design an effective load test. A good performance engineer knows their user demographics, understands their key behavior, and knows how the user will interact with the system. While it is impossible to predict user behavior entirely, for instance, a sale may swing from 100,000 transactions per hour to barely 100 per hour, you should check user statistics, analyze user activity, and prepare your system for peak usage.

All in all, besides strong technical skills, as a performance engineer you must always be far-sighted. You must be able to see beyond what meets the eye and catch what others might miss. The role, invariably, requires a lot of technical expertise. But it also requires non-technical skills like problem-solving, attention to detail, and insightfulness.

About the Author

Dr Sandeep Deshmukh is the founder and CEO at Workship. He holds a PhD from IIT Bombay, and has worked in Big Data, the Hadoop ecosystem, distributed systems, and AI/ML for 12+ years. He has been an Engineering Manager at DataTorrent and a Data Scientist with Reliance Industries.

Denys Vuika on building secure and performant Electron apps, and more

Bhagyashree R
02 Dec 2019
7 min read
Building cross-platform desktop applications can be difficult. It requires you to have knowledge of specific tools and technologies for each platform you want to target. Wouldn't it be great if you could write and maintain a single codebase, and do so with your existing web development skills? Electron helps you do exactly that. It is a framework for building cross-platform desktop apps with JavaScript, HTML, and CSS.

Electron was originally not a separate project; it was built to port the Mac-only Atom text editor to different platforms. The Atom team at GitHub tried out solutions like the Chromium Embedded Framework (CEF) and node-webkit (now known as NW.js), but nothing was working right. This is when Cheng Zhao, a GitHub engineer, started a new project and rewrote node-webkit from scratch. This project was Atom Shell, which we now know as Electron. It was open-sourced in 2014 and renamed Electron in May 2015.

To get an insight into why so many companies are adopting Electron, we interviewed Denys Vuika, a veteran programmer and author of the book Electron Projects. He also talked about when you should choose Electron, best practices for building secure Electron apps, and more. Electron Projects is a project-based guide that will help you explore the components of the Electron framework and its integration with other JS libraries to build 12 real-world desktop apps with an increasing level of complexity.

When is Electron the best bet, and when is it not?

Many popular applications are built using Electron, including VSCode, GitHub Desktop, and Slack. It enables developers to deliver new features fast, while also maintaining consistency across platforms. Vuika says, "The cost and speed of the development, code reuse are the main reasons I believe. The companies can effectively reuse existing code to build desktop applications that look and behave exactly the same across the platforms. No need to have separate developer teams for various platforms."

When we asked Vuika why he chose Electron, he said, "Historically, I got into the Electron app development to build applications that run on macOS and Linux, alongside traditional Windows platform. I didn't want to study another stack just to build for macOS, so Electron shell with the web-based content was extremely appealing."

Sharing when you should choose Electron, he said, "Electron is the best bet when you want to have a single codebase and single developer team working with all major platforms. Web developers should have a very minimal learning curve to get started with Electron development. And the desktop application codebase can also be shared with the website counterpart. That saves a huge amount of time and money. Also, the Node.js integration with millions of useful packages to cover all possible scenarios." The case when it is not a good choice is "if you are just trying to wrap the website functionality into a desktop shell. The biggest benefit of Electron applications is access to the local file system and hardware."

Building Electron applications using Angular, React, and Vue

Electron integrates with the three most popular JavaScript frameworks: React, Vue, and Angular. All three have their own pros and cons. If you are coming from a JavaScript background, React could be a good option, as it has much less abstraction away from vanilla JS. Among its other advantages: it is very flexible, you can extend its core functionality by adding libraries, and it is backed by a great community.
Vue is a lightweight framework that's easier to learn and get productive with. Angular has exceptional TypeScript support and includes dependency injection, HTTP services, internationalization, formatting pipes, server-side rendering, a CLI, animations, and much more. When it comes to Electron, choosing one of them depends on which framework you are comfortable with and what fits your needs. Vuika recommends, "There are three pretty much big developer camps out there: React, Angular and Vue. All of them focus on web components and client applications, so it's a matter of personal preferences or historical decisions if speaking about companies. Also, each JavaScript framework has more than one set of mature UI libraries and design systems, so there are always options to choose from."

For novice developers he recommends, "keep in mind it is still a web stack. Pick whatever you are comfortable to build a web application with." Vuika's book, Electron Projects, has a dedicated chapter, Integrating Electron applications with Angular, React, and Vue, to help you learn how to integrate them with your Electron apps.

Tips on building performant and secure apps

Electron's core components are Chromium (more specifically, the libchromiumcontent library), Node.js, and Chromium's V8 JavaScript engine. Each Electron app ships with its own isolated copy of Chromium, which can affect its memory footprint as well as the bundle size. Sharing other reasons, Vuika said, "It has some memory footprint but, based on my personal experience, most of the memory issues are usually related to the application implementation and resource management rather than the Electron shell itself."

Some of the best practices the Electron team recommends are examining modules and their dependencies before adding them to your applications, and ensuring the main process is not blocked, among others. You can find the full checklist on Electron's official site. Vuika suggests, "Electron developers have all the development toolset they use for web development: Chrome Developer Tools with debuggers, profilers, and many other great features. There are also build tools for each frontend framework that allow minification, code splitting, and tree shaking. Routing libraries allow loading only the content the user needs at a particular point. Many areas to improve memory and resource consumption."

More recently, some developers have also started using Rust, and some recommend using WebAssembly with Electron to minimize Electron's pain points while enjoying its benefits.

Coming to security, Vuika says, "With Electron, a web application can have nearly full access to the local file system and operating system resources by means of the Node.js process. Developers should be very careful trusting web content, especially if using remotely served HTML content."

"Electron team has recently published a very good article on the security that I strongly recommend to read and keep in the bookmarks. The article dwells on explaining major security pitfalls, as well as ways to harden your applications," he recommends.

Meanwhile, Electron is also improving with every subsequent release. Starting with Electron 6.0, the team has started laying "the groundwork for a future requirement that native Node modules loaded in the renderer process be either N-API or Context Aware." This update is expected to arrive in Electron 11.0.

"Also, keep in mind that Electron keeps improving and evolving all the time.
It is getting more secure and faster with each next release. For developers, it is more important to build the knowledge of creating and debugging applications, as for me," he adds.

About the author

Denys Vuika is an Applications Platform Developer and Tech Lead at Alfresco Software, Inc. He is a full-stack developer and a constant open source contributor with more than 16 years of programming experience, including ten years of front-end development with AngularJS, Angular, ASP.NET, React.js, and other modern web technologies, and more than three years of Node.js development. Denys works with web technologies on a daily basis, has a good understanding of cloud development, and of the containerization of web applications. He is a frequent Medium blogger and the author of the "Developing with Angular" book on Angular, JavaScript, and TypeScript development. He also maintains a series of Angular-based open source projects.

Check out Vuika's latest book, Electron Projects, on PacktPub. This book is a project-based guide to help you create, package, and deploy desktop applications on multiple platforms using modern JavaScript frameworks. Follow Denys Vuika on Twitter: @DenysVuika.

Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more
The Electron team publicly shares the release timeline for Electron 5.0
How to create a desktop application with Electron [Tutorial]

"Microservices require a high-level vision to shape the direction of the system in the long term," says Jaime Buelta

Bhagyashree R
25 Nov 2019
9 min read
Looking back four or five years, the sentiment around microservices architecture has changed quite a bit. First came the hype phase: after seeing the success stories of companies like Netflix, Amazon, and Gilt.com, developers thought that microservices were the de facto standard for application development. Cut to now, and we have realized that microservices is yet another architectural style which, when applied to the right problem in the right way, works amazingly well, but comes with its own pros and cons.

To understand what exactly microservices are, when we should use them, and when not to, we sat down with Jaime Buelta, the author of Hands-On Docker for Microservices with Python. Along with explaining microservices and their benefits, Buelta shared some best practices developers should keep in mind if they decide to migrate their monoliths to microservices.

Further Learning: Before jumping to microservices, Buelta recommends building solid foundations in general software architecture and web services. "They'll be very useful when dealing with microservices and afterward," he says. Buelta's book, Hands-On Docker for Microservices with Python, aims to guide you in your journey of building microservices. In this book, you'll learn how to structure big systems, encapsulate them using Docker, and deploy them using Kubernetes.

Microservices: The benefits and risks

A traditional monolith application encloses all its capabilities in a single unit. In the microservices architecture, by contrast, the application is divided into smaller standalone services that are independently deployable, upgradeable, and replaceable. Each microservice is built for a single business purpose and communicates with other microservices through lightweight mechanisms. Buelta explains, "Microservice architecture is a way of structuring a system, where several independent services communicate with each other in a well-defined way (typically through web RESTful services). The key element is that each microservice can be updated and deployed independently."

Microservices architecture dictates not only how you build your application but also how your team is organized. "Though [it] is normally described in terms of the involved technologies, it's also an organizational structure. Each independent team can take full ownership of a microservice. This allows organizations to grow without developers clashing with each other," he adds.

One of the key benefits of microservices is that they enable innovation without much impact on the system as a whole. With microservices, you can scale horizontally, keep strong module boundaries, use diverse technologies, and develop in parallel. Coming to the risks associated with microservices, Buelta said, "The main risk in its adoption, especially when coming from a monolith, is to make a design where the services are not truly independent. This generates an overhead and complexity increase in inter-service communication." He adds, "Microservices require a high-level vision to shape the direction of the system in the long term. My recommendation to organizations moving towards this kind of structure is to put someone in charge of the 'big picture'. You don't want to lose sight of the forest for the trees."
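To ground the definition above, here is a minimal sketch of one such standalone service in Python, using only the standard library. The "orders" domain, the endpoint paths, and the in-process dictionary standing in for the service's own datastore are all invented for illustration; a production microservice would typically use a web framework and run behind a proper server.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the service's own private datastore: in a real system each
# microservice owns its data, and no other service touches it directly.
ORDERS = {"1": {"id": "1", "status": "shipped"}}

class OrdersHandler(BaseHTTPRequestHandler):
    """A tiny 'orders' service exposing one business capability over REST."""

    def do_GET(self):
        if self.path == "/health":
            # Health endpoint, e.g. for a Kubernetes liveness/readiness probe.
            self._reply(200, {"status": "ok"})
        elif self.path.startswith("/orders/"):
            order = ORDERS.get(self.path.rsplit("/", 1)[-1])
            if order:
                self._reply(200, order)
            else:
                self._reply(404, {"error": "order not found"})
        else:
            self._reply(404, {"error": "unknown path"})

    def _reply(self, code, payload):
        body = json.dumps(payload).encode("utf-8")
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The service is deployed on its own; other services (or an API gateway)
    # talk to it only through this HTTP interface.
    HTTPServer(("", 8000), OrdersHandler).serve_forever()
```

Because the service exposes only this HTTP contract, it can be rewritten, redeployed, or scaled without touching any other part of the system, which is exactly the independence Buelta describes.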
Migrating from monoliths to microservices

Martin Fowler, a renowned author and software consultant, advises going for a "monolith-first" approach. This is because using a microservices architecture from the get-go can be risky; it is mostly suited to large systems and large teams. Buelta shared his perspective: "The main metric to start thinking into getting into this kind of migration is raw team size. For small teams, it is not worth it, as developers understand everything that is going on and can ask the person sitting right across the room for any question. A monolith works great in these situations, and that's why virtually every system starts like this." This echoes Amazon's "two-pizza team" rule, which says that if the team responsible for one microservice couldn't be fed with two pizzas, it is too big.

"As business and teams grow, they need better coordination. Developers start stepping on each other's toes often. Knowing the intent of a particular piece of code is trickier. Migrating then makes sense to give some separation of function and clarity. Each team can set its own objectives and work mostly on their own, presenting a clear external interface. But for this to make sense, there should be a critical mass of developers," he adds.

Best practices to follow when migrating to microservices

When asked about the best practices developers should follow when migrating to microservices, Buelta said, "The key to a successful microservice architecture is that each service is as independent as possible." A question that arises here is: how can you make the services independent? "The best way to discover the interdependence of a system is to think in terms of new features: If there's a new feature, can it be implemented by changing a single service? What kind of features are the ones that will require coordination of several microservices? Are they common requests, or are they rare? No design will be perfect, but at least it will help make informed decisions," explains Buelta.

Buelta advises doing it right instead of doing it twice. "Once the migration is done, making changes on the boundaries of the microservices is difficult. It's worth investing time in the initial phase of the project," he adds.

Migrating from one architectural pattern to another is a big change. We asked what challenges he and his team faced during the process, to which he said, "The most difficult challenge is actually people. They tend to be underestimated, but moving into microservices is actually changing the way people work. Not an easy task!"

He adds, "I've faced some of these problems, like having to give enough training and support for developers. Especially, explaining the rationale behind some of the changes. This helps developers understand the whys of the change they find so frustrating. For example, a common complaint moving from a monolith is to have to coordinate deployments that used to be a single monolith release. This needs more thought to ensure backward compatibility and minimize risk. This sometimes is not immediately obvious, and needs to be explained."

On choosing Docker, Kubernetes, and Python as his technology stack

We asked Buelta which technologies he prefers for implementing microservices. For language, his answer was simple: "Python is a very natural choice for me. It's my favorite programming language!" He adds, "It's very well suited for the task. Not only is it readable and easy to use, but it also has ample support for web development.
On top of that, it has a vibrant ecosystem of third-party modules for any conceivable demand. These demands include connecting to other systems like databases, external APIs, etc."

Docker is often touted as one of the most important tools when it comes to microservices. Buelta explained why: "Docker allows you to encapsulate and replicate the application in a convenient standard package. This reduces uncertainty and environment complexity. It greatly simplifies the move from development to production for applications. It also helps in reducing hardware utilization. You can fit multiple containers with different environments, even different operating systems, in the same physical box or virtual machine."

For Kubernetes, he said, "Finally, Kubernetes allows us to deploy multiple Docker containers working in a coordinated fashion. It forces you to think in a clustered way, keeping the production environment in mind. It also allows us to define the cluster using code, so new deployments or configuration changes are defined in files. All this enables techniques like GitOps, which I described in the book, storing the full configuration in source control. This makes any change in a specific and reversible way, as they are regular git merges. It also makes recovering or duplicating infrastructure from scratch easy."

"There is a bit of a learning curve involved in Docker and Kubernetes, but it's totally worth it. Both are very powerful tools. And they encourage you to work in a way that's suited for avoiding downfalls in production," he shared.

On multilingual microservices

Microservices allow you to use diverse technologies, as each microservice is ideally handled by an independent team. Buelta shared his opinion on multilingual microservices: "Multilingual microservices are great! That's one of its greatest advantages. A typical example of this is to migrate legacy code written in some language to another. A microservice can replace another that exposes the same external interface. All while being completely different internally. I've done migrations from old PHP apps to replace them with Python apps, for example." He adds, "As an organization, working with two or more frameworks at the same time can help understand better both of them, and when to use one or the other."

Though using multilingual microservices is a great advantage, it can also increase operational overhead. Buelta advises, "A balance needs to be struck, though. It doesn't make sense to use a different tool each time and not be able to share knowledge across teams. The specific numbers may depend on company size, but in general, more than two or three should require a good explanation of why there's a new tool that needs to be introduced in the stack. Keeping tools at a reasonable level also helps to share knowledge and how to use them most effectively."

About the author

Jaime Buelta has been a professional programmer and a full-time Python developer who has been exposed to a lot of different technologies over his career. He has developed software for a variety of fields and industries, including aerospace, networking and communications, industrial SCADA systems, video game online services, and financial services. As part of these companies, he worked closely with various functional areas, such as marketing, management, sales, and game design, helping the companies achieve their goals.
He is a strong proponent of automating everything and making computers do most of the heavy lifting so users can focus on the important stuff. He is currently living in Dublin, Ireland, and has been a regular speaker at PyCon Ireland.

Check out Buelta's book, Hands-On Docker for Microservices with Python, on PacktPub. In this book, you will learn how to build production-grade microservices as well as orchestrate a complex system of services using containers. Follow Jaime Buelta on Twitter: @jaimebuelta.

Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]
Yuri Shkuro on Observability challenges in microservices and cloud-native applications

Exploring .NET Core 3.0 components with Mark J. Price, a Microsoft specialist

Packt Editorial Staff
15 Nov 2019
8 min read
Over the last few years, there has been a continuous effort to bring .NET to platforms other than Windows. .NET Core 3.0, released in September 2019, has a primary focus on adding Windows-specific features. .NET Core 3.0 supports side-by-side and app-local deployments, a fast JSON reader, serial port access and other pin access for Internet of Things (IoT) solutions, and tiered compilation on by default.

In this article we will explore the .NET Core components of the new 3.0 release. This article is an excerpt from the book C# 8.0 and .NET Core 3.0 - Modern Cross-Platform Development - Fourth Edition, written by Mark J. Price. Mark follows a step-by-step approach in the book, filled with exciting projects and fascinating theory, for the readers of this highly acclaimed franchise.

Pieces of .NET Core components

These pieces play an important role in the development of .NET Core:

- Language compilers: These turn your source code written in languages such as C#, F#, and Visual Basic into intermediate language (IL) code stored in assemblies. With C# 6.0 and later, Microsoft switched to an open source rewritten compiler known as Roslyn that is also used by Visual Basic.
- Common Language Runtime (CoreCLR): This runtime loads assemblies, compiles the IL code stored in them into native code instructions for your computer's CPU, and executes the code within an environment that manages resources such as threads and memory.
- Base Class Libraries (BCL) of assemblies in NuGet packages (CoreFX): These are prebuilt assemblies of types packaged and distributed using NuGet for performing common tasks when building applications. You can use them to quickly build anything you want, rather like combining LEGO™ pieces.

.NET Core 2.0 implemented .NET Standard 2.0, which is a superset of all previous versions of .NET Standard, and lifted .NET Core up to parity with .NET Framework and Xamarin. .NET Core 3.0 implements .NET Standard 2.1, which adds new capabilities and enables performance improvements beyond those available in .NET Framework.

Understanding assemblies, packages, and namespaces

An assembly is where a type is stored in the filesystem. Assemblies are a mechanism for deploying code. For example, the System.Data.dll assembly contains types for managing data. To use types in other assemblies, they must be referenced. Assemblies are often distributed as NuGet packages, which can contain multiple assemblies and other resources. You will also hear about metapackages and platforms, which are combinations of NuGet packages.

A namespace is the address of a type. Namespaces are a mechanism to uniquely identify a type by requiring a full address rather than just a short name. In the real world, Bob of 34 Sycamore Street is different from Bob of 12 Willow Drive. In .NET, the IActionFilter interface of the System.Web.Mvc namespace is different from the IActionFilter interface of the System.Web.Http.Filters namespace.

Understanding dependent assemblies

If an assembly is compiled as a class library and provides types for other assemblies to use, then it has the file extension .dll (dynamic link library), and it cannot be executed standalone. Likewise, if an assembly is compiled as an application, then it has the file extension .exe (executable) and can be executed standalone. Before .NET Core 3.0, console apps were compiled to .dll files and had to be executed by the dotnet run command or a host executable.
Any assembly can reference one or more class library assemblies as dependencies, but you cannot have circular references. So, assembly B cannot reference assembly A if assembly A already references assembly B. The compiler will warn you if you attempt to add a dependency reference that would cause a circular reference.

Understanding the Microsoft .NET Core App platform

By default, console applications have a dependency reference on the Microsoft .NET Core App platform. This platform contains thousands of types in NuGet packages that almost all applications would need, such as the int and string types. When using .NET Core, you reference the dependency assemblies, NuGet packages, and platforms that your application needs in a project file.

Let's explore the relationship between assemblies and namespaces. In Visual Studio Code, create a folder named test01 with a subfolder named AssembliesAndNamespaces, and enter dotnet new console to create a console application. Save the current workspace as test01 in the test01 folder and add the AssembliesAndNamespaces folder to the workspace. Open AssembliesAndNamespaces.csproj, and note that it is a typical project file for a .NET Core application. Check out this code on GitHub.

Although it is possible to include the assemblies that your application uses with its deployment package, by default the project will probe for shared assemblies installed in well-known paths. First, it will look for the specified version of .NET Core in the current user's .dotnet/store and .nuget folders, and then it looks in a fallback folder that depends on your OS, as shown in the following root paths:

- Windows: C:\Program Files\dotnet\sdk
- macOS: /usr/local/share/dotnet/sdk

Most common .NET Core types are in the System.Runtime.dll assembly. You can see the relationship between some assemblies and the namespaces that they supply types for, and note that there is not always a one-to-one mapping between assemblies and namespaces, as shown in the following table:

Assembly                 | Example namespaces                                      | Example types
System.Runtime.dll       | System, System.Collections, System.Collections.Generic | Int32, String, IEnumerable<T>
System.Console.dll       | System                                                  | Console
System.Threading.dll     | System.Threading                                        | Interlocked, Monitor, Mutex
System.Xml.XDocument.dll | System.Xml.Linq                                         | XDocument, XElement, XNode

Understanding NuGet packages

.NET Core is split into a set of packages, distributed using a Microsoft-supported package management technology named NuGet. Each of these packages represents a single assembly of the same name. For example, the System.Collections package contains the System.Collections.dll assembly. The following are the benefits of packages:

- Packages can ship on their own schedule.
- Packages can be tested independently of other packages.
- Packages can support different OSes and CPUs by including multiple versions of the same assembly built for different OSes and CPUs.
- Packages can have dependencies specific to only one library.
- Apps are smaller because unreferenced packages aren't part of the distribution.

The following table lists some of the more important packages and their important types:

Package              | Important types
System.Runtime       | Object, String, Int32, Array
System.Collections   | List<T>, Dictionary<TKey, TValue>
System.Net.Http      | HttpClient, HttpResponseMessage
System.IO.FileSystem | File, Directory
System.Reflection    | Assembly, TypeInfo, MethodInfo

Understanding frameworks

There is a two-way relationship between frameworks and packages.
Packages define the APIs, while frameworks group packages. A framework without any packages would not define any APIs. .NET packages each support a set of frameworks. For example, the System.IO.FileSystem package version 4.3.0 supports the following frameworks:

- .NET Standard, version 1.3 or later
- .NET Framework, version 4.6 or later
- Six Mono and Xamarin platforms (for example, Xamarin.iOS 1.0)

Understanding dotnet commands

When you install the .NET Core SDK, it includes the command-line interface (CLI) named dotnet.

Creating new projects

The dotnet command-line interface has commands that work on the current folder to create a new project using templates. In Visual Studio Code, navigate to Terminal and enter the dotnet new -l command to list your currently installed templates.

Managing projects

The dotnet CLI has the following commands that work on the project in the current folder, to manage the project:

- dotnet restore: This downloads dependencies for the project.
- dotnet build: This compiles the project.
- dotnet test: This runs unit tests on the project.
- dotnet run: This runs the project.
- dotnet pack: This creates a NuGet package for the project.
- dotnet publish: This compiles and then publishes the project, either with dependencies or as a self-contained application.
- add: This adds a reference to a package or class library to the project.
- remove: This removes a reference to a package or class library from the project.
- list: This lists the package or class library references for the project.

To summarize, we explored the .NET Core components of the new 3.0 release. If you want to learn the fundamentals, build practical applications, and explore the latest features of C# 8.0 and .NET Core 3.0, check out our latest book, C# 8.0 and .NET Core 3.0 - Modern Cross-Platform Development - Fourth Edition, written by Mark J. Price.

.NET Framework API Porting Project concludes with .NET Core 3.0
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
.NET Core 3.0 Preview 6 is available, packed with updates to compiling assemblies, optimizing applications, ASP.NET Core and Blazor
Inspecting APIs in ASP.NET Core [Tutorial]
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]

What makes Salesforce Lightning Platform a powerful, fast and intuitive user interface

Fatema Patrawala
05 Nov 2019
6 min read
Salesforce has always been proactive in developing and bringing to market new features and functionality in all of its products. Throughout the lifetime of the Salesforce CRM product, there have been several upgrades to the user interface. In 2015, Salesforce began promoting its new platform, Salesforce Lightning. Although long-time users and Salesforce developers may have grown accustomed to the classic user interface, Salesforce Lightning may just convert them. It brings a modern UI with new features, increased productivity, faster deployments, and a seamless transition across desktop and mobile environments. Recently, Salesforce has been actively encouraging its developers, admins, and users to migrate from the classic Salesforce user interface to the new Lightning Experience.

Andrew Fawcett, currently VP Product Management and a Salesforce Certified Platform Developer II at Salesforce, writes in his book, Salesforce Lightning Platform Enterprise Architecture, "One of the great things about developing applications on the Salesforce Lightning Platform is the support you get from the platform beyond the core engineering phase of the production process." This book is a comprehensive guide filled with best practices and tailor-made examples developed on the Salesforce Lightning platform. It is a must-read for all Lightning Platform architects!

Why you should consider migrating to Salesforce Lightning

Earlier this year, Forrester Consulting published a study quantifying the total economic impact and benefits of Salesforce Lightning for Service Cloud. In the study, Forrester found that a composite service organization deploying Lightning Experience obtained a return on investment (ROI) of 475% over three years. Among the other potential benefits, Forrester found that over three years, organizations using the Lightning platform:

- Saved more than $2.5 million by reducing support handling time;
- Saved $1.1 million by avoiding documentation time; and
- Increased customer satisfaction by 8.5%

Apart from this, the Salesforce Lightning platform allows organizations to leverage the latest cloud-based features. It includes responsive and visually attractive user interfaces that are not available within the Classic themes. Salesforce Lightning provides significant business process improvements and new technological advances over Classic for Salesforce developers.

What does the Salesforce Lightning architecture look like?

While using the Salesforce Lightning platform, developers and users interact with a user interface backed by a robust application layer. This layer runs on the Lightning Component Framework, which comprises services like navigation, Lightning Data Service, and the Lightning Design System.

Figure: The Lightning platform architecture (Source: Salesforce website)

As part of this application layer, Base Components and Standard Components are the building blocks that enable Salesforce developers to configure their user interfaces via the App Builder and Community Builder. Standard Components are typically built up from one or more Base Components, which are also known as Lightning Components. Developers can build Lightning Components using two programming models: the Lightning Web Components model and the Aura Components model. The Lightning platform is critical for a range of services and experiences in Salesforce:

Navigation Service: The navigation service is supported for Lightning Experience and the Salesforce app.
Built with extensive routing, deep linking, and login redirection, Salesforce's navigation service powers app navigation, state changes, and refreshes.

Lightning Data Service: Lightning Data Service is built on top of the User Interface API. It enables developers to load, create, edit, or delete a record in a component without requiring Apex code. Lightning Data Service improves performance and data consistency across components.

Lightning Design System: With the Lightning Design System, developers can easily build user interfaces, including the component blueprints, markup, CSS, icons, and fonts.

Base Lightning Components: Base Lightning Components are the building blocks for all UI across the platform. Components range from a simple button to a highly functional data table and can be written as an Aura component or a Lightning web component.

Standard Components: Lightning pages are made up of Standard Components, which in turn are composed of Base Lightning Components. Salesforce developers or admins can drag and drop Standard Components in tools like Lightning App Builder and Community Builder.

Lightning App Builder: Lightning App Builder lets developers build and customize interfaces for Lightning Experience, the Salesforce app, Outlook Integration, and Gmail Integration.

Community Builder: For Communities, developers can use the Community Builder to build and customize communities easily.

Apart from the above, there are other services available within the Salesforce Lightning platform, like the Lightning security measures and record detail pages on the platform and Salesforce app.

How to plan the transition from Classic to Lightning Experience

As Salesforce admins and developers prepare for the transition to Lightning Experience, they will need to evaluate three things: how the change benefits the company, what work is needed to prepare for the change, and how much it will cost. This is the stage to make the case for moving to Lightning Experience by calculating the company's return on investment and defining what a Lightning Experience implementation will look like.

First, they will need to analyze how prepared the organization is for the transition to Lightning Experience. Salesforce admins and developers can use the Lightning Experience Readiness Check, a tool that produces a personalized Readiness Report, shows which users will benefit right away, and suggests how to adjust the implementation for Lightning Experience.

Further, Salesforce developers and admins can make the case to their leadership team by showing how migrating to Lightning Experience can realize business goals and improve the company's bottom line.

Finally, by using the results of the activities carried out to assess the impact of the migration, understand the level of change required and decide on a suitable approach. If the changes required are relatively small, consider migrating all users and all areas of functionality at the same time. However, if the Salesforce environment is more complex and the amount of change is far greater, consider implementing the migration in phases or as an initial pilot to start with.

Overall, the Salesforce Lightning Platform is being increasingly adopted by admins, business analysts, consultants, architects, and especially Salesforce developers. If you want to deliver packaged applications using Salesforce Lightning that cater to enterprise business needs, read the book Salesforce Lightning Platform Enterprise Architecture, written by Andrew Fawcett.
This book will take you through the architecture of building an application on the Lightning platform and help you understand its features and best practices. It will also help you ensure that the app keeps up with increasing customer and business requirements.

What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help
Salesforce open sources 'Lightning Web Components framework'
"Facebook is the new Cigarettes," says Marc Benioff, Salesforce Co-CEO
Build a custom Admin Home page in Salesforce CRM Lightning Experience
How to create and prepare your first dataset in Salesforce Einstein

Founder & CEO of Odoo, Fabien Pinckaers discusses the new Odoo 13 framework

Vincy Davis
04 Nov 2019
6 min read
Odoo, formerly known as OpenERP (Enterprise Resource Planning), is a popular open source business application development software. It comes with many features, like a powerful GUI, performance optimization, and integrated in-app purchase features. Companies use it to manage and organize workloads like materials and warehouse management, human resources, finance, accounting, sales, and many other enterprise functions. With a fast-growing community, Odoo is being used by companies of all sizes.

At the Odoo Experience 2019 event conducted earlier this month, the Odoo team announced the release of Odoo 13, the latest version of its all-in-one business software. This release contains an abundance of major and minor improvements, including new features like a sales coupons & promotions module, MRP subcontracting, a website form builder, a skills management module, and more. At the event, founder & CEO of Odoo, Fabien Pinckaers, explained the many concepts behind the new Odoo framework, which he says is one of the best improvements in Odoo 13.

New to Odoo? If you are a beginner in Odoo, read our book Working with Odoo 12 - Fourth Edition, written by Greg Moss, to learn how to start a new company database in Odoo and to understand the basics of Odoo sales management. You can also master customer relationship management in Odoo for setting up a modern business environment. This book will also take you through the OpenChatter feature with notes and messages associated with Odoo documents, and teach you how to use Odoo's API to integrate with other applications.

The Odoo 13 framework is also called an in-memory ORM because it performs its data transactions in memory rather than writing every change to the database immediately. On average, it runs operational measures 4.5 times faster than earlier versions of Odoo.

Key features of the Odoo 13 framework

Simplified cache process

Pinckaers says that the new framework simplifies the cache, as stored fields now need only a single value, while a non-stored field's computed value depends on the keywords present in the context (e.g., translatable and context). He added that in version 12, most fields did not need a cache, so it contained only one global cache, with an exception for fields that were text-dependent. It also had a new attribute for multi-line inventories where the projects depend on "way roads". The difficulty in that version was that when creating a field, users had to select the cache value, and if the context of the field changed, they had to specify the new value of the cache again. This step is simpler in version 13, as the user now needs to specify the value of the cache only once. "It seems simple but actually in the business code we're passing it to all the fields at the same time," asserts Pinckaers. This simplified cache process also reduces memory access overhead in the code.

In-memory updates

When specifying the various test field values in earlier versions, users had to update the validation value each time, making it a time-consuming process. To overcome this problem, the Odoo team has moved all data transactions into memory in the new version. Consequently, in Odoo 13, when assigning a field value, the user can put it in the cache each time. Hence, when a field value needs to be read, it is taken from the cache itself.
To manage all the dependencies in Python, Pinckaers demonstrated how users should always:

- use the inverse field instead of an SQL query;
- avoid using SELECT, as the implementation of the compute will read the same object; and
- when calling create(), set one2many fields to [].

Delaying the computed field for faster transactions

To delay a computed field such as line.product_quantity or line.discount in earlier Odoo versions, a user had to compute the dependency value inside every "for line in order" loop. Once the transaction was completed, the values were then recomputed and written. This process is also made easier in Odoo 13, as the user can now mark all the line records for recomputation and use the self.flush() command to compute the values after the transaction is completed. This makes all the computation transactions happen in memory. According to Pinckaers, this support will help users with more than 100 customers, as it will make the process much faster and simpler.
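As a rough illustration of the pattern being described, here is a sketch of what a computed field plus an explicit flush might look like in Odoo 13 model code. The model, its fields, and the bulk-update method are invented for illustration, and the exact recomputation behavior depends on the ORM version; this is not code from Pinckaers' talk.

```python
from odoo import api, fields, models

class ExampleOrderLine(models.Model):
    _name = "example.order.line"  # hypothetical model, for illustration only
    _description = "Example Order Line"

    product_quantity = fields.Integer()
    unit_price = fields.Float()
    subtotal = fields.Float(compute="_compute_subtotal", store=True)

    @api.depends("product_quantity", "unit_price")
    def _compute_subtotal(self):
        for line in self:
            line.subtotal = line.product_quantity * line.unit_price

    def apply_bulk_discount(self, factor):
        # In Odoo 13, these assignments land in the in-memory cache; dependent
        # computed fields are only *marked* for recomputation, not recomputed
        # line by line inside the loop.
        for line in self:
            line.unit_price = line.unit_price * factor
        # Explicitly push the pending recomputations and updates to the
        # database once, after the whole batch has been processed in memory.
        self.flush()
```

The point of the pattern is that the expensive work happens once, at the flush, instead of once per line.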
Optimizing the dependency tree to reduce Python and SQL computations

Pinckaers used the "change order" example to demonstrate how version 13 of Odoo has a cleaner dependency tree. If the pricelist of the order is changed, the total cost of the order also changes indirectly, thus exercising the dependency tree. He explains that this indirect change happens due to the indirect dependency between the pricelist ID and the total cost field in Odoo 13. In the earlier versions, due to the recursive nature of the dependencies, each order line entailed the order ID of the field. This sometimes required the user to read more than 100 lines of the list just to get the order ID. In Odoo 13, this prolonged process is replaced by a more optimized dependency tree: the user can now get the order ID directly from the dependency tree, without the extra Python and SQL computations.

Improvements in browse optimization()

The major improvement in Odoo 13 browse optimization is the mechanism for avoiding multiple format-cache conversions. In the previous versions, users had to read all the SQL query results, convert them to the cache format, and then put them in the cache, meaning three steps just to read the data and making the process very tedious. With the latest version, the prefetch command directly saves all data of the same format in memory. "But if the format is different, then we have to apply a conversion method to everything. As Python is extremely slow," Pinckaers says, "applying a dictionary that we see from outside the cache" makes the process faster, because a C implementation can be used to directly convert the data into the cache format.

You can watch the full video to see Pinckaers' demonstration of code cleanup and Python optimization. If you want to use Odoo to build enterprise applications and set up the functional requirements for your business, read our book Working with Odoo 12 - Fourth Edition, written by Greg Moss, to learn how to use the MRP module to create, process, and schedule manufacturing and production orders. This book will also give you in-depth knowledge of the business intelligence required in Odoo and its architecture, and will unveil how to customize Odoo to meet the specific needs of your business.

Creating views in Odoo 12 – List, Form, Search [Tutorial]
How to set up Odoo as a system service [Tutorial]
Handle Odoo application data with ORM API [Tutorial]
Implement an effective CRM system in Odoo 11 [Tutorial]
"Everybody can benefit from adopting Odoo, whether you're a small start-up or a giant tech company" – An interview with Odoo community hero, Yenthe Van Ginneken
What is the history behind C Programming and Unix?

Packt Editorial Staff
17 Oct 2019
9 min read
If you think C programming and Unix are unrelated, you are making a big mistake. Back in the 1970s and 1980s, if the Unix engineers at Bell Labs had decided to use another programming language instead of C to develop a new version of Unix, we would be talking about that language today. The relationship between the two is simple: Unix was the first operating system implemented with the high-level C programming language, and C got its fame and power from Unix. Of course, our statement about C being a high-level programming language is no longer true in today's world.

This article is an excerpt from the book Extreme C by Kamran Amini. Kamran teaches you to use C's power and apply object-oriented design principles to your procedural C code. You will gain new insight into algorithm design, functions, and structures. You'll also understand how C works with Unix, how to implement OO principles in C, and what multiprocessing is. In this article, we are going to look at the history of C programming and Unix.

Multics OS and Unix

Even before Unix, we had the Multics OS, a cooperative project launched in 1964 and led by MIT, General Electric, and Bell Labs. Multics OS was a huge success because it introduced the world to a real working and secure operating system. Multics was installed everywhere from universities to government sites. Fast-forward to 2019, and every operating system today borrows some ideas from Multics indirectly through Unix.

In 1969, because of various reasons that we will talk about shortly, some people at Bell Labs, especially the pioneers of Unix such as Ken Thompson and Dennis Ritchie, gave up on Multics and, subsequently, Bell Labs quit the Multics project. But this was not the end for Bell Labs; they went on to design their own simpler and more efficient operating system, which was called Unix.

It is worthwhile to compare the Multics and Unix operating systems. In the following list, you will see the similarities and differences found when comparing Multics and Unix:

- Both follow the onion architecture as their internal structure. We mean that they both have the same rings in their onion architecture, especially the kernel and shell rings. Therefore, programmers could write their own programs on top of the shell ring. Also, Unix and Multics expose a list of utility programs, such as ls and pwd. In the following sections, we will explain the various rings found in the Unix architecture.
- Multics needed expensive resources and machines to be able to work. It was not possible to install it on ordinary commodity machines, and that was one of the main drawbacks that let Unix thrive and finally made Multics obsolete after about 30 years.
- Multics was complex by design. This was the reason behind the frustration of Bell Labs employees and, as we said earlier, the reason why they left the project. But Unix tried to remain simple. In the first version, it was not even multitasking or multi-user!

You can read more about Unix and Multics online, and follow the events that happened in that era. Both were successful projects, but Unix has been able to thrive and survive to this day. It is worth sharing that Bell Labs has been working on a new distributed operating system called Plan 9, which is based on the Unix project.
Figure 1-1: Plan 9 from Bell Labs

Suffice it to say that Unix was a simplification of the ideas and innovations that Multics presented; it was not something new, so we can leave the Unix and Multics history at this point. So far, there are no traces of C in the story, because it had not been invented yet. The first versions of Unix were written purely in assembly language; only in 1973 was Unix version 4 written using C. Now we are getting close to discussing C itself, but before that, we must talk about BCPL and B, because they were the gateway to C.

About BCPL and B

BCPL was created by Martin Richards as a language for writing compilers. The people at Bell Labs were introduced to it while working on the Multics project. After quitting Multics, Bell Labs first started to write Unix in assembly, because back then it was considered an anti-pattern to develop an operating system in any language other than assembly. The Multics project, however, used PL/I, and by doing so it showed that operating systems could be successfully written in a higher-level language. As a result, Multics became the main inspiration for using another language to develop Unix.

The attempt to write operating system modules in a language other than assembly stayed with Ken Thompson and Dennis Ritchie at Bell Labs. They tried to use BCPL, but it turned out that they needed to modify the language to be able to use it on minicomputers such as the DEC PDP-7. These changes led to the B programming language. While we won't go too deep into the properties of the B language here, you can read more about it and the way it was developed at the following links:

- The B Programming Language
- The Development of the C Language

Dennis Ritchie authored the latter article himself, and it explains the development of the C programming language while sharing valuable information about B and its characteristics.

B had its own shortcomings as a system programming language. B was typeless, which meant it was only possible to work with a word (not a byte) in each operation; this made it hard to use the language on machines with different word lengths. Therefore, over time, further modifications were made to the language until they led to the NB (New B) language, which in turn evolved into C. C derived its structures from B; they were typeless in B, but became typed in C. Finally, in 1973, the fourth version of Unix could be developed using C, though it still contained a lot of assembly code. In the next section, we talk about the differences between B and C, and why C is a top-notch modern system programming language for writing an operating system.

The way to C programming and Unix

I do not think we can find anyone better than Dennis Ritchie himself to explain why C was invented after the difficulties met with B. In this section, we're going to list the causes that prompted Dennis Ritchie, Ken Thompson, and others to create a new programming language instead of using B for writing Unix.

Limitations of the B programming language:

- B could only work with words in memory: every single operation had to be performed in terms of words. Back then, having a programming language that was able to work with bytes was a dream.
  This was because of the available hardware at the time, which addressed memory in a word-based scheme.
- B was typeless. More accurately, B was a single-type language: all variables were of the same type, a word. So, if you had a string of 20 characters (21 bytes including the null character at the end), you had to divide it up into words and store it in more than one variable. For example, if a word was 4 bytes, you would need 6 variables to store the 21 bytes of the string.
- Being typeless meant that byte-oriented algorithms, such as string manipulation algorithms, could not be written efficiently in B. This was because B used memory words, not bytes, and words could not be used efficiently to manage byte-sized data such as characters.
- B didn't support floating-point operations. At the time, these operations were becoming increasingly available on new hardware, but there was no support for them in the B language.
- With the availability of machines such as the PDP-11, which could address memory on a byte basis, B showed how inefficient it could be in addressing bytes of memory. This became even clearer with B pointers, which could only address words in memory, not bytes. In other words, for a program wanting to access a specific byte or byte range in memory, extra computations had to be done to calculate the corresponding word index.

The difficulties with B, particularly its slow development and execution on the machines available at the time, forced Dennis Ritchie to develop a new language. This new language was called NB, or New B, at first, but it eventually turned into C. This newly developed language covered the difficulties and flaws of B and became the de facto programming language for system development, instead of assembly. In less than 10 years, newer versions of Unix were completely written in C, and all newer operating systems based on Unix got tied to C and its crucial presence in the system.

As you can see, C was not born as an ordinary programming language; it was designed with a complete set of requirements in mind. You may consider languages such as Java, Python, and Ruby to be higher-level languages, but they cannot be considered direct competitors, as they are different and serve different purposes. For instance, you cannot write a device driver or a kernel module with Java or Python; they themselves have been built on top of a layer written in C. Unlike some programming languages, C is standardized by ISO, and if a certain feature is required in the future, the standard can be modified to support it.

To summarize

In this article, we began with the relationship between Unix and C. Even in non-Unix operating systems, you see traces of a design similar to Unix systems. We also looked at the history of C and explained how Unix appeared from the Multics OS and how C was derived from the B programming language.

The book Extreme C, written by Kamran Amini, will help you make the most of C's low-level control, flexibility, and high performance.

- Is Dark an AWS Lambda challenger?
- Microsoft mulls replacing C and C++ code with Rust calling it a "modern safer system programming language" with great memory safety features
- Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says "yes and no"
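To make these limitations concrete, here is a short C sketch of our own (not from the book) showing what B could not express directly: typed, byte-addressable data and floating-point values.

```c
#include <stdio.h>

int main(void) {
    /* C is typed: characters, bytes, and floating-point values are
       distinct types. In B, all of these had to be packed into words. */
    char name[21] = "twenty characters!!!"; /* 20 chars + null terminator */
    unsigned char byte = 0xFF;              /* direct byte-level data */
    double ratio = 21.0 / 4.0;              /* floating point, absent in B */

    /* C pointers address individual bytes, not just whole words. */
    char *p = &name[3];
    printf("%c %u %.2f\n", *p, (unsigned)byte, ratio);
    return 0;
}
```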

How do you become a developer advocate?

Packt Editorial Staff
11 Oct 2019
8 min read
Developer advocates are people with a strong technical background whose job is to help developers be successful with a platform or technology. They act as a bridge between the engineering team and the developer community. A developer advocate does not only fill the gap between developers and the platform but also looks after the growth of developers in terms of traction and progress on their projects.

Developer advocacy is broadly referred to as "developer relations". Those who practice developer advocacy have fallen into this profession in one way or another. As the processes and theories in the world of programming have evolved over the years, so has the idea of developer advocacy. This is the result of developer advocates who work in the wild using their own initiative.

This article is an excerpt from the book Developer, Advocate! by Geertjan Wielenga. This book serves as a rallying cry to inspire and motivate tech enthusiasts and burgeoning developer advocates to take their first steps within the tech community.

The question then arises: how does one become a developer advocate? Here are some experiences shared by well-known developer advocates on how they started the journey that landed them in this role.

Is developer advocacy taught in universities?

Bruno Borges, Principal Product Manager at Microsoft, says that for most developer advocates or developer relations personnel, it was something that just happened. Developer advocacy is not a discipline that is taught in universities; there's no training specifically for this. Most often, somebody will come to realize that what they already do is developer relations. This is a discipline that is a conjunction of several other roles: software engineering, product management, and marketing.

I started as a software engineer and then I became a product manager. As a product manager, I was engaged with marketing divisions and sales divisions directly on a weekly basis. Maybe in some companies, sales, marketing, and product management are pillars that are not needed. I think it might vary. But in my opinion, those pillars are essential for doing a proper developer relations job. Trying to aim for those pillars is a great foundation. Just as in computer science when we go to college for four years, sometimes we don't use some of that background, but it gives us a good foundation.

From outsourcing companies that just built business software for companies, I then went to vendor companies. That's where I landed as a person helping users to take full advantage of the software that they needed to build their own solutions. That process is, ideally, what I see happening to others.

The journey of a regular tech enthusiast to a developer advocate

Ivar Grimstad, a developer advocate at the Eclipse Foundation, speaks about his journey from being a regular tech enthusiast attending conferences to speaking at conferences as an advocate for his company.

Ivar Grimstad says: I have attended many different conferences in my professional life and I always really enjoyed going to them. After some years of regularly attending conferences, I came to the point of thinking, "That guy isn't saying anything that I couldn't say. Why am I not up there?" I just wanted to try speaking, so I started submitting abstracts. I already gave talks at meetups locally, but I began feeling comfortable enough to approach conferences. I continued submitting abstracts until I got accepted.
As it turned out, while I was becoming interested in speaking, my company was struggling to raise its profile. Nobody, even in Sweden, knew what we did. So, my company was super happy for any publicity it could get. I could provide it with that by just going out and talking about tech. It didn't have to be related to anything we did; I just had to be there with the company name on the slides. That was good enough in the eyes of my company. After a while, about 50% of my time became dedicated to activities such as speaking at conferences and contributing to open source projects.

The tables turned: from engineer to developer advocate

Mark Heckler, a Spring developer and advocate at Pivotal, narrates how the tables turned for him on the way to becoming Principal Technologist and Developer Advocate at Pivotal. He says, initially, I was doing full-time engineering work and then presenting on the side. I was occasionally taking a few days here and there to travel to present at events and conferences. I think many people realized that I had this public-facing level of activities that I was doing. I was out there enough that they felt I was either doing this full-time or maybe should be. A good friend of mine reached out and said, "I know you're doing this anyway, so how would you like to make this your official role?" That sounded pretty great, so I interviewed, and I was offered a full-time gig doing, essentially, what I was already doing in my spare time.

A hobby turned into a profession

Matt Raible, a developer advocate at Okta, has worked as an independent consultant for 20 years and did advocacy as a side hobby. He talks about his experience as a consultant and walks through his progress and development.

I started a blog in 2002 and wrote about Java a lot. This was before Stack Overflow, so I used Struts and Java EE. I posted my questions, which you would now post on Stack Overflow, on that blog with stack traces, and people would find them and help. It was a collaborative community. I've always done speaking at conferences on the side.

I started working for Stormpath two years ago, as a contractor part-time, and I was working at Computer Associates at the same time. I was doing Java in the morning at Stormpath and JavaScript in the afternoon at Computer Associates. I really liked the people I was working with at Stormpath and they tried to hire me full-time. I told them to make me an offer that I couldn't refuse, and they said, "We don't know what that is!" I wanted to be able to blog and speak at conferences, so I spent a month coming up with my dream job. Stormpath wanted me to be its Java lead. The problem was that I like Java, but it's not my favorite thing. I tend to do more UI work. The opportunity went away for a month and then I said, "There's a way to make this work! Can I do Java and JavaScript?" Stormpath agreed that instead of being more of a technical leader and owning the Java SDK, I could be one of its advocates. There were a few other people on board in the advocacy team. Six months later, Stormpath got bought out by Okta. As an independent consultant, I was used to switching jobs every six months, but I didn't expect that to happen once I went full-time. That's how I ended up at Okta!
Developer advocacy means weighing the strengths and weaknesses of the tech world

Scott Davis, a Principal Engineer at Thoughtworks, was a classroom instructor, teaching software classes to business professionals, before becoming a developer advocate. As per him, tech really is a world of strengths and weaknesses. Advocacy, I think, is where you honestly say, "If we balance out the pluses and the minuses, I'm going to send you down the path where there are more strengths than weaknesses. But I also want to make sure that you are aware of the sharp, pointy edges that might nick you along the way."

I spent eight years in the classroom as a software instructor and that has really informed my entire career. It's one thing to sit down and kind of understand how something works when you're cowboy coding on your own. It's another thing altogether when you're standing up in front of an audience of tens, or hundreds, or thousands of people.

Discover how developer advocates are putting developer interests at the heart of the software industry in companies including Microsoft and Google with Developer, Advocate! by Geertjan Wielenga. This book is a collection of in-depth conversations with leading developer advocates that reveal the world of developer relations today.

- 6 reasons why employers should pay for their developers' training and learning resources
- "Developers need to say no" – Elliot Alderson on the FaceApp controversy in a BONUS podcast episode [Podcast]
- GitHub has blocked an Iranian software developer's account
- How do AWS developers manage Web apps?
- Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Should you use Bootstrap or Material Design for your next web or app development project?

Guest Contributor
08 Oct 2019
8 min read
Superior user experience is becoming increasingly important for businesses, as it helps them engage users and boost brand loyalty. Front-end web and app development frameworks, namely Bootstrap and Material Design, empower developers to create websites with a robust structure and advanced functionality, thereby delivering outstanding business solutions and an unbeatable user experience. Both Twitter's Bootstrap and Google's Material Design are used by developers to create functional and high-quality websites and apps. If you are an aspiring front-end developer, here's a direct comparison between the two, so you can choose the one that's better suited for your upcoming project.

Bootstrap

Bootstrap is an open-source, intuitive, and powerful framework used for responsive, mobile-first solutions on the web. For several years, Bootstrap has helped developers create splendid mobile-ready front-end websites. In fact, Bootstrap is the most popular CSS framework, as it's easy to learn and offers a consistent design through reusable components. Let's dive deeper into the pros and cons of Bootstrap.

Pros

High speed of development: If you have limited time for website or app development, Bootstrap is an ideal choice. It offers ready-made blocks of code that can get you started in no time, so you don't have to start coding from scratch. Bootstrap also provides ready-made themes, templates, and other resources that can be downloaded and customized to suit your needs, allowing you to create a unique website as quickly as possible.

Bootstrap is mobile-first: Since July 1, 2019, Google has used mobile-friendliness as a critical ranking factor for all websites. This is because users prefer sites that are compatible with the screen size of the device they are using; in other words, they prefer responsive sites. Bootstrap is an ideal choice for responsive sites, as its excellent fluid grid system and responsive utility classes make the task at hand easy and quick.

Enjoys strong community support: Bootstrap has a huge number of resources available on its official website and enjoys immense support from the developer community, which helps developers fix issues promptly. At present, Bootstrap is developed and maintained on GitHub by Mark Otto, currently Principal Design & Brand Architect at GitHub, with nearly 19 thousand commits and 1,087 contributors. The team regularly releases updates to fix new issues and improve the effectiveness of the framework. For instance, the Bootstrap team is working on dropping jQuery in favor of plain JavaScript (a change slated for Bootstrap 5), primarily because jQuery adds 30KB to the webpage size and is tricky to configure with bundlers like Webpack. Flexbox, meanwhile, was a new feature added in Bootstrap 4; in fact, Bootstrap version 4 is rich with features such as a Flexbox-based grid, responsive sizing and floats, auto margins, vertical centering, and new spacing utilities. Further, you will find plenty of websites offering Bootstrap tutorials, along with a wide collection of themes, templates, plugins, and UI kits that can be used to suit your taste and the nature of the project.

Cons

All Bootstrap sites look the same: The Twitter team introduced Bootstrap with the objective of helping developers use a standardized interface to create websites within a short time.
However, one of the major drawbacks of this framework is that websites created with it are highly recognizable as Bootstrap sites. Open Airbnb, Twitter, Apple Music, or Lyft: they all look the same, with bold headlines, rounded sans-serif fonts, and lots of negative space.

Bootstrap sites can be heavy: Bootstrap is notorious for adding unnecessary bloat to websites, as the generated files are huge. This leads to longer loading times and battery-draining issues. Further, if you delete the unused files manually, it defeats the purpose of using the framework. So, if you use this popular front-end UI library in your project, make sure you pay extra attention to page weight and page speed.

May not be suitable for simple websites: Bootstrap may not be the right front-end framework for all types of websites, especially ones that don't need a full-fledged framework. This is because Bootstrap's theme packages are incredibly heavy, with battery-draining scripts. Also, Bootstrap's CSS weighs in at 126KB, with another 29KB of JavaScript, which can increase the site's loading time. In such cases, Bootstrap alternatives, namely Foundation, Skeleton, Pure, and Semantic UI, are adaptable and lightweight frameworks that can meet your development needs and improve your site's user-friendliness.

Material Design

Compared to Bootstrap, Material Design is harder to customize and learn. This design language was introduced by Google in 2014 with the objective of enhancing the design and user interface of Android apps. The language is quite popular among developers, as it offers a quick and effective approach to web development. It includes responsive transitions and animations, lighting and shadow effects, and grid-based layouts. When developing a website or app with Material Design, designers should play to its strengths but be wary of its cons. Let's see why.

Pros

Offers numerous components: Material Design offers numerous components that provide a base design, guidelines, and templates. Developers can build on these to create a suitable website or application for the business. The Material Design documentation offers the necessary information on how to use each component. Moreover, Material Design Lite is quite popular for its customization; many designers create customized components to take their projects to the next level.

Compatible across various browsers: Both Bootstrap and Material Design have sound browser compatibility, as they work in most browsers. Material Design supports Angular Material and React's Material UI, and it also uses the SASS preprocessor.

Doesn't require JavaScript frameworks: Bootstrap depends on JavaScript libraries such as jQuery, whereas Material Design doesn't need any JavaScript frameworks or libraries to design websites or apps. In fact, the platform provides a Material Design framework that allows developers to create innovative components such as cards and badges.

Cons

The animations and vibrant colors can be distracting: Material Design makes extensive use of animated transitions and vibrant colors and images that help bring the interface to life. However, these animations can adversely affect the human brain's ability to gather information.

It is affiliated with Google: Since Material Design is a Google-promoted framework, Android is its prominent adopter. Consequently, developers looking to create apps with a platform-independent UX may find it tough to work with Material Design.
However, when Google introduced the language, it had a broad vision for Material Design that encompassed many platforms, including iOS. The tech giant offers several Google Material Design components for iOS that can be used to render interesting effects using a flexible header, standard material colors, typography, and sliding tabs.

Carries performance overhead: Material Design's extensive use of animations carries a lot of overhead. For instance, effects like drop shadows, color fills, and transform/translate transitions can be jerky and unpleasant for regular users.

Wrapping up: should you use Bootstrap or Material Design for your next web or app development project?

Bootstrap is great for responsive, simple, and professional websites. It enjoys immense support and documentation, making it easy for developers to work with. So, if you are working on a project that needs to be completed within a short time, opt for Bootstrap. The framework is mainly focused on creating responsive, functional, and high-quality websites and apps that enhance the user experience. Notice how these websites have used Bootstrap to build responsive, mobile-first sites. (Source: cssreel) (Source: Awwwards)

Material Design, on the other hand, is specific as a design language and great for building websites that focus on appearance, innovative design, and beautiful animations. You could use Material Design for your portfolio site, for instance. The framework is pretty detailed and straightforward to use, and helps you create websites with striking effects. Check out how these websites and apps use the customized themes, popups, and buttons of Material Design. (Source: Nimbus 9) (Source: Digital Trends)

What do you think? Which framework works better for you: Bootstrap or Material Design? Let us know in the comments section below.

Author Bio

Gaurav Belani is a Senior SEO and Content Marketing Analyst at The 20 Media, a content marketing agency that specializes in data-driven SEO. He has more than seven years of experience in digital marketing and loves to read and write about AI, machine learning, data science, and other emerging technologies. In his spare time, he enjoys watching movies and listening to music. Connect with him on Twitter and LinkedIn.

- Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more
- Warp: Rust's new web framework
- Learn how to Bootstrap a Spring application [Tutorial]
- Bootstrap 5 to replace jQuery with vanilla JavaScript
- How to use Bootstrap grid system for responsive website design?

Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust

Sugandha Lahoti
04 Sep 2019
5 min read
Progressive Web Apps are already being deployed at a massive scale, evidenced by their presence on most websites now. But what's next for PWAs? Alex Kehayis, a developer at Stripe, thinks it's the merging of WebAssembly into PWAs. According to him, the adoption of WebAssembly and the ease of distribution on the web create compelling new opportunities for application development. He has created what he calls Progressive WebAssembly Applications (PWAAs), built entirely using Rust. In his talk at the WebAssembly San Francisco Meetup, Alex walks through the creation of Woz, a PWA toolchain for Rust. Woz is a progressive WebAssembly app generator (PWAA) for Rust, and it makes distributing your app as simple as sharing a hyperlink.

Read also: Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]

Web content has become efficient

Alex begins his talk by pointing out how web content has become massively efficient; this is because it solves three problems:

- Distribution: actually serving content to your users
- Unification: write once and run it everywhere
- Experience: consume content in a low-friction environment

Mobile applications vs web applications

Applications are a kind of elevated form of content: they tend to be more experiential, dynamic, and interactive. Alex points to the definition of 'application' from Wikipedia, which states that applications are software designed to perform a group of coordinated functions, tasks, and activities for the benefit of users.

Despite all progress, mobile apps are still hugely inefficient to create, distribute, and use. Their distribution is generally in the hands of the duopoly, Apple and Google. Unification is generally handled through third-party frameworks such as React Native or Xamarin. And although mobile apps are performant, the user experience carries high friction: a user generally has to switch between apps and wait for them to install and load.

Web-based applications, on the other hand, are quite efficient to create, distribute, and use. Anybody with an internet connection and a browser can reach a web application. Unification happens through standards rather than frameworks, which is more efficient. The user experience is also dynamic and fast: you jump right into it and don't necessarily have to install anything.

Should everybody just use web apps instead of mobile apps?

Although mobile applications are a bit inefficient, they bring certain advantages:

- Native applications have better performance than web-based apps
- Encapsulation (e.g. home screen, self-contained experience)
- Mobile apps are offline by default
- Mobile apps use hardware/sensors
- Native apps typically consume less battery than web apps

In order to get the best of both worlds, Alex suggests the following steps:

- Bring web applications to mobile. This has already been implemented: Progressive Web Applications.
- Improve the state of performance and access. Alex says that WebAssembly is a viable choice for achieving this; WebAssembly is highly performant when paired with a language like Rust.

Progressive WebAssembly Applications

Woz, a Progressive WebAssembly Application generator

Alex proceeds to talk about Woz, a progressive WebAssembly application generator. It combines the good parts of PWAs and WebAssembly and works as a toolchain for building and deploying performant mobile apps with Rust, distributed as simply as sharing a hyperlink.
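As a rough sketch of the single render entry point Alex mentions below - our own illustration based on the public wasm-bindgen and web-sys APIs, not Woz's actual source:

```rust
// Hypothetical sketch of a wasm-bindgen entry point: one public `render`
// function exported to JavaScript that writes into the DOM.
// (Requires the wasm-bindgen and web-sys crates, with the web-sys
// features Window, Document, Element, HtmlElement, and Node enabled.)
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn render() -> Result<(), JsValue> {
    let window = web_sys::window().expect("no global window");
    let document = window.document().expect("no document on window");
    let body = document.body().expect("document has no body");

    let heading = document.create_element("h1")?;
    heading.set_text_content(Some("Hello from Rust + WebAssembly"));
    body.append_child(&heading)?;
    Ok(())
}
```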
Woz brings distribution via browsers, unification via web standards, and experience via hyperlinks. Woz uses wasm-bindgen to generate the interop calls between WebAssembly and JavaScript, which allows you to write the entire application in Rust, including rendering to the DOM. It will soon be coming with 'managed charging' for your apps and will even provide multiple copies your users can share, all with a hyperlink.

In addition to everything a PWA needs (an SSL certificate, a PWA manifest, a splash screen, home screen icons, and a service worker), a PWAA requires JS bindings to WebAssembly and code to fetch, compile, and run the wasm.

His talk also covered some popular Rust-based frontend frameworks:

- Yew: "Yew is a modern Rust framework inspired by Elm and React for creating multi-threaded frontend apps with WebAssembly."
- Sauron: "Sauron is an html web framework for building web-apps. It is heavily inspired by elm."
- Percy: "A modular toolkit for building isomorphic web apps with Rust + WebAssembly"
- Seed: "A Rust framework for creating web apps"

Read also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer Josh Triplett

With Woz, the goal, Alex says, was to stay in Rust and create a PWA that can be installed to your home screen. The sample app he created weighs only about 300KB. Alex says, "In order to actually write the app, you really only need one entry point - it's a public method render that's decorated wasm_bindgen. The rest will kind of figure itself out. You don't necessarily need to go create your own JavaScript file." He then proceeded to show a quick demo of what it looks like.

What's next?

WebAssembly will continue to evolve, and more languages and ecosystems will be able to target it. Progressive web apps will continue to evolve too. PWAAs are an interesting proposition. In Alex's words: "We should really be liberating mobile apps and bringing them to the web. I think WebAssembly is kind of a missing link to some of these things."

Watch Alex Kehayis's full talk on YouTube. Slides are available here. https://www.youtube.com/watch?v=0ySua0-c4jg

Other news in tech

- Wasmer's first Postgres extension to run WebAssembly is here!
- Mozilla proposes WebAssembly Interface Types to enable language interoperability
- Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

React.js: why you should learn the front end JavaScript library and how to get started

Guest Contributor
25 Aug 2019
9 min read
React.js is one of the most powerful JavaScript libraries. It powers the interfaces of major organizations such as Amazon (the e-commerce giant that recently introduced a programming language of its own), PayPal, BBC, CNN, and over a million other websites worldwide. Created by Facebook, React.js has quickly built a formidable technical reputation and a loyal fan following. React.js is currently mentioned extensively in job openings - companies want to hire dedicated React.js developers more than Vue.js engineers. In this post, you'll find out why React.js is the right library to start your remote work with, despite its steep learning curve, and the ways to use it more efficiently.

5 reasons to learn React.js

Developers might be hesitant to learn React, as it's not a full-fledged framework and a developer needs to handle models and controllers on their own. Nevertheless, there is more than a handful of reasons to become a React.js developer. Let's take a closer look at them:

1. It's functional

There's no need to use classes in React. The library relies heavily on functional components, allowing developers not to overcomplicate the codebase. While classes offer developers a handful of convenient features (lifecycle hooks and such), the benefits provided by the functional syntax are loud and clear:

- Higher readability. Properties like state functions or lifecycle hooks tend to make reading and testing the code a pain in the neck. Plain JS functions are easier to wrap your head around, and a developer can achieve the same functionality with less code.
- The software engineering team will more likely adhere to best practices. Stateless functional components encourage front-end engineers to separate presentational and container components. It takes more time to adjust to a more complex workflow, but in the long run it pays off in a better code structure.
- ES6 destructuring helps spot bloated components. A developer can see the list of dependencies bound to every component, and as a result will be able to break up overly complex structures or rethink them altogether.

React.js is the tool that recognizes the power of functional components to its fullest extent (even the glorified Angular 2 can't compare). As a result, developers can strive for maximum code eloquence and improved performance.

2. It's declarative

Most likely, you are no stranger to CSS and the SQL database programming language, and, as such, are familiar with declarative programming. Still, to recap, here are the differences between the declarative and imperative approaches:

- Imperative programming uses statements to manipulate the state of the program.
- Declarative programming is a paradigm that changes the system based on the communication logic.

While imperative programming gives developers the possibility to design a control flow step by step in statements, and may come across as easier, it is declarative programming that has more perks in the long run:

- Higher readability. Low-level details will not clutter the code, as the paradigm is not concerned with them.
- More freedom for reasoning. Instead of outlining the procedure step by step, a successful React.js developer focuses on describing the solution and its logic.
- Reusability. You can apply a declarative description to various scenarios - that is far more challenging for a step-by-step construct.
- Efficiency in solving specific domain problems. The high performance of declarative programming stems from the fact that it adapts to the domain.
  For databases, for instance, a developer will create a set of operations to handle data, and so on.

Capitalizing on the benefits of declarative programming is React's strong point. You will be able to create transparent, reusable, and highly readable interfaces.

3. Virtual DOM

Developers who manage high-load projects often face DOM-related challenges. Bottlenecks tend to appear even after a small change in the document object model, because the DOM's tree structure creates high interconnectivity between DOM components. To facilitate maintenance, Facebook implemented the virtual DOM in React.js. It allows developers to verify the project's error-free performance before updating the actual DOM tree. The virtual DOM provides extra assurance about the app's performance, and in the long run it significantly improves user satisfaction rates.

4. Downward data binding

As opposed to Angular's two-way data binding, React.js uses a downward structure to ensure that changes in child structures do not affect parents. A developer can only transfer data from a parent to a child, not vice versa. The key components of downward data binding include:

- Passing the state to the child components as well as the view;
- The view triggers actions;
- Actions can update the state;
- State updates are passed on to the view and the child components.

Compared to two-way data binding, the one implemented by React.js is less error-prone (a developer controls data to a larger extent) and easier to test and debug, thanks to a clearly defined structure.

5. React Developer Tools

React.js developers benefit from a wide toolkit that covers all facets of application performance. There's a wide array of debugging and design solutions, including the life-saving React Developer Tools extension for Chrome and Firefox. Using this and other tools, you can define child and parent components, examine their state, observe hierarchies, and inspect props.

Advantages of React.js

React.js helps developers systemize the interfaces of their projects by introducing a 'components' structure. The library allows the creation of modular views that consist of reusable blocks - pop-ups, tables, etc. One of the most significant advantages of using React.js is the way it improves user experience. A textbook example of library usage on Facebook is the possibility to see the changing number of likes in real time without reloading the page.

Originally, React.js was released back in 2011 by a Facebook engineer as a way to scale and maintain the complex interface of the Facebook Ads app. The library's high functionality resulted in its adoption by other SMEs and large corporations - now React.js is one of the most widely used development tools.

How to use React.js?

Depending on your HTML and JavaScript proficiency, it may take anywhere from a few days to months to get the hang of React. For a basic understanding of the library, take a look at React.js's features as well as the setup process.

Getting started with React.js

To start working with React, a developer has to import the React and ReactDOM libraries into a basic HTML file. Now that you have set up a working space, take your time to examine the defining features of React.js.

Components

All React.js elements are components. Depending on the syntax, they are grouped into class and functional components. As both lead to equal outcomes in most cases, a React.js beginner should start by learning functional components.
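As a quick illustration - a minimal sketch of our own, not from the original post - here is a functional component that receives props, the concept covered next:

```jsx
import React from 'react';
import ReactDOM from 'react-dom';

// A functional component: a plain function that takes props
// and returns the UI for them.
function Greeting(props) {
  return <h1>Hello, {props.name}!</h1>;
}

// Render it into a DOM node with id="root".
ReactDOM.render(<Greeting name="React" />, document.getElementById('root'));
```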
Props

Props are the way for React.js developers to pass data from parent to child structures. Keep in mind that, unlike state, props are immutable under any circumstances. They give developers high code reusability, as the same component can display the same message on all pages.

At times, developers do want components to change themselves. That's when state comes in handy.

States

States are used when a developer wants the application data to change. The most common operations that have to do with states include:

- Initialization;
- Modification;
- Adding event handlers.

These are the basic concepts a React.js developer has to be familiar with to get the most out of the library.

React.js best practices

If you're already using React.js, be sure to make the most out of it. Keep track of new trends and best practices in all facets of app management - accessibility, performance, security, and others. Here's a short collection of React.js development tips that will improve maintenance and development efficiency.

Performance:

- Consider using React.Fragment to avoid extra DOM nodes.
- To load components on demand, use React.lazy along with React.Suspense.
- Another popular practice among JS developers is taking advantage of shouldComponentUpdate to avoid unnecessary rendering.
- Try to keep the JS code as clean as possible; for instance, clean up after components you no longer use with componentWillUnmount().
- For component caching, use React.memo.

Accessibility:

- Pay attention to the casing and reserved-word differences between HTML and React.js to avoid bottlenecks.
- To set up page titles, use the react-helmet plugin.
- Don't forget to put ALT tags on any non-text content.
- Use ref() functions to pinpoint the focus on a given component.
- External tools like ESLint accessibility plugins help developers monitor accessibility.

Debugging:

- Use Chrome DevTools - there are dozens of features: a Redux logger, an error message handler, and so on.
- Leave the console open while coding to detect errors faster.
- To have a better understanding of the code you're dealing with, adopt a table view for objects.
- Other quick debugging hacks include marking DOM items to find them quickly in the Chrome Inspector, and viewing full stack traces for functions.

The bottom line

Thanks to a powerful team of engineers at work, React.js has quickly become a powerhouse for front-end development. Its heavy reliance on JavaScript makes the library easier to get to know. While React.js's pros and cons are extensive, the possibility to express UIs declaratively, along with the promotion of functional components, makes it a favorite framework for many. The wide variety of projects it powers and the large number of job openings prove that knowing React is no longer optional for developers. The good news is that there's no lack of learning tools and resources online. Take your time to explore the library - you'll be amazed by the order and efficiency React brings to applications.

Author Bio

Anastasia Stefanuk is a passionate writer and a marketing manager at Mobilunity. The company provides professional staffing services, so she is always aware of technology news and wants to share her experience to help tech startups and companies stay up to date.

- Getting started with React Hooks by building a counter with useState and useEffect
- React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more
- 5 Reasons to Learn ReactJS

What are APIs? Why should businesses invest in API development?

Packt Editorial Staff
25 Jul 2019
9 min read
Application Programming Interfaces (APIs) are like doors that provide access to information and functionality in other systems and applications. APIs share many of the same characteristics as doors; for example, they can be as secure and closely monitored as required. APIs can add value to a business by allowing it to monetize information assets, comply with new regulations, and enable innovation by simply providing access to business capabilities previously locked in old systems.

This article is an excerpt from the book Enterprise API Management written by Luis Weir. The book explores the architectural decisions, implementation patterns, and management practices for successful enterprise APIs. In this article, we'll define the concept of APIs and see what value APIs can add to a business.

APIs, however, are not new. In fact, the concept goes way back in time and has been present since the early days of distributed computing. However, the term as we know it today refers to a much more modern type of API, known as REST or web APIs.

The concept of APIs

Modern APIs started to gain real popularity when, in the same year of their inception, eBay launched its first public API as part of its eBay Developers Program. eBay's view was that by making most of its website functionality and information accessible via a public API, it would not only attract but also encourage communities of developers worldwide to innovate by creating solutions using the API. From a business perspective, this meant that eBay became a platform for developers to innovate on, and, in turn, eBay would benefit from having new users that it perhaps couldn't have reached before.

eBay was not wrong. In the years that followed, thousands of organizations worldwide, including well-known brands such as Salesforce.com, Google, Twitter, Facebook, Amazon, and Netflix, adopted similar strategies. In fact, according to programmableweb.com (a well-known public API catalogue), the number of publicly available APIs has been growing exponentially, reaching over 20k as of August 2018.

Figure 1: Public APIs as listed in programmableweb.com in August 2018

It may not sound like much, but considering that each of the listed APIs represents a door to an organization's digital offerings, we're talking about thousands of organizations worldwide that have already opened their doors to new digital ecosystems, where APIs have become the product these organizations sell and developers have become the buyers.

Figure: Digital ecosystems enabled by APIs

In such digital ecosystems, communities of internal, partner, or external developers can rapidly innovate by simply consuming these APIs to do all sorts of things: from offering hotel/flight booking services using the Expedia API, to providing educational solutions that make sense of the space data available through the NASA API. There are ecosystems where business partners can easily engage in business-to-business transactions, either to resell goods or purchase them, electronically and without having to spend on Electronic Data Interchange (EDI) infrastructure; and ecosystems where an organization's internal digital teams can easily innovate, as key enterprise information assets are already accessible.

So, why should businesses care about all this? There is, in fact, not one answer but several, as described in the following sections.

APIs as enablers for innovation and bimodal IT

What is innovation?
According to a common definition, innovation is the process of translating an idea or invention into a good or service that creates value or for which customers will pay. In the context of businesses, according to an article by HBR, innovation manifests itself in two ways:

- Disruptive innovation: the process whereby a smaller company with fewer resources is able to successfully challenge established incumbent businesses.
- Sustaining innovation: when established businesses (incumbents) improve their goods and services in the eyes of existing customers. These improvements can be incremental advances or major breakthroughs, but they all enable firms to sell more products to their most profitable customers.

Why is this relevant? It is well known that established businesses struggle with disruptive innovation; the Netflix vs Blockbuster example reminds us of this fact. By the time disruptors catch up with an incumbent's portfolio of goods and services, they are able to do so with lower prices, better business models, lower operating costs, and far more agility and speed in introducing new or enhanced features. At this point, sustaining innovation is not good enough to respond to the challenge.

With all the recent advances in technology and the internet, the rate at which disruptive innovation is challenging incumbents has only grown exponentially. Therefore, in order for established businesses to endure the challenge put upon them, they must somehow also become disruptors. The same HBR article describes a point of view on how to achieve this from a business standpoint. From a technology standpoint, however, unless the several systems that underpin a business are "enabled" to deliver such disruption, no matter what is done from a business standpoint, the exercise will likely fail.

Perhaps by mere coincidence, or by true acknowledgment of this, Gartner introduced the concept of bimodal IT in December 2013, and the concept is now mainstream. Gartner defined bimodal IT as follows:

"The practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed."

Figure: Gartner's bimodal IT

According to Gartner, Mode 1 (or slow) IT organizations focus on delivering core IT services on top of more traditional and hard-to-change systems of record, which in principle are changed and improved in longer cycles and are usually managed with long-term waterfall project mechanisms. For Mode 2 (or fast) IT organizations, the main focus is to deliver agility and speed, so they act more like a startup (or digital disruptor, in HBR terms) inside the same enterprise.

What is often misunderstood, however, is how fast IT organizations can innovate disruptively when most of the information assets that are critical to bringing context to any innovation reside in backend systems, and any access to them has to be delivered by the slower IT sibling. This dilemma means that the speed of innovation is constrained by the speed at which access to core information assets can be delivered.

As the saying goes, "where there's a will, there's a way." APIs could be implemented as the means for the fast IT to access core information assets and functionality without the intervention of the slow IT.
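As a hedged sketch of what such fast-IT consumption might look like in practice (the endpoint, token, and response shape below are invented purely for illustration):

```typescript
// Hypothetical example of "fast IT" consuming an API exposed over the
// "slow IT" systems of record, without touching the backend directly.
async function getAccountBalance(accountId: string, token: string): Promise<number> {
  const response = await fetch(`https://api.example.com/v1/accounts/${accountId}/balance`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) {
    throw new Error(`API call failed: ${response.status}`);
  }
  const body = await response.json();
  return body.balance; // response shape assumed for this sketch
}
```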
By using APIs to decouple the fast IT from the slow IT, innovation can occur more easily. However, as with everything, it is easier said than done. In order to achieve such organizational decoupling using APIs, organizations should first build an understanding of which information assets and business capabilities are to be exposed as APIs, so the fast IT can consume them as required. This understanding must also articulate the priorities of when different assets are required and by whom, so the creation of APIs can be properly planned for and delivered. Luckily, for organizations that already have mature service-oriented architectures (SOA), some of this work will probably already be in place. Organizations without such luck should plan for this activity as a fundamental component of their digital strategy.

The remaining question, then, would be: which team is responsible for defining and implementing such APIs, the fast IT or the slow IT? Although the long answer to this question is addressed throughout the different chapters of this book, the short answer is neither and both. It requires a multi-disciplinary team of people, with the right technology capabilities available to them, so they can incrementally API-enable the existing technology landscape, based on business-driven priorities.

APIs to monetize information assets

Many experts in the industry concur that an organization's most important asset is its information. In fact, a recent study by the Massachusetts Institute of Technology (MIT) suggests that data is the single most important asset for organizations:

"Data is now a form of capital, on the same level as financial capital in terms of generating new digital products and services. This development has implications for every company's competitive strategy, as well as for the computing architecture that supports it."

If APIs act as doors to such assets, then APIs also provide businesses with an opportunity to monetize them. In fact, some organizations are already doing so. According to another article by HBR, 50% of the revenue that Salesforce.com generates comes from APIs, while eBay generates about 60% of its revenue through its API. This is perhaps not such a huge surprise, given that both of these organizations were pioneers of the API economy.

Figure: The API economy in numbers

What's even more surprising is the case of Expedia. According to the same article, 90% of Expedia's revenue is generated via APIs. This is really interesting, as it basically means that Expedia's main business is to indirectly sell electronic travel services via its public API.

However, it's not all that easy. According to the same MIT study, most CEOs of Fortune 500 companies don't yet fully acknowledge the value of APIs. An intrinsic reason for this could be the lack of understanding and visibility of how data is currently being (or not being) used. Assets that sit hidden in systems of record, accessed only via traditional integration platforms, will not, in most cases, give the business insight into how information is being used and the business value it adds. APIs, on the other hand, are better suited to providing insight into how, by whom, when, and why information is being accessed, giving the business the ability to make better use of information - for example, to determine which assets have better capital potential.

In this article we provided a short description of APIs and how they act as an enabler of digital strategies.
Define the right organisation model for business-driven APIs with Luis Weir's upcoming release, Enterprise API Management.

- To create effective API documentation, know how developers use it, says ACM
- GraphQL API is now generally available
- Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more

Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter

Guest Contributor
14 Jun 2019
5 min read
Apple recently announced a new declarative UI framework, SwiftUI, at its annual developer conference, WWDC 2019. SwiftUI will power all of Apple's devices (MacBooks, watches, TVs, iPads, and iPhones). You can integrate SwiftUI views with objects from the UIKit, AppKit, and WatchKit frameworks to take further advantage of platform-specific functionality. It's said to be productive for developers and to save effort while writing code.

The SwiftUI documentation states: "Declare the content and layout for any state of your view. SwiftUI knows when that state changes, and updates your view's rendering to match."

This means that developers simply have to describe the current UI state in response to events and leave the in-between transitions to the framework; the UI updates automatically as the state changes.

Benefits of a declarative UI language

A declarative UI language expresses the logic of computation without describing its control flow. You describe which elements you need and how they should look, without having to worry about their exact position and visual style. Some of the benefits of a declarative UI language are:

- Increased speed of development.
- Seamless integration between designers and coders.
- Forced separation between logic and presentation.
- Changes in the UI don't require recompilation.

SwiftUI's declarative syntax is quite similar to Google's Flutter, which also runs on declarative UI programming. Flutter contains beautiful widgets with captivating logos, fonts, and expressive style. The use of Flutter has increased significantly in 2019, and it is among the fastest-growing skills in the developer community. Similar to Flutter, SwiftUI provides layout structure, controls, and views for the application's user interface.

This is the first time Apple has stepped into declarative UI programming, and it has described SwiftUI as a modern way to declare user interfaces. With the imperative method, developers had to manually construct a fully functional UI entity and later change it using methods and setters. In SwiftUI, the application layout needs to be described just once, vastly reducing code complexity.

Apart from the declarative UI, SwiftUI also benefits from Xcode, Apple's integrated development environment and its suite of software development tools. If any code modifications are made inside Xcode, developers can now preview the code in real time and tweak parameters. SwiftUI also features dark mode, Xcode's drag-and-drop building tools, and interface layout. Languages such as Hebrew and Arabic are also incorporated. However, one drawback of SwiftUI is that it only supports apps running iOS 13 onwards. It's a somewhat limited tool in this sense, and production would take at least a year or two if an older iOS version is to be supported.

SwiftUI vs Flutter development

Apple's answer to Google is simple here. Flutter is compatible with both Android and iOS, whereas SwiftUI is a new member of Apple's ecosystem. Developers use Flutter for cross-platform apps with a single codebase. This highlights that Flutter is pushing other languages to adopt its simple way of developing UIs. Now, with the introduction of SwiftUI, which works on the same mechanism as Flutter, Apple has announced itself to the world of declarative UI programming.

What does it mean for developers who build exclusively for iOS? Well, now they can make native apps for clients who do not prefer the Flutter way.
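For a concrete taste of the declarative style described above, here is a minimal SwiftUI counter view - a sketch of our own based on the publicly documented API, not code from Apple's talks:

```swift
import SwiftUI

// Describe the view for a given state; SwiftUI re-renders
// automatically whenever that state changes.
struct CounterView: View {
    @State private var count = 0

    var body: some View {
        VStack {
            Text("Tapped \(count) times")
            Button(action: { self.count += 1 }) {
                Text("Tap me")
            }
        }
    }
}
```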
Beyond declarative syntax, SwiftUI is tightly integrated with Xcode, Apple's integrated development environment and software toolset. If any code modifications are made inside Xcode, developers can now preview the result in real time and tweak parameters on the fly. SwiftUI also supports dark mode, drag-and-drop building tools in Xcode, and flexible interface layout, and right-to-left languages such as Hebrew and Arabic are handled as well. However, one of the drawbacks of SwiftUI is that it only supports apps that target iOS 13 and later. It is a limited tool in this sense, and moving it into production could take a year or two if older iOS versions must be supported.

SwiftUI vs Flutter Development

Apple's answer to Google is simple here. Flutter is compatible with both Android and iOS, whereas SwiftUI is a new member of Apple's ecosystem. Developers use Flutter for cross-platform apps with a single codebase, and its success shows that Flutter has been pushing other frameworks to adopt its simple way of developing UI. Now, with the introduction of SwiftUI, which works on the same declarative mechanism as Flutter, Apple has announced itself to the world of declarative UI programming. What does this mean for developers who build exclusively for iOS? They can now build native apps for clients who do not prefer the Flutter way.

SwiftUI will probably reduce the incentive for Apple-only developers to adopt Flutter. Many have pointed out that Apple has essentially introduced a new framework for the same UI experience, so we will have to wait and see what SwiftUI has in store for the longer run. Developers in communities like Reddit are actively sharing their thoughts on the arrival of SwiftUI, and many agree that "SwiftUI is Flutter with no Android support". Developers who target Apple-only platforms through SwiftUI will still return to Flutter to target all other platforms, which means Flutter could benefit from SwiftUI rather than the other way round.

The popularity of React Native is no surprise: native mobile development for iOS and Android is costly, and companies usually have to work with two different teams. Cross-platform solutions drastically reduce development costs. One could think of Flutter as React Native with full support for native features, since Flutter delivers performance comparable to native apps without depending on the native platforms for solutions. Like React Native, Flutter uses reactive-style views; however, while React Native transpiles to native widgets, Flutter compiles all the way down to native code.

Conclusion

SwiftUI is about making development interactive, faster, and easier. The built-in graphical UI design tool allows designers to assemble a user interface without having to write any code, and once the code is modified, it instantly appears in the visual design tool. Code can be assembled, refined, and tested in real time, with previews that run on a range of Apple's devices. However, SwiftUI is still under development and will take time to mature. On the other hand, Flutter app development services continue to deliver scalable solutions for startups and enterprises. Building native apps is not cheap, and Flutter, with the same native feel, provides cost-effective services. It remains a competitive cross-platform framework with or without SwiftUI's presence.

Author Bio

Keval Padia is the CEO of Nimblechapps, a prominent mobile app development company based in India. He has a good knowledge of mobile app design and user experience design. He follows different tech blogs, and current updates in the field lure him to express his views and thoughts on certain topics.

Why Ruby developers like Elixir

Guest Contributor
26 Apr 2019
7 min read
Learning a new technology stack requires time and effort, and some developers prefer to stick with their habitual ways. This is one of the major reasons why developers stick with Ruby: its libraries are very mature, which makes it a very productive language used by many developers worldwide. However, more and more experienced Ruby coders are turning to Elixir. Why is that? Let's find out all the ins and outs of Elixir and what makes it so special for Ruby developers.

What is Elixir?

Elixir is a vibrant and practical functional programming language created for developing scalable and maintainable applications. It leverages the Erlang VM, which is famous for running low-latency, distributed, fault-tolerant systems, and it is currently being used successfully in web development. This general-purpose programming language first appeared back in 2011. It was created by José Valim, one of the major authors of Ruby on Rails, as a result of his efforts to solve the concurrency problems that Ruby on Rails has.

Phoenix Framework

If you are familiar with Elixir, you have probably heard of Phoenix as well. Phoenix is an Elixir-powered web framework, the one most frequently used by Elixir developers. It incorporates some of the best Ruby solutions while taking them to the next level, allowing developers to enjoy speed and maintainability at the same time.

Core features of Elixir

Over time, Elixir has evolved into a dynamic language that numerous programmers around the world use for their projects. Below are the core features that make Elixir so appealing to web developers; a short code sketch of the process model follows the list.

- Scalability. Elixir code is executed within small, isolated processes, and all information is transferred via messages. If an application has many users or is growing actively, Elixir is a perfect choice because it can cope with high loads without the need for extra servers.
- Functionality. Elixir is built to make coding easier and faster. The language is well designed for writing short, fast code that can be maintained easily.
- Extensibility and DSLs. Elixir is an extensible language that allows coders to extend it naturally into specific domains, increasing their productivity significantly.
- Interactivity. With tools like IEx, Elixir's interactive shell, developers can use auto-completion, debug, reload code, and format their documentation.
- Error resistance. Elixir is one of the strongest systems in terms of fault tolerance. Elixir supervisors describe how to react when a failure occurs so that the system achieves complete recovery. Supervisors can follow different strategies to create a hierarchical process structure, also referred to as a supervision tree, which guarantees the smooth, error-tolerant performance of applications.
- Handy tools. Elixir gives developers a wide range of handy tools like Hex and Mix, which help programmers improve software resources in terms of discovery, quality, and sustainability.
- Compatibility with Erlang. Elixir developers have full access to the Erlang ecosystem, because Elixir code executes on the Erlang VM.
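As a hedged illustration of the process-and-message model mentioned above (the module name Greeter and the message shapes are invented for this example, not taken from the article), the following snippet spawns a lightweight Elixir process and communicates with it purely through messages:

```elixir
defmodule Greeter do
  # A long-running process: waits for a message, replies, and loops.
  def loop do
    receive do
      {:greet, name, caller} ->
        # Pattern matching picks the message tuple apart directly.
        send(caller, {:greeting, "Hello, #{name}!"})
        loop()
    end
  end
end

# Spawn an isolated process; all interaction happens via messages.
pid = spawn(Greeter, :loop, [])
send(pid, {:greet, "Ruby developer", self()})

receive do
  {:greeting, text} -> IO.puts(text)
end
```

Because each process is isolated and communicates only through messages, a crash in one process cannot corrupt another's state, which is exactly what supervisors build on for fault tolerance.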
Disadvantages of Elixir

The Elixir ecosystem isn't perfect and complete yet. Chances are there is no library to integrate with the service you are working on, so when coding in Elixir you may sometimes have to build your own libraries. The reason is that the Elixir community isn't as big as the communities of well-established languages like Ruby, and some developers believe that Elixir is a niche language that is difficult to get used to.

- Functional programming. This feature of Elixir is both an advantage and a disadvantage. Most coding languages are object-oriented, so it can be hard for a developer to switch to a functional language.
- Limited talent pool. Elixir is still quite new, and it is harder to find coders with a lot of Elixir experience than with other languages. Yet, as the language gets more and more traction, companies and individual developers are showing more interest in it.

As you can see, there are some downsides to using Elixir as your programming language. However, due to the advantages it offers, some Ruby developers think it is worth a try. Let's find out why.

Why Elixir is popular among Ruby developers

As you probably know, Ruby and Ruby on Rails are technologies that contribute a lot to programmers' happiness. There are many reasons for developers to love them, but how does Elixir compare? If you analyze what makes programmers happy, you arrive at a few important points. Let's name them and see whether Elixir delivers on each.

- Productive technologies. Elixir is extremely productive; with it, it is possible to grow and scale apps quickly.
- Helpful frameworks, tools, and services. Though there are not yet many libraries in Elixir, their number is continuously growing thanks to the core team and contributors, and Phoenix and Elixir's extensive toolset are already a strong point.
- Speed of building new features. Due to Elixir's clean syntax, features can be implemented in fewer lines of code.
- Active community. Though the Elixir community is still not massive, it is friendly, active, and growing at a fast pace.
- Comfort and satisfaction from development. Elixir programmers enjoy the fact that the language is good at both performance and development speed; they don't need to compromise on either of these important aspects.

As you can see, Elixir still has room for improvement, but it is progressing swiftly. In addition to the overall experience, there are technical reasons that keep Ruby developers hooked on Elixir programming:

- It solves the concurrency issue that Ruby currently has. As Elixir runs on the Erlang VM, it handles distributed systems much more effectively than Ruby.
- It runs fast. In fact, Elixir is faster than Ruby in terms of both response and compilation times.
- It fits decentralized systems perfectly. Unlike Ruby, Elixir uses message passing to convey commands, which makes it a perfect fit for building fault-tolerant, decentralized systems.
- It scales easily. If you expect the code of your project to be very large and the website you are building to get a lot of traffic, Elixir is a good choice: thanks to built-in tools like umbrella projects, you can break the code into chunks that are easier to deal with.
- It is the first programming language after Ruby to treat code aesthetics and language UX as priorities, and it cares about its libraries and the whole ecosystem.
- It is one of the most practical functional programming languages. In addition to being efficient, it has a modern-looking syntax similar to Ruby's.
- It offers clear and direct code representation: the language is nearly homoiconic.
- The Open Telecom Platform (OTP) gives Elixir its fault-tolerance and concurrency capabilities.
- Quick response. Elixir response times are under 100 ms, so no time is wasted and you can handle numerous requests on the same hardware.
- Zero downtime. With Elixir, you can reach 100% uptime without having to stop for updates; you can deliver updates to production without interfering with its performance.
- No reinventing the wheel. With Elixir, developers can use existing coding patterns and libraries for their projects.
- Exhaustive documentation. Elixir has instructive documentation that is easy to comprehend.

Being quite a young programming language, Elixir has already attracted a lot of devoted followers thanks to all the features described above. It has the potential to make programming easier, more fun, and in line with the demands of modern businesses. Choosing Elixir is worth it for all the benefits the language offers. We believe that clean and comprehensible syntax, fast performance, high stability, and error tolerance give Elixir a successful future. Technology giants like Discord, Bleacher Report, Pinterest, and Moz have been using Elixir for a while now, enjoying all the competitive advantages it has to offer.

Author Bio

Maria Redka is a Technology Writer at MLSDev, a web and mobile app development company in Ukraine. She has been writing content professionally for more than 3 years.

A five-level learning roadmap for Functional Programmers

Sugandha Lahoti
12 Apr 2019
4 min read
The following guide serves as an excellent learning roadmap for functional programming. It can be used to track your level of knowledge of functional programming. The guide was developed for the Fantasyland Institute of Learning for the LambdaConf conference, and it was designed for statically-typed functional programming languages that implement category theory.

This post is extracted from the book Hands-On Functional Programming with TypeScript by Remo H. Jansen. In this book, you will understand the pros, cons, and core principles of functional programming in TypeScript.

The roadmap describes five levels of difficulty: Beginner, Advanced Beginner, Intermediate, Proficient, and Expert. Languages such as Haskell support category theory natively, but we can take advantage of category theory in TypeScript by implementing it or by using third-party libraries. Not all the items in the list are applicable to TypeScript due to language differences, but most of them are.

Beginner

To reach the beginner level, you will need to master the following concepts and skills (a short TypeScript sketch of these skills follows the lists).

Concepts:
- Immutable data
- Second-order functions
- Constructing and destructuring
- Function composition
- First-class functions and lambdas

Skills:
- Use second-order functions (map, filter, fold) on immutable data structures
- Destructure values to access their components
- Use data types to represent optionality
- Read basic type signatures
- Pass lambdas to second-order functions
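As a hedged sketch of the beginner-level skills (the data and the names prices, Option, and head are invented for illustration), here is how immutable data, destructuring, second-order functions, and optionality might look in TypeScript:

```typescript
// Immutable data: a readonly array cannot be mutated in place.
const prices: readonly number[] = [9.99, 4.5, 12.0, 3.25];

// Second-order functions taking lambdas: map, filter, and reduce (fold).
const discountedTotal = prices
  .filter((p) => p > 4)            // keep prices above 4
  .map((p) => p * 0.9)             // apply a 10% discount
  .reduce((sum, p) => sum + p, 0); // fold into a total

// Destructuring to access the components of a value.
const [first, ...rest] = prices;

// Representing optionality with a data type instead of null.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

const head = <T>(xs: readonly T[]): Option<T> =>
  xs.length > 0 ? { kind: "some", value: xs[0] } : { kind: "none" };

console.log(discountedTotal, first, rest, head(prices));
```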
Advanced beginner

To reach the advanced beginner level, you will need to master the following concepts and skills:

Concepts:
- Algebraic data types
- Pattern matching
- Parametric polymorphism
- General recursion
- Type classes, instances, and laws
- Lower-order abstractions (equal, semigroup, monoid, and so on)
- Referential transparency and totality
- Higher-order functions
- Partial application, currying, and point-free style

Skills:
- Solve problems without nulls, exceptions, or type casts
- Process and transform recursive data structures using recursion
- Use functional programming in the small
- Write basic monadic code for a concrete monad
- Create type class instances for custom data types
- Model a business domain with algebraic data types (ADTs)
- Write functions that take and return functions
- Reliably identify and isolate pure code from impure code
- Avoid introducing unnecessary lambdas and named parameters

Intermediate

To reach the intermediate level, you will need to master the following concepts and skills:

Concepts:
- Generalized algebraic data types
- Higher-kinded types
- Rank-N types
- Folds and unfolds
- Higher-order abstractions (category, functor, monad)
- Basic optics
- Existential types
- Embedded DSLs using combinators

Skills:
- Implement efficient persistent data structures
- Implement large functional programming applications
- Test code using generators and properties
- Write imperative code in a purely functional way through monads
- Use popular purely functional libraries to solve business problems
- Separate decisions from effects
- Write a simple custom lawful monad
- Write production medium-sized projects
- Use lenses and prisms to manipulate data
- Simplify types by hiding irrelevant data with existentials

Proficient

To reach the proficient level, you will need to master the following concepts and skills:

Concepts:
- Codata
- (Co)recursion schemes
- Advanced optics
- Dual abstractions (comonad)
- Monad transformers
- Free monads and extensible effects
- Functional architecture
- Advanced functors (exponential, profunctors, contravariant)
- Embedded domain-specific languages (DSLs) using generalized algebraic data types (GADTs)
- Advanced monads (continuation, logic)
- Type families, functional dependencies (FDs)

Skills:
- Design a minimally powerful monad transformer stack
- Write concurrent and streaming programs
- Use purely functional mocking in tests
- Use type classes to modularly model different effects
- Recognize type patterns and abstract over them
- Use functional libraries in novel ways
- Use optics to manipulate state
- Write custom lawful monad transformers
- Use free monads/extensible effects to separate concerns
- Encode invariants at the type level
- Effectively use FDs/type families to create safer code

Expert

To reach the expert level, you will need to master the following concepts and skills:

Concepts:
- High performance
- Kind polymorphism
- Generic programming
- Type-level programming
- Dependent types, singleton types
- Category theory
- Graph reduction
- Higher-order abstract syntax
- Compiler design for functional languages
- Profunctor optics

Skills:
- Design a generic, lawful library with broad appeal
- Prove properties manually using equational reasoning
- Design and implement a new functional programming language
- Create novel abstractions with laws
- Write distributed systems with certain guarantees
- Use proof systems to formally prove properties of code
- Create libraries that do not permit invalid states
- Use dependent typing to prove more properties at compile time
- Understand deep relationships between different concepts
- Profile, debug, and optimize purely functional code with minimal sacrifices

Summary

This guide should be a good resource to guide you in your future functional-programming learning efforts. Read more on this in our book Hands-On Functional Programming with TypeScript.

What makes functional programming a viable choice for artificial intelligence projects?
Why functional programming in Python matters: Interview with best selling author, Steven Lott
Introducing Coconut for making functional programming in Python simpler