Tech Guides - Application Development

18 Articles
Denys Vuika on building secure and performant Electron apps, and more

Bhagyashree R
02 Dec 2019
7 min read
Building cross-platform desktop applications can be difficult. It requires you to have knowledge of specific tools and technologies for each platform you want to target. Wouldn't it be great if you could write and maintain a single codebase, and do so with your existing web development skills? Electron helps you do exactly that. It is a framework for building cross-platform desktop apps with JavaScript, HTML, and CSS.

Electron was originally not a separate project; it was built to port the Mac-only Atom text editor to other platforms. The Atom team at GitHub tried out solutions like the Chromium Embedded Framework (CEF) and node-webkit (now known as NW.js), but nothing was working right. This is when Cheng Zhao, a GitHub engineer, started a new project and rewrote node-webkit from scratch. This project was Atom Shell, which we now know as Electron. It was open-sourced in 2014 and renamed to Electron in May 2015.

To get an insight into why so many companies are adopting Electron, we interviewed Denys Vuika, a veteran programmer and author of the book Electron Projects. He also talked about when you should choose Electron, best practices for building secure Electron apps, and more. Electron Projects is a project-based guide that will help you explore the components of the Electron framework and its integration with other JS libraries to build 12 real-world desktop apps with an increasing level of complexity.

When is Electron the best bet, and when is it not?

Many popular applications are built using Electron, including VS Code, GitHub Desktop, and Slack. It enables developers to deliver new features fast, while also maintaining consistency across all platforms. Vuika says, "The cost and speed of the development, code reuse are the main reasons I believe. The companies can effectively reuse existing code to build desktop applications that look and behave exactly the same across the platforms. No need to have separate developer teams for various platforms."

When we asked Vuika why he chose Electron, he said, "Historically, I got into the Electron app development to build applications that run on macOS and Linux, alongside traditional Windows platform. I didn't want to study another stack just to build for macOS, so Electron shell with the web-based content was extremely appealing."

Sharing when you should choose Electron, he said, "Electron is the best bet when you want to have a single codebase and single developer team working with all major platforms. Web developers should have a very minimal learning curve to get started with Electron development. And the desktop application codebase can also be shared with the website counterpart. That saves a huge amount of time and money. Also, the Node.js integration with millions of useful packages to cover all possible scenarios." The case when it is not a good choice is, "if you are just trying to wrap the website functionality into a desktop shell. The biggest benefit of Electron applications is access to the local file system and hardware."

Building Electron applications using Angular, React, and Vue

Electron integrates with all three of the most popular JavaScript frameworks: React, Vue, and Angular. Each of the three has its own pros and cons. If you are coming from a JavaScript background, React could be a good option, as it has much less abstraction away from vanilla JS. Other advantages are that it is very flexible, you can extend its core functionality by adding libraries, and it is backed by a great community.

Vue is a lightweight framework that's easier to learn and get productive with. Angular has exceptional TypeScript support and includes dependency injection, HTTP services, internationalization, formatting pipes, server-side rendering, a CLI, animations, and much more. When it comes to Electron, choosing one of them depends on which framework you are comfortable with and what fits your needs. Vuika recommends, "There are three pretty much big developer camps out there: React, Angular and Vue. All of them focus on web components and client applications, so it's a matter of personal preferences or historical decisions if speaking about companies. Also, each JavaScript framework has more than one set of mature UI libraries and design systems, so there are always options to choose from." For novice developers, he recommends, "keep in mind it is still a web stack. Pick whatever you are comfortable to build a web application with."

Vuika's book, Electron Projects, has a dedicated chapter, Integrating Electron applications with Angular, React, and Vue, to help you learn how to integrate them with your Electron apps.

Tips on building performant and secure apps

Electron's core components are Chromium (more specifically, the libchromiumcontent library), Node.js, and Chromium's Google V8 JavaScript engine. Each Electron app ships with its own isolated copy of Chromium, which can affect its memory footprint as well as the bundle size. Sharing other reasons, Vuika said, "It has some memory footprint but, based on my personal experience, most of the memory issues are usually related to the application implementation and resource management rather than the Electron shell itself."

Some of the best practices that the Electron team recommends are examining modules and their dependencies before adding them to your applications, and ensuring the main process is not blocked, among others. You can find the full checklist on Electron's official site. Vuika suggests, "Electron developers have all the development toolset they use for web development: Chrome Developer Tools with debuggers, profilers, and many other great features. There are also build tools for each frontend framework that allow minification, code splitting, and tree shaking. Routing libraries allow loading only the content the user needs at a particular point. Many areas to improve memory and resource consumption." More recently, some developers have also started using Rust, and also recommend using WebAssembly with Electron, to minimize the Electron pain points while enjoying its benefits.

Coming to security, Vuika says, "With Electron, a web application can have nearly full access to the local file system and operating system resources by means of the Node.js process. Developers should be very careful trusting web content, especially if using remotely served HTML content." "Electron team has recently published a very good article on the security that I strongly recommend to read and keep in the bookmarks. The article dwells on explaining major security pitfalls, as well as ways to harden your applications," he recommends.

Meanwhile, Electron is also improving with every subsequent release. Starting with Electron 6.0, the team has started laying "the groundwork for a future requirement that native Node modules loaded in the renderer process be either N-API or Context Aware." This update is expected to come in Electron 11.0. "Also, keep in mind that Electron keeps improving and evolving all the time. It is getting more secure and faster with each next release. For developers, it is more important to build the knowledge of creating and debugging applications, as for me," he adds.

About the author

Denys Vuika is an Applications Platform Developer and Tech Lead at Alfresco Software, Inc. He is a full-stack developer and a constant open source contributor with more than 16 years of programming experience, including ten years of front-end development with AngularJS, Angular, ASP.NET, React.js, and other modern web technologies, and more than three years of Node.js development. Denys works with web technologies on a daily basis and has a good understanding of cloud development and the containerization of web applications. He is a frequent Medium blogger and the author of the book Developing with Angular, on Angular, JavaScript, and TypeScript development. He also maintains a series of Angular-based open source projects.

Check out Vuika's latest book, Electron Projects, on PacktPub. This book is a project-based guide to help you create, package, and deploy desktop applications on multiple platforms using modern JavaScript frameworks. Follow Denys Vuika on Twitter: @DenysVuika.

Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more
The Electron team publicly shares the release timeline for Electron 5.0
How to create a desktop application with Electron [Tutorial]

"Microservices require a high-level vision to shape the direction of the system in the long term," says Jaime Buelta

Bhagyashree R
25 Nov 2019
9 min read
Looking back four to five years, the sentiment around microservices architecture has changed quite a bit. First, it was in the hype phase: after seeing the success stories of companies like Netflix, Amazon, and Gilt.com, developers thought that microservices were the de facto standard for application development. Cut to now, and we have realized that microservices architecture is yet another architectural style which, when applied to the right problem in the right way, works amazingly well but comes with its own pros and cons. To get an understanding of what exactly microservices are, when we should use them, and when not to use them, we sat down with Jaime Buelta, the author of Hands-On Docker for Microservices with Python. Along with explaining microservices and their benefits, Buelta shared some best practices developers should keep in mind if they decide to migrate their monoliths to microservices.

Further Learning: Before jumping to microservices, Buelta recommends building solid foundations in general software architecture and web services. "They'll be very useful when dealing with microservices and afterward," he says. Buelta's book, Hands-On Docker for Microservices with Python, aims to guide you in your journey of building microservices. In this book, you'll learn how to structure big systems, encapsulate them using Docker, and deploy them using Kubernetes.

Microservices: The benefits and risks

A traditional monolithic application encloses all its capabilities in a single unit. In the microservices architecture, by contrast, the application is divided into smaller, standalone services that are independently deployable, upgradeable, and replaceable. Each microservice is built for a single business purpose and communicates with other microservices through lightweight mechanisms. Buelta explains, "Microservice architecture is a way of structuring a system, where several independent services communicate with each other in a well-defined way (typically through web RESTful services). The key element is that each microservice can be updated and deployed independently."

Microservices architecture dictates not only how you build your application but also how your team is organized. "Though [it] is normally described in terms of the involved technologies, it's also an organizational structure. Each independent team can take full ownership of a microservice. This allows organizations to grow without developers clashing with each other," he adds.

One of the key benefits of microservices is that they enable innovation without much impact on the system as a whole. With microservices, you can scale horizontally, have strong module boundaries, use diverse technologies, and develop in parallel. Coming to the risks associated with microservices, Buelta said, "The main risk in its adoption, especially when coming from a monolith, is to make a design where the services are not truly independent. This generates an overhead and complexity increase in inter-service communication." He adds, "Microservices require a high-level vision to shape the direction of the system in the long term. My recommendation to organizations moving towards this kind of structure is to put someone in charge of the 'big picture'. You don't want to lose sight of the forest for the trees."

Migrating from monoliths to microservices

Martin Fowler, a renowned author and software consultant, advises going for a "monolith-first" approach. This is because using microservices architecture from the get-go can be risky; it is mostly suitable for large systems and large teams. Buelta shared his perspective: "The main metric to start thinking into getting into this kind of migration is raw team size. For small teams, it is not worth it, as developers understand everything that is going on and can ask the person sitting right across the room for any question. A monolith works great in these situations, and that's why virtually every system starts like this." This echoes Amazon's "two-pizza team" rule, which says that if a team responsible for one microservice couldn't be fed with two pizzas, it is too big.

"As business and teams grow, they need better coordination. Developers start stepping on each other's toes often. Knowing the intent of a particular piece of code is trickier. Migrating then makes sense to give some separation of function and clarity. Each team can set its own objectives and work mostly on their own, presenting a clear external interface. But for this to make sense, there should be a critical mass of developers," he adds.

Best practices to follow when migrating to microservices

When asked what best practices developers can follow when migrating to microservices, Buelta said, "The key to a successful microservice architecture is that each service is as independent as possible." A question that arises here is: how can you make the services independent? "The best way to discover the interdependence of a system is to think in terms of new features: If there's a new feature, can it be implemented by changing a single service? What kind of features are the ones that will require coordination of several microservices? Are they common requests, or are they rare? No design will be perfect, but at least it will help make informed decisions," explains Buelta.

Buelta advises doing it right instead of doing it twice. "Once the migration is done, making changes on the boundaries of the microservices is difficult. It's worth investing time into the initial phase of the project," he adds.

Migrating from one architectural pattern to another is a big change. We asked what challenges he and his team faced during the process, to which he said, "The most difficult challenge is actually people. They tend to be underestimated, but moving into microservices is actually changing the way people work. Not an easy task!"

He adds, "I've faced some of these problems like having to give enough training and support for developers. Especially, explaining the rationale behind some of the changes. This helps developers understand the whys of the change they find so frustrating. For example, a common complaint moving from a monolith is to have to coordinate deployments that used to be a single monolith release. This needs more thought to ensure backward compatibility and minimize risk. This sometimes is not immediately obvious, and needs to be explained."

On choosing Docker, Kubernetes, and Python as his technology stack

We asked Buelta which technologies he prefers for implementing microservices. For language, his answer was simple: "Python is a very natural choice for me. It's my favorite programming language!"
He adds, "It's very well suited for the task. Not only is it readable and easy to use, but it also has ample support for web development. On top of that, it has a vibrant ecosystem of third-party modules for any conceivable demand. These demands include connecting to other systems like databases, external APIs, etc."

Docker is often touted as one of the most important tools when it comes to microservices. Buelta explained why: "Docker allows us to encapsulate and replicate the application in a convenient standard package. This reduces uncertainty and environment complexity. It simplifies greatly the move from development to production for applications. It also helps in reducing hardware utilization. You can fit multiple containers with different environments, even different operating systems, in the same physical box or virtual machine."

For Kubernetes, he said, "Finally, Kubernetes allows us to deploy multiple Docker containers working in a coordinated fashion. It forces you to think in a clustered way, keeping the production environment in mind. It also allows us to define the cluster using code, so new deployments or configuration changes are defined in files. All this enables techniques like GitOps, which I described in the book, storing the full configuration in source control. This makes any change in a specific and reversible way, as they are regular git merges. It also makes recovering or duplicating infrastructure from scratch easy."

"There is a bit of a learning curve involved in Docker and Kubernetes, but it's totally worth it. Both are very powerful tools. And they encourage you to work in a way that's suited for avoiding downfalls in production," he shared.

On multilingual microservices

Microservices allow you to use diverse technologies, as each microservice is ideally handled by an independent team. Buelta shared his opinion regarding multilingual microservices: "Multilingual microservices are great! That's one of its greatest advantages. A typical example of this is to migrate legacy code written in some language to another. A microservice can replace another that exposes the same external interface. All while being completely different internally. I've done migrations from old PHP apps to replace them with Python apps, for example."

He adds, "As an organization, working with two or more frameworks at the same time can help understand better both of them, and when to use one or the other."

Though using multilingual microservices is a great advantage, it can also increase the operational overhead. Buelta advises, "A balance needs to be struck, though. It doesn't make sense to use a different tool each time and not be able to share knowledge across teams. The specific numbers may depend on company size, but in general, more than two or three should require a good explanation of why there's a new tool that needs to be introduced in the stack. Keeping tools at a reasonable level also helps to share knowledge and how to use them most effectively."
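To make the idea of a small, single-purpose service with a well-defined RESTful interface concrete, here is a minimal sketch in Python with Flask. The order domain, routes, and field names are invented for illustration; this is not code from Buelta's book:

```python
# A minimal single-purpose microservice sketch using Flask; the "orders"
# domain and its routes are hypothetical, invented for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store for brevity; a real service would own its own database.
_orders = {}


@app.route('/orders/<int:order_id>', methods=['GET'])
def get_order(order_id):
    order = _orders.get(order_id)
    if order is None:
        return jsonify({'error': 'not found'}), 404
    return jsonify(order)


@app.route('/orders', methods=['POST'])
def create_order():
    order = request.get_json()
    order_id = len(_orders) + 1
    _orders[order_id] = order
    return jsonify({'id': order_id}), 201


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
```

Because a service like this owns its data and exposes only its HTTP interface, it can be packaged in its own Docker container and deployed, scaled, or rolled back independently of its neighbours.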
About the author

Jaime Buelta has been a professional programmer and a full-time Python developer who has been exposed to a lot of different technologies over his career. He has developed software for a variety of fields and industries, including aerospace, networking and communications, industrial SCADA systems, video game online services, and financial services. As part of these companies, he worked closely with various functional areas, such as marketing, management, sales, and game design, helping the companies achieve their goals. He is a strong proponent of automating everything and making computers do most of the heavy lifting so users can focus on the important stuff. He is currently living in Dublin, Ireland, and has been a regular speaker at PyCon Ireland.

Check out Buelta's book, Hands-On Docker for Microservices with Python, on PacktPub. In this book, you will learn how to build production-grade microservices as well as orchestrate a complex system of services using containers. Follow Jaime Buelta on Twitter: @jaimebuelta.

Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]
Yuri Shkuro on Observability challenges in microservices and cloud-native applications

Exploring .NET Core 3.0 components with Mark J. Price, a Microsoft specialist

Packt Editorial Staff
15 Nov 2019
8 min read
There has been a continuous transformation over the last few years to bring .NET to platforms other than Windows. .NET Core 3.0 was released in September 2019 with a primary focus on adding Windows-specific features. .NET Core 3.0 supports side-by-side and app-local deployments, a fast JSON reader, serial port access and other pin access for Internet of Things (IoT) solutions, and tiered compilation on by default. In this article, we will explore the .NET Core components of its new 3.0 release.

This article is an excerpt from the book C# 8.0 and .NET Core 3.0 - Modern Cross-Platform Development - Fourth Edition, written by Mark J. Price. Mark follows a step-by-step approach in the book, filled with exciting projects and fascinating theory for the readers, in this highly acclaimed franchise.

Pieces of .NET Core components

These are the pieces that play an important role in the development of .NET Core:

- Language compilers: These turn your source code written in languages such as C#, F#, and Visual Basic into intermediate language (IL) code stored in assemblies. With C# 6.0 and later, Microsoft switched to an open source rewritten compiler known as Roslyn that is also used by Visual Basic.
- Common Language Runtime (CoreCLR): This runtime loads assemblies, compiles the IL code stored in them into native code instructions for your computer's CPU, and executes the code within an environment that manages resources such as threads and memory.
- Base Class Libraries (BCL) of assemblies in NuGet packages (CoreFX): These are prebuilt assemblies of types packaged and distributed using NuGet for performing common tasks when building applications. You can use them to quickly build anything you want, rather like combining LEGO™ pieces.

.NET Core 2.0 implemented .NET Standard 2.0, which is a superset of all previous versions of .NET Standard, and lifted .NET Core up to parity with .NET Framework and Xamarin. .NET Core 3.0 implements .NET Standard 2.1, which adds new capabilities and enables performance improvements beyond those available in .NET Framework.

Understanding assemblies, packages, and namespaces

An assembly is where a type is stored in the filesystem. Assemblies are a mechanism for deploying code. For example, the System.Data.dll assembly contains types for managing data. To use types in other assemblies, they must be referenced. Assemblies are often distributed as NuGet packages, which can contain multiple assemblies and other resources. You will also hear about metapackages and platforms, which are combinations of NuGet packages.

A namespace is the address of a type. Namespaces are a mechanism to uniquely identify a type by requiring a full address rather than just a short name. In the real world, Bob of 34 Sycamore Street is different from Bob of 12 Willow Drive. In .NET, the IActionFilter interface of the System.Web.Mvc namespace is different from the IActionFilter interface of the System.Web.Http.Filters namespace.

Understanding dependent assemblies

If an assembly is compiled as a class library and provides types for other assemblies to use, then it has the file extension .dll (dynamic link library), and it cannot be executed standalone. Likewise, if an assembly is compiled as an application, then it has the file extension .exe (executable) and can be executed standalone. Before .NET Core 3.0, console apps were compiled to .dll files and had to be executed by the dotnet run command or a host executable. Any assembly can reference one or more class library assemblies as dependencies, but you cannot have circular references. So, assembly B cannot reference assembly A if assembly A already references assembly B. The compiler will warn you if you attempt to add a dependency reference that would cause a circular reference.

Understanding the Microsoft .NET Core App platform

By default, console applications have a dependency reference on the Microsoft .NET Core App platform. This platform contains thousands of types in NuGet packages that almost all applications would need, such as the int and string types. When using .NET Core, you reference the dependency assemblies, NuGet packages, and platforms that your application needs in a project file.

Let's explore the relationship between assemblies and namespaces. In Visual Studio Code, create a folder named test01 with a subfolder named AssembliesAndNamespaces, and enter dotnet new console to create a console application. Save the current workspace as test01 in the test01 folder and add the AssembliesAndNamespaces folder to the workspace. Open AssembliesAndNamespaces.csproj and note that it is a typical project file for a .NET Core application. Check out this code on GitHub.

Although it is possible to include the assemblies that your application uses with its deployment package, by default the project will probe for shared assemblies installed in well-known paths. First, it will look for the specified version of .NET Core in the current user's .dotnet/store and .nuget folders, and then it looks in a fallback folder that depends on your OS, as shown in the following root paths:

- Windows: C:\Program Files\dotnet\sdk
- macOS: /usr/local/share/dotnet/sdk

Most common .NET Core types are in the System.Runtime.dll assembly. You can see the relationship between some assemblies and the namespaces that they supply types for, and note that there is not always a one-to-one mapping between assemblies and namespaces, as shown in the following table:

Assembly             | Example namespaces                                      | Example types
System.Runtime.dll   | System, System.Collections, System.Collections.Generic | Int32, String, IEnumerable<T>
System.Console.dll   | System                                                  | Console
System.Threading.dll | System.Threading                                        | Interlocked, Monitor, Mutex
System.Xml.XDocument.dll | System.Xml.Linq                                     | XDocument, XElement, XNode

Understanding NuGet packages

.NET Core is split into a set of packages, distributed using a Microsoft-supported package management technology named NuGet. Each of these packages represents a single assembly of the same name. For example, the System.Collections package contains the System.Collections.dll assembly. The following are the benefits of packages:

- Packages can ship on their own schedule.
- Packages can be tested independently of other packages.
- Packages can support different OSes and CPUs by including multiple versions of the same assembly built for different OSes and CPUs.
- Packages can have dependencies specific to only one library.
- Apps are smaller because unreferenced packages aren't part of the distribution.

The following table lists some of the more important packages and their important types:

Package              | Important types
System.Runtime       | Object, String, Int32, Array
System.Collections   | List<T>, Dictionary<TKey, TValue>
System.Net.Http      | HttpClient, HttpResponseMessage
System.IO.FileSystem | File, Directory
System.Reflection    | Assembly, TypeInfo, MethodInfo

Understanding frameworks

There is a two-way relationship between frameworks and packages. Packages define the APIs, while frameworks group packages. A framework without any packages would not define any APIs. .NET packages each support a set of frameworks. For example, the System.IO.FileSystem package version 4.3.0 supports the following frameworks:

- .NET Standard, version 1.3 or later
- .NET Framework, version 4.6 or later
- Six Mono and Xamarin platforms (for example, Xamarin.iOS 1.0)

Understanding dotnet commands

When you install the .NET Core SDK, it includes the command-line interface (CLI) named dotnet.

Creating new projects: The dotnet command-line interface has commands that work on the current folder to create a new project using templates. In Visual Studio Code, navigate to Terminal and enter the dotnet new -l command to list your currently installed templates.

Managing projects: The dotnet CLI has the following commands that work on the project in the current folder to manage the project:

- dotnet restore: This downloads dependencies for the project.
- dotnet build: This compiles the project.
- dotnet test: This runs unit tests on the project.
- dotnet run: This runs the project.
- dotnet pack: This creates a NuGet package for the project.
- dotnet publish: This compiles and then publishes the project, either with dependencies or as a self-contained application.
- add: This adds a reference to a package or class library to the project.
- remove: This removes a reference to a package or class library from the project.
- list: This lists the package or class library references for the project.

To summarize, we explored the .NET Core components of the new 3.0 release. If you want to learn the fundamentals, build practical applications, and explore the latest features of C# 8.0 and .NET Core 3.0, check out our latest book, C# 8.0 and .NET Core 3.0 - Modern Cross-Platform Development - Fourth Edition, written by Mark J. Price.

.NET Framework API Porting Project concludes with .NET Core 3.0
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
.NET Core 3.0 Preview 6 is available, packed with updates to compiling assemblies, optimizing applications ASP.NET Core and Blazor
Inspecting APIs in ASP.NET Core [Tutorial]
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]

What makes Salesforce Lightning Platform a powerful, fast and intuitive user interface

Fatema Patrawala
05 Nov 2019
6 min read
Salesforce has always been proactive in developing and bringing to market new features and functionality in all of its products. Throughout the lifetime of the Salesforce CRM product, there have been several upgrades to the user interface. In 2015, Salesforce began promoting its new platform, Salesforce Lightning. Although long-time users and Salesforce developers may have grown accustomed to the classic user interface, Salesforce Lightning may just convert them. It brings in a modern UI with new features, increased productivity, faster deployments, and a seamless transition across desktop and mobile environments. Recently, Salesforce has been actively encouraging its developers, admins, and users to migrate from the classic Salesforce user interface to the new Lightning Experience.

Andrew Fawcett, currently VP of Product Management and a Salesforce Certified Platform Developer II at Salesforce, writes in his book, Salesforce Lightning Platform Enterprise Architecture, "One of the great things about developing applications on the Salesforce Lightning Platform is the support you get from the platform beyond the core engineering phase of the production process." This book is a comprehensive guide filled with best practices and tailor-made examples developed on the Salesforce Lightning Platform. It is a must-read for all Lightning Platform architects!

Why should you consider migrating to Salesforce Lightning?

Earlier this year, Forrester Consulting published a study quantifying the total economic impact and benefits of Salesforce Lightning for Service Cloud. In the study, Forrester found that a composite service organization deploying Lightning Experience obtained a return on investment (ROI) of 475% over three years. Among the other potential benefits, Forrester found that over three years, organizations using the Lightning platform:

- Saved more than $2.5 million by reducing support handling time;
- Saved $1.1 million by avoiding documentation time; and
- Increased customer satisfaction by 8.5%

Apart from this, the Salesforce Lightning platform allows organizations to leverage the latest cloud-based features. It includes responsive and visually attractive user interfaces, which are not available within the Classic themes. Salesforce Lightning provides significant business process improvements and new technological advances over Classic for Salesforce developers.

What does the Salesforce Lightning architecture look like?

While using the Salesforce Lightning platform, developers and users interact with a user interface backed by a robust application layer. This layer runs on the Lightning Component Framework, which comprises services like navigation, Lightning Data Service, and the Lightning Design System.

(Source: Salesforce website)

As part of this application layer, Base Components and Standard Components are the building blocks that enable Salesforce developers to configure their user interfaces via the App Builder and Community Builder. Standard Components are typically built up from one or more Base Components, which are also known as Lightning Components. Developers can build Lightning Components using two programming models: the Lightning Web Components model and the Aura Components model. The Lightning platform is critical for a range of services and experiences in Salesforce:

- Navigation Service: The navigation service is supported for Lightning Experience and the Salesforce app. Built with extensive routing, deep linking, and login redirection, Salesforce's navigation service powers app navigation, state changes, and refreshes.
- Lightning Data Service: Lightning Data Service is built on top of the User Interface API. It enables developers to load, create, edit, or delete a record in your component without requiring Apex code. Lightning Data Service improves performance and data consistency across components.
- Lightning Design System: With the Lightning Design System, developers can build user interfaces easily, including the component blueprints, markup, CSS, icons, and fonts.
- Base Lightning Components: Base Lightning Components are the building blocks for all UI across the platform. Components range from a simple button to a highly functional data table and can be written as an Aura component or a Lightning web component.
- Standard Components: Lightning pages are made up of Standard Components, which in turn are composed of Base Lightning Components. Salesforce developers or admins can drag and drop Standard Components in tools like Lightning App Builder and Community Builder.
- Lightning App Builder: Lightning App Builder lets developers build and customize interfaces for Lightning Experience, the Salesforce app, Outlook Integration, and Gmail Integration.
- Community Builder: For Communities, developers can use the Community Builder to build and customize communities easily.

Apart from the above, there are other services available within the Salesforce Lightning platform, like the Lightning security measures and record detail pages on the platform and the Salesforce app.

How to plan transitioning from Classic to Lightning Experience

As Salesforce admins/developers prepare for the transition to Lightning Experience, they will need to evaluate three things: how the change benefits the company, what work is needed to prepare for the change, and how much it will cost. This is the stage to make the case for moving to Lightning Experience by calculating the company's return on investment and defining what a Lightning Experience implementation will look like.

First, they will need to analyze how prepared the organization is for the transition to Lightning Experience. Salesforce admins/developers can use the Lightning Experience Readiness Check, a tool that produces a personalized Readiness Report, shows which users will benefit right away, and explains how to adjust the implementation for Lightning Experience. Further, Salesforce developers/admins can make the case to their leadership team by showing how migrating to Lightning Experience can realize business goals and improve the company's bottom line. Finally, by using the results of the activities carried out to assess the impact of the migration, understand the level of change required and decide on a suitable approach. If the changes required are relatively small, consider migrating all users and all areas of functionality at the same time. However, if the Salesforce environment is more complex and the amount of change is far greater, consider implementing the migration in phases or as an initial pilot to start with.

Overall, the Salesforce Lightning Platform is being increasingly adopted by admins, business analysts, consultants, architects, and especially Salesforce developers. If you want to deliver packaged applications using Salesforce Lightning that cater to enterprise business needs, read the book Salesforce Lightning Platform Enterprise Architecture, written by Andrew Fawcett. This book will take you through the architecture of building an application on the Lightning platform and help you understand its features and best practices. It will also help you ensure that the app keeps up with increasing customer and business requirements.

What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help
Salesforce open sources 'Lightning Web Components framework'
"Facebook is the new Cigarettes," says Marc Benioff, Salesforce Co-CEO
Build a custom Admin Home page in Salesforce CRM Lightning Experience
How to create and prepare your first dataset in Salesforce Einstein

Founder & CEO of Odoo, Fabien Pinckaers, discusses the new Odoo 13 framework

Vincy Davis
04 Nov 2019
6 min read
Odoo, formerly known as OpenERP (Enterprise Resource Planning), is popular open source business application development software. It comes with many features like a powerful GUI, performance optimization, integrated in-app purchase features, and more. It is used by companies to manage and organize their workloads, like materials and warehouse management, human resources, finance, accounting, sales, and many other enterprise functions. With a fast-growing community, Odoo is being used by companies of all sizes.

At the Odoo Experience 2019 event conducted earlier this month, the Odoo team announced the release of Odoo 13, the latest version of its all-in-one business software. This release contains an abundance of major and minor improvements, including new features like a sales coupons & promotions module, MRP subcontracting, a website form builder, a skills management module, and more. At the event, founder & CEO of Odoo, Fabien Pinckaers, explained the many concepts behind the new Odoo framework, which he says is one of the best improvements in Odoo 13.

New to Odoo? If you are a beginner with Odoo, read our book Working with Odoo 12 - Fourth Edition, written by Greg Moss, to learn how to start a new company database in Odoo and to understand the basics of Odoo sales management. You can also master customer relationship management in Odoo for setting up a modern business environment. This book will also take you through the OpenChatter feature, with notes and messages associated with Odoo documents. Also, learn how to use Odoo's API to integrate with other applications from our book.

The Odoo 13 framework is also called an in-memory ORM because it does considerably more of its work in memory than before. When employed for operational measures, on average it runs 4.5 times faster than earlier versions of Odoo.

Key features of the Odoo 13 framework

Simplified cache process

Pinckaers says that in the new framework they have simplified the cache, as stored fields now only need a single value. On the other hand, a non-stored field's computed value will depend on the keywords present in the context (e.g. translatable and context). He added that, in version 12, most fields did not need a cache, so it contained only one global cache, with an exception for fields that were context-dependent. It also had a new attribute for a multi-line inventory where the projects depend on "way roads". However, the difficulty in this version was that when creating a field, users had to select the cache value, and if the context of the field changed, then the users had to specify the new value of the cache again. This step is made simpler in version 13, as the user now needs to specify the value of the cache only once. "It seems simple but actually in the business code we're passing it to all the fields at the same time," asserts Pinckaers. This simplified cache process will also reduce the memory access overhead of the code.

In-memory updates

While specifying the various test field values in the earlier versions, users had to update the validation value each time, making it a time-consuming process. To overcome this problem, the Odoo team has moved all the data transactions into memory in the new version. Consequently, in Odoo 13, when assigning a field value, the user can put it in the cache each time. Then, when a field value needs to be read, it is taken from the cache itself.
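The stored-field caching and in-memory assignment that Pinckaers describes can be sketched with an ordinary Odoo computed field. The models and field names below (library.order, total_cost, and so on) are hypothetical, invented for illustration rather than taken from the talk:

```python
# A minimal sketch of a stored computed field in Odoo 13; all model and
# field names here are hypothetical, invented for illustration.
from odoo import api, fields, models


class LibraryOrderLine(models.Model):
    _name = 'library.order.line'

    order_id = fields.Many2one('library.order')
    price = fields.Float()
    quantity = fields.Float()


class LibraryOrder(models.Model):
    _name = 'library.order'

    line_ids = fields.One2many('library.order.line', 'order_id')
    # Stored computed field: Odoo 13 keeps a single cached value per record.
    total_cost = fields.Float(compute='_compute_total_cost', store=True)

    @api.depends('line_ids.price', 'line_ids.quantity')
    def _compute_total_cost(self):
        for order in self:
            # In Odoo 13, this assignment updates the in-memory cache;
            # the database write is deferred until the ORM flushes.
            order.total_cost = sum(
                line.price * line.quantity for line in order.line_ids)
```

Subsequent reads of total_cost are then served from the cache, and the write to the database happens only when the transaction is flushed.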
To manage all the dependencies in Python, Pinckaers demonstrated how users should always:

- Use the inverse field, instead of an SQL query
- Avoid using SELECT, as the implementation of the compute will read the same object
- When calling create(), set one2many fields to []

Delaying the computed field for faster transactions

In order to delay a computed field on line.product_quantity and line.discount in the preceding Odoo versions, a user had to compute the dependency value for all the "for line in order" commands. Once the transaction was completed, the values were then recomputed and written in the code. This process is also made easy in Odoo 13, as the user can now mark all the line commands to recompute and use the self.flush() command to compute the values after the transaction is completed. This makes all the computation transactions happen in memory, as illustrated in the sketch further below. According to Pinckaers, this support will help users with more than 100 customers, as it will make the process much faster and simpler.

Optimize the dependency tree to reduce Python and SQL computations

Pinckaers took the "change order" example to demonstrate how version 13 of Odoo has a clean dependency tree. This means that if the pricelist of the order is changed, the total cost of the order will also change indirectly, thus exercising the optimized dependency tree. He explains that this indirect change happens due to the indirect dependency between the pricelist identity and the total cost field in Odoo 13. In the earlier versions, due to the recursive nature of the dependencies, each order line entailed the order ID of the field. This required the user to read sometimes even more than 100 lines of the list to get the order ID. In Odoo 13, this prolonged process is altered to get a more optimized dependency tree. This means that the user can now directly get the order ID from the dependency tree, without the Python and SQL computations.

Improvements in browse optimization

The major improvement instilled in Odoo 13 browse optimization is the mechanism to avoid multiple format-cache conversions. In the previous versions, users had to read and convert all the SQL queries to cache format, followed by a put-in-cache command. This meant that it required three commands just to read the data, making the process very tedious. With the latest version, the prefetch command will directly save all similar data formats in memory. "But if the format is different, then we have to apply everything a color conversion method. As Python is extremely slow," Pinckaers says, "applying a dictionary that we see from outside the cache" makes the process faster, because a C implementation can be used to directly convert the data in the cache format.
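Here is a hedged sketch of the mark-and-flush pattern described above, reusing the hypothetical models from the earlier snippet; it is not code from the talk:

```python
# A sketch of batching updates in memory and flushing once, using the
# hypothetical library.order models from the previous snippet.
def apply_discount(orders, discount):
    """Reduce every line price; `orders` is a library.order recordset."""
    for line in orders.mapped('line_ids'):
        # Assignments land in the in-memory cache; dependent computed fields
        # such as total_cost are only marked for recomputation at this point.
        line.price = line.price * (1 - discount)
    # Push all pending writes and recomputations to the database in one go.
    orders.flush()
```

The point of the pattern is that no database round trip happens inside the loop; all the intermediate state lives in the cache until the single flush at the end.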
You can watch the full video to see Pinckaers' demonstration of code cleanup and Python optimization. If you want to use Odoo to build enterprise applications and set up the functional requirements for your business, read our book Working with Odoo 12 - Fourth Edition, written by Greg Moss, to learn how to use the MRP module to create, process, and schedule manufacturing and production orders. This book will also guide you with in-depth knowledge about the business intelligence required in Odoo and its architecture, and will also unveil how to customize Odoo to meet the specific needs of your business.

Creating views in Odoo 12 – List, Form, Search [Tutorial]
How to set up Odoo as a system service [Tutorial]
Handle Odoo application data with ORM API [Tutorial]
Implement an effective CRM system in Odoo 11 [Tutorial]
"Everybody can benefit from adopting Odoo, whether you're a small start-up or a giant tech company" – An interview with Odoo community hero, Yenthe Van Ginneken

How do you become a developer advocate?

Packt Editorial Staff
11 Oct 2019
8 min read
Developer advocates are people with a strong technical background whose job is to help developers be successful with a platform or technology. They act as a bridge between the engineering team and the developer community. A developer advocate does not only fill the gap between developers and the platform but also looks after the development of developers in terms of traction and progress on their projects. Developer advocacy is broadly referred to as "developer relations". Those who practice developer advocacy have fallen into this profession in one way or another. As the processes and theories in the world of programming have evolved over several years, so has the idea of developer advocacy. This is the result of developer advocates who work in the wild using their own initiative.

This article is an excerpt from the book Developer, Advocate! by Geertjan Wielenga. The book serves as a rallying cry to inspire and motivate tech enthusiasts and burgeoning developer advocates to take their first steps within the tech community. The question then arises: how does one become a developer advocate? Here are some experiences shared by well-known developer advocates on how they started the journey that landed them in this role.

Is developer advocacy taught in universities?

Bruno Borges, Principal Product Manager at Microsoft, says that for most developer advocates or developer relations personnel, it was something that just happened. Developer advocacy is not a discipline that is taught in universities; there's no training specifically for this. Most often, somebody will come to realize that what they already do is developer relations. This is a discipline that is a conjunction of several other roles: software engineering, product management, and marketing.

"I started as a software engineer and then I became a product manager. As a product manager, I was engaged with marketing divisions and sales divisions directly on a weekly basis. Maybe in some companies, sales, marketing, and product management are pillars that are not needed. I think it might vary. But in my opinion, those pillars are essential for doing a proper developer relations job. Trying to aim for those pillars is a great foundation. Just as in computer science when we go to college for four years, sometimes we don't use some of that background, but it gives us a good foundation. From outsourcing companies that just built business software for companies, I then went to vendor companies. That's where I landed as a person helping users to take full advantage of the software that they needed to build their own solutions. That process is, ideally, what I see happening to others."

The journey of a regular tech enthusiast to a developer advocate

Ivar Grimstad, a developer advocate at the Eclipse Foundation, speaks about his journey from being a regular tech enthusiast attending conferences to being up there speaking at conferences as an advocate for his company. Ivar Grimstad says, "I have attended many different conferences in my professional life and I always really enjoyed going to them. After some years of regularly attending conferences, I came to the point of thinking, 'That guy isn't saying anything that I couldn't say. Why am I not up there?' I just wanted to try speaking, so I started submitting abstracts. I already gave talks at meetups locally, but I began feeling comfortable enough to approach conferences. I continued submitting abstracts until I got accepted.

As it turned out, while I was becoming interested in speaking, my company was struggling to raise its profile. Nobody, even in Sweden, knew what we did. So, my company was super happy for any publicity it could get. I could provide it with that by just going out and talking about tech. It didn't have to be related to anything we did; I just had to be there with the company name on the slides. That was good enough in the eyes of my company. After a while, about 50% of my time became dedicated to activities such as speaking at conferences and contributing to open source projects."

Tables turned from being an engineer to becoming a developer advocate

Mark Heckler, a Spring developer and advocate at Pivotal, narrates how the tables turned for him, from full-time engineering to Pivotal Principal Technologist & Developer Advocate. He says, "Initially, I was doing full-time engineering work and then presenting on the side. I was occasionally taking a few days here and there to travel to present at events and conferences. I think many people realized that I had this public-facing level of activities that I was doing. I was out there enough that they felt I was either doing this full-time or maybe should be. A good friend of mine reached out and said, 'I know you're doing this anyway, so how would you like to make this your official role?' That sounded pretty great, so I interviewed, and I was offered a full-time gig doing, essentially, what I was already doing in my spare time."

A hobby turned out to be a profession

Matt Raible, a developer advocate at Okta, worked as an independent consultant for 20 years and did advocacy as a side hobby. He talks about his experience as a consultant and walks through his progress and development: "I started a blog in 2002 and wrote about Java a lot. This was before Stack Overflow, so I used Struts and Java EE. I posted my questions, which you would now post on Stack Overflow, on that blog with stack traces, and people would find them and help. It was a collaborative community. I've always done the speaking at conferences on the side.

I started working for Stormpath two years ago, as a contractor part-time, and I was working at Computer Associates at the same time. I was doing Java in the morning at Stormpath and I was doing JavaScript in the afternoon at Computer Associates. I really liked the people I was working with at Stormpath and they tried to hire me full-time. I told them to make me an offer that I couldn't refuse, and they said, 'We don't know what that is!' I wanted to be able to blog and speak at conferences, so I spent a month coming up with my dream job. Stormpath wanted me to be its Java lead. The problem was that I like Java, but it's not my favorite thing. I tend to do more UI work. The opportunity went away for a month and then I said, 'There's a way to make this work! Can I do Java and JavaScript?' Stormpath agreed that instead of being more of a technical leader and owning the Java SDK, I could be one of its advocates. There were a few other people on board in the advocacy team. Six months later, Stormpath got bought out by Okta. As an independent consultant, I was used to switching jobs every six months, but I didn't expect that to happen once I went full-time. That's how I ended up at Okta!"

Developer advocacy means weighing the highs and lows of the tech world

Scott Davis, a Principal Engineer at Thoughtworks, was a classroom instructor, teaching software classes to business professionals, before becoming a developer advocate. As per him, tech really is a world of strengths and weaknesses: "Advocacy, I think, is where you honestly say, 'If we balance out the pluses and the minuses, I'm going to send you down the path where there are more strengths than weaknesses. But I also want to make sure that you are aware of the sharp, pointy edges that might nick you along the way.' I spent eight years in the classroom as a software instructor and that has really informed my entire career. It's one thing to sit down and kind of understand how something works when you're cowboy coding on your own. It's another thing altogether when you're standing up in front of an audience of tens, or hundreds, or thousands of people."

Discover how developer advocates are putting developer interests at the heart of the software industry in companies including Microsoft and Google with Developer, Advocate! by Geertjan Wielenga. This book is a collection of in-depth conversations with leading developer advocates that reveal the world of developer relations today.

6 reasons why employers should pay for their developers' training and learning resources
"Developers need to say no" – Elliot Alderson on the FaceApp controversy in a BONUS podcast episode [Podcast]
GitHub has blocked an Iranian software developer's account
How do AWS developers manage Web apps?
Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider
Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust

Sugandha Lahoti
04 Sep 2019
5 min read
Progressive Web Apps are already being deployed at a massive scale evidenced by their presence on most websites now. But what’s next for PWA? Alex Kehayis, developer at Stripe things its the merging of WebAssembly to PWA. According to him, the adoption of WebAssembly and ease of distribution on the web creates compelling new opportunities for application development. He has created what he calls Progressive Webassembly Applications (PWAAs) which is built entirely using Rust. In his talk at WebAssembly San Francisco Meetup, Alex walks through the creation of Woz, a PWA toolchain for Rust. Woz is a progressive WebAssembly app generator (PWAA) for Rust. Woz makes distributing your app as simple as sharing a hyperlink. Read Also: Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview] Web content has become efficient Alex begins his talk by pointing out how web content has become massively efficient; this is because it solves three problems: Distribution: Actually serving content to your users Unification: Write once and run it everywhere Experience: Consume content in a low friction environment Mobile applications vs Web applications Applications are kind of an elevated form of content. They tend to be more experiential, dynamic, and interactive. Alex points out the definition of ‘application’ from Wikipedia, which states that applications are software that is designed to perform a group of coordinated functions tasks and activities for the benefit of users. Despite all progress, mobile apps are still hugely inefficient to create, distribute, and use. Its distribution is generally in the hands of the duopoly, Apple and Google. The unification is generally handled through third-party frameworks such as React Native, or Xamarin. User experience on mobile apps, although performant leads to high friction as a user has to generally switch between apps, take time for it to install, load etc. Web based applications on the other hand are quite efficient to create, distribute and use. Anybody who's got an internet connection and a browser can go through the web application. For web applications, unification happens through standards, unlike frameworks which is more efficient. User experience is also quite dynamic and fast; you jump right into it and don't have to necessarily install anything. Should everybody just use web apps instead of mobile apps? Although mobile applications are a bit inefficient, they bring certain features: Native application has better performance than web based apps Encapsulation (e.g. home screen, self-contained experience) Mobile apps are offline by default Mobile apps use Hardware/sensors Native apps typically consume less battery than web apps In order to get the best of both worlds, Alex suggests the following steps: Bring web applications to mobile This has already been implemented and are called Progressive web applications Improve the state of performance and providing access. Alex says that WebAssembly is a viable choice for achieving this. WebAssembly is highly performant when it's paired with a language like Rust. Progressive WebAssembly Applications Woz, a Progressive WebAssembly Application generator Alex proceeds to talk about Woz, which is a progressive WebAssembly application generator.  It combines all the good things of a PWA and WebAssembly and works as a toolchain for building and deploying performant mobile apps with Rust. You can distribute your app as simply as sharing a hyperlink. 
Woz brings distribution via browsers, unification via web standards, and experience via hyperlinks. It uses wasm-bindgen to generate the interop calls between WebAssembly and JavaScript, which allows you to write the entire application in Rust, including rendering to the DOM. It will soon come with "managed charging" for your apps, and even provide multiple copies your users can share, all with a hyperlink. Unlike everything you need for a PWA (an SSL certificate, a PWA manifest, a splash screen, home-screen icons, a service worker), a PWAA requires only the JS bindings to WebAssembly and code to fetch, compile, and run the wasm.

His talk also covered some popular Rust-based frontend frameworks:

- Yew: "Yew is a modern Rust framework inspired by Elm and React for creating multi-threaded frontend apps with WebAssembly."
- Sauron: "Sauron is an html web framework for building web-apps. It is heavily inspired by elm."
- Percy: "A modular toolkit for building isomorphic web apps with Rust + WebAssembly"
- Seed: "A Rust framework for creating web apps"

Read Also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer Josh Triplett

With Woz, the goal, Alex says, was to stay in Rust and create a PWA that can be installed to your home screen. The sample app he created weighs only about 300 KB. Alex says, "In order to actually write the app, you really only need one entry point - it's a public method render that's decorated wasm_bindgen. The rest will kind of figure itself out. You don't necessarily need to go create your own JavaScript file." He then showed a quick demo of what it looks like.

What's next?

WebAssembly will continue to evolve, and more languages and ecosystems will come to target it. Progressive web apps will also continue to evolve. PWAAs are an interesting proposition: we should be liberating mobile apps and bringing them to the web, and WebAssembly is something of a missing link in making that happen.

Watch Alex Kehayis's full talk on YouTube: https://www.youtube.com/watch?v=0ySua0-c4jg. Slides are available here.

Other news in Tech

- Wasmer's first Postgres extension to run WebAssembly is here!
- Mozilla proposes WebAssembly Interface Types to enable language interoperability
- Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

article-image-what-are-apis-why-should-businesses-invest-in-api-development
Packt Editorial Staff
25 Jul 2019
9 min read
Save for later

What are APIs? Why should businesses invest in API development?

Packt Editorial Staff
25 Jul 2019
9 min read
Application Programming Interfaces (APIs) are like doors that provide access to information and functionality in other systems and applications. APIs share many characteristics with doors; for example, they can be made as secure and as closely monitored as required. APIs can add value to a business by allowing it to monetize information assets, comply with new regulations, and enable innovation simply by providing access to business capabilities previously locked away in old systems.

This article is an excerpt from the book Enterprise API Management written by Luis Weir. The book explores the architectural decisions, implementation patterns, and management practices behind successful enterprise APIs. In this article, we'll define the concept of APIs and see what value APIs can add to a business.

APIs, however, are not new. In fact, the concept goes way back and has been present since the early days of distributed computing. The term as we know it today, though, refers to a much more modern type of API, known as REST or web APIs.

The concept of APIs

Modern APIs started to gain real popularity when, in the same year as their inception, eBay launched its first public API as part of its eBay Developers Program. eBay's view was that by making most of its website's functionality and information accessible via a public API, it would not only attract but also encourage communities of developers worldwide to innovate by creating solutions using the API. From a business perspective, this meant that eBay became a platform for developers to innovate on, and in turn eBay would benefit from reaching new users it perhaps couldn't have reached before.

eBay was not wrong. In the years that followed, thousands of organizations worldwide, including well-known brands such as Salesforce.com, Google, Twitter, Facebook, Amazon, and Netflix, adopted similar strategies. In fact, according to programmableweb.com (a well-known public API catalogue), the number of publicly available APIs has been growing exponentially, passing 20,000 as of August 2018.

Figure 1: Public APIs as listed in programmableweb.com in August 2018

It may not sound like much, but considering that each listed API represents a door to an organization's digital offerings, we're talking about thousands of organizations worldwide that have already opened their doors to new digital ecosystems, where APIs have become the product these organizations sell and developers have become their buyers.

Figure: Digital ecosystems enabled by APIs

In such digital ecosystems, communities of internal, partner, or external developers can rapidly innovate by simply consuming these APIs to do all sorts of things: from offering hotel and flight booking services through the Expedia API, to building educational solutions that make sense of the space data available through the NASA API. There are ecosystems where business partners can easily engage in business-to-business transactions, whether to resell goods or purchase them, electronically and without having to spend on Electronic Data Interchange (EDI) infrastructure, and ecosystems where an organization's internal digital teams can innovate easily because key enterprise information assets are already accessible.

So why should businesses care about all this? There is, in fact, not one answer but several, as described in the following sections.

APIs as enablers for innovation and bimodal IT

What is innovation?
According to a common definition, innovation is the process of translating an idea or invention into a good or service that creates value, and for which customers will pay. In the context of business, according to an article by HBR, innovation manifests itself in two ways:

- Disruptive innovation: the process whereby a smaller company with fewer resources successfully challenges established incumbent businesses.
- Sustaining innovation: when established businesses (incumbents) improve their goods and services in the eyes of existing customers. These improvements can be incremental advances or major breakthroughs, but they all enable firms to sell more products to their most profitable customers.

Why is this relevant? It is well known that established businesses struggle with disruptive innovation; the Netflix vs Blockbuster example reminds us of this fact. By the time disruptors catch up with an incumbent's portfolio of goods and services, they can do so with lower prices, better business models, lower operating costs, and far more agility and speed in introducing new or enhanced features. At that point, sustaining innovation is not good enough to respond to the challenge.

With all the recent advances in technology and the internet, the rate at which disruptive innovation challenges incumbents has only grown exponentially. For established businesses to endure the challenge put upon them, they must somehow also become disruptors. The same HBR article describes a point of view on how to achieve this from a business standpoint. From a technology standpoint, however, unless the several systems that underpin a business are "enabled" to deliver such disruption, the exercise will likely fail no matter what is done on the business side.

Perhaps by mere coincidence, or by true acknowledgment of this, Gartner introduced the concept of bimodal IT in December 2013, and the concept is now mainstream. Gartner defined bimodal IT as the following:

"The practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed."

Figure: Gartner's bimodal IT

According to Gartner, Mode 1 (or slow) IT organizations focus on delivering core IT services on top of traditional, hard-to-change systems of record, which in principle are changed and improved in longer cycles and are usually managed with long-term, waterfall-style project mechanisms. For Mode 2 (or fast) IT organizations, the main focus is agility and speed; they act more like a startup (or digital disruptor, in HBR terms) inside the same enterprise.

What is often misunderstood, though, is how fast IT organizations can innovate disruptively when most of the information assets that are critical to bringing context to any innovation reside in backend systems, and any access to them has to be delivered by the slower IT sibling. This dilemma means the speed of innovation is constrained by the speed at which access to core information assets can be delivered.

As the saying goes, "where there's a will there's a way." APIs can be implemented as the means for fast IT to access core information assets and functionality without the intervention of slow IT.
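To make that decoupling concrete, here is a minimal sketch, not from the book, of what such a fast-IT-facing API facade might look like in TypeScript with Express; the route, fields, and stubbed back-end lookup are illustrative assumptions.

```ts
import express from "express";

const app = express();

// Hypothetical facade: fast IT consumes a stable API, while the
// slow-IT system of record stays hidden behind it.
app.get("/api/customers/:id", async (req, res) => {
  // A real implementation would call the back-end system of record;
  // a stub keeps the sketch self-contained.
  res.json({ id: req.params.id, name: "Ada Lovelace", tier: "gold" });
});

app.listen(3000, () => console.log("Customer API listening on :3000"));
```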
By using APIs to decouple fast IT from slow IT, innovation can occur more easily. As with everything, though, this is easier said than done. To achieve such organizational decoupling with APIs, organizations should first build an understanding of which information assets and business capabilities are to be exposed as APIs, so fast IT can consume them as required. This understanding must also articulate when different assets are required and by whom, so the creation of APIs can be properly planned for and delivered. Luckily, organizations that already have a mature service-oriented architecture (SOA) will probably have some of this work in place. Organizations without such luck should plan for this activity as a fundamental component of their digital strategy.

The remaining question is: which team is responsible for defining and implementing these APIs, fast IT or slow IT? Although the long answer is addressed throughout the different chapters of the book, the short answer is neither and both. It requires a multi-disciplinary team of people, with the right technology capabilities available to them, who can incrementally API-enable the existing technology landscape based on business-driven priorities.

APIs to monetize information assets

Many experts in the industry agree that an organization's most important asset is its information. In fact, a recent study by the Massachusetts Institute of Technology (MIT) suggests that data is the single most important asset for organizations:

"Data is now a form of capital, on the same level as financial capital in terms of generating new digital products and services. This development has implications for every company's competitive strategy, as well as for the computing architecture that supports it."

If APIs act as doors to these assets, then APIs also give businesses an opportunity to monetize them. In fact, some organizations are already doing so. According to another article by HBR, 50% of the revenue that Salesforce.com generates comes from APIs, while eBay generates about 60% of its revenue through its API. This is perhaps no huge surprise, given that both organizations were pioneers of the API economy.

Figure: The API economy in numbers

Even more striking is the case of Expedia. According to the same article, 90% of Expedia's revenue is generated via APIs. This is really interesting, as it basically means that Expedia's main business is indirectly selling electronic travel services via its public API.

It's not all that easy, however. According to the MIT study mentioned earlier, most CEOs of Fortune 500 companies don't yet fully acknowledge the value of APIs. An intrinsic reason could be a lack of understanding and visibility of how data is currently being (or not being) used. Assets that sit hidden in systems of record, accessed only via traditional integration platforms, will in most cases give the business no insight into how information is being used and the business value it adds. APIs, on the other hand, are better suited to providing insight into how, by whom, when, and why information is being accessed, giving the business the ability to make better use of its information, for example to determine which assets have the greatest capital potential.
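To see why APIs surface this insight so naturally, consider how little it takes to record who accessed what and when at the API layer. A sketch follows; the header-based consumer identification and the logging sink are illustrative assumptions, not a prescription from the book.

```ts
import express from "express";

const app = express();

// Hypothetical access-insight middleware: every API call is a natural
// point to capture who accessed which asset, when, and how.
app.use((req, _res, next) => {
  const entry = {
    consumer: req.header("X-API-Key") ?? "anonymous", // illustrative auth scheme
    asset: req.path,
    method: req.method,
    at: new Date().toISOString(),
  };
  // A real platform would push this to an analytics pipeline;
  // logging to stdout keeps the sketch self-contained.
  console.log(JSON.stringify(entry));
  next();
});
```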
In this article we provided a short description of APIs, and how they act as an enabler to digital strategies. Define the right organisation model for business-driven APIs with Luis Weir's upcoming release Enterprise API Management.

- To create effective API documentation, know how developers use it, says ACM
- GraphQL API is now generally available
- Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more

article-image-key-trends-in-software-development-in-2019-cloud-native-and-the-shrinking-stack
Richard Gall
18 Dec 2018
8 min read
Save for later

Key trends in software development in 2019: cloud native and the shrinking stack

Richard Gall
18 Dec 2018
8 min read
Bill Gates is quoted as saying that we tend to overestimate the pace of change over a period of 2 years, but underestimate change over a decade. It's an astute observation: much of what will matter in 2019 actually looks a lot like what we said would be important this year. But if you look back 10 years, the change in the types of applications and websites we build, as well as how we build them, is astonishing.

The web as we understood it in 2008 is almost unrecognisable. Today, we are in the midst of the app and API economy. Notions of surfing the web sound almost as archaic as a dial-up tone. Similarly, the JavaScript framework boom now feels old hat, and building only for browsers just sounds weird...

So, as we move into 2019, progressive web apps, artificial intelligence, and native app development remain at the top of the development agenda. But this doesn't mean these changes should be dismissed as empty hype. If anything, as adoption increases and new tools emerge, we will begin to see more radical shifts in ways of working. The cutting edge will need to sharpen itself elsewhere.

What will it mean to be a web developer in 2019?

These changes are enforcing wider changes in the industry. Arguably, they are transforming what it means to be a web developer. As applications become increasingly lightweight (thanks to libraries and frameworks like React and Vue), and data becomes more intensive thanks to the range of services upon which applications and websites depend, developers need to expand across the stack. You can see this in some of the latest Packt titles: in Modern JavaScript Web Development Cookbook, for example, you'll learn microservices and native app development, topics that have typically fallen outside the strict remit of web development.

The simplification of many aspects of development has, ironically, forced developers to look more closely at how these aspects fit together. As you move further into layers of abstraction, the way things interact and work alongside each other becomes vital. For the most part, it's no longer a case of writing the requisite code to make something run on the specific part of the application you're working on; it's about understanding how the various pieces, from the backend to the front end, fit together.

This means that in 2019 you need to dive deeper and get to know your software systems inside out. Get comfortable with the backend. Dive into cloud. Start playing with microservices. Rethink and revisit languages you thought you knew.

Get to know your infrastructure: tackling the challenges of API development

It might sound strange, but as the stack shrinks and the responsibilities of developers shift, understanding the architectural components within the software they're building is essential. You could blame some of this on DevOps: essentially, it has made developers responsible for how their code runs once it hits production. Because of this important change, the requisite skills and toolchain for the modern developer are also expanding.

There are a range of routes into software architecture, but exploring API design is a good place to begin. Hands on RESTful API Design offers a practical way into the topic. While REST is the standard for API design, the diverse range of tools and approaches makes managing the client a potentially complex but interesting area, as the sketch below suggests.
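One of those approaches, GraphQL (discussed next), shifts the shape of the request to the client. As a hedged sketch of what that looks like over plain fetch, where the /graphql endpoint and the customer schema are hypothetical:

```ts
// A sketch of a GraphQL round trip over plain fetch. The /graphql
// endpoint and the customer schema are hypothetical.
const query = `
  query Customer($id: ID!) {
    customer(id: $id) { id name tier }
  }
`;

async function fetchCustomer(id: string) {
  const res = await fetch("/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data } = await res.json();
  return data.customer; // only the fields the client asked for
}

fetchCustomer("42").then(console.log).catch(console.error);
```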
GraphQL, a query language developed by Facebook, is said to have killed off REST (although we wouldn't be so hasty), while Redux and Relay, two libraries for managing data in React applications, have seen a lot of interest over the last 12 months as two key tools for working with APIs. Want to get started with GraphQL? Try Beginning GraphQL. Learn Redux with Learning Redux.

Microservices: take responsibility for your infrastructure

The reason we're seeing so many tools that offer ways of managing APIs is that microservices are becoming the dominant architectural mode, and this requires developer attention too. That's not to say you need to implement microservices now (in fact, there are probably many reasons not to), but if you want to be building software in 5 years' time, it pays to get to grips with the principles behind microservices and the tools that can help you use them.

Perhaps the central technology driving microservices is the container. You could run microservices in virtual machines, but because those are harder to scale than containers, you probably wouldn't see the benefits you'd expect from a microservices architecture. Getting to grips with core container technologies is therefore vital.

Docker is the obvious place to start. There are varying degrees to which developers need to understand it, but even if you don't think you'll be using it immediately, it gives you a solid real-world foundation in containers if you don't already have one. Watch and learn how to put Docker to work with the Hands on Docker for Microservices video.

Beyond Docker, Kubernetes is the go-to tool that allows you to scale and orchestrate containers, giving you control over how you scale application services in a way you probably couldn't have imagined a decade ago. Get a grounding in Kubernetes with Getting Started with Kubernetes - Third Edition, or follow a 7-day learning plan with Kubernetes in 7 Days. If you want to learn how Docker and Kubernetes come together as part of a fully integrated approach to development, check out Hands on Microservices with Node.js.

It's time for developers to embrace cloud

It should come as no surprise that if the general trend is towards full stack, where everything is everyone's problem, developers simply can't afford to ignore cloud. And why would you want to? The levels of abstraction it offers, and the various services and integrations that come with the leading cloud platforms, can make many elements of the development process much easier. Issues surrounding scale, hardware, setup, and maintenance almost disappear when you use cloud.

That's not to say that cloud platforms don't bring their own set of challenges, but they do allow you to focus on more interesting problems, and they open up new opportunities. Serverless becomes a possibility, allowing you to scale incredibly quickly by running everything on your cloud provider. Want to get started with serverless? Check out some of these titles:

- JavaScript Cloud Native Development Cookbook
- Hands-on Serverless Architecture with AWS Lambda [Video]
- Serverless Computing with Azure [Video]

There are other advantages too. When you use cloud, you can bring advanced features like artificial intelligence into your applications. AWS has a whole suite of machine learning tools: AWS Lex can help you build conversational interfaces, while AWS Polly turns text into speech.
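As a flavour of how little code these cloud AI services demand, here's a hedged sketch of calling AWS Polly with the AWS SDK v3 for JavaScript. It assumes valid AWS credentials in the environment; the region and voice are arbitrary choices.

```ts
import { PollyClient, SynthesizeSpeechCommand } from "@aws-sdk/client-polly";

// A sketch, not production code: region and voice are arbitrary, and
// credentials are assumed to come from the environment.
const polly = new PollyClient({ region: "us-east-1" });

async function speak(text: string) {
  const { AudioStream } = await polly.send(
    new SynthesizeSpeechCommand({
      OutputFormat: "mp3",
      VoiceId: "Joanna",
      Text: text,
    })
  );
  return AudioStream; // stream of mp3 bytes to save or play
}

speak("Hello from the cloud").catch(console.error);
```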
Similarly, Azure Cognitive Services has a diverse range of features for vision, speech, language, and search. What cloud brings you, as a developer, is a way of increasing the complexity of applications and processes while maintaining agility. Adding in features and optimizations might previously have felt sluggish, maybe even impossible. But by leveraging AWS and Azure (among others), you can do much more than you previously realised.

Back to basics: new languages, and fresh approaches

With all of this ostensible complexity in contemporary software development, you'd be forgiven for thinking that languages simply don't matter. That's obviously nonsense. There's a strong argument that gaining a deeper understanding of how languages work, what they offer, and where they may be weak can make you a much more accomplished developer. "Be prepared" is sage advice for a world where everything is unpredictable, both in the real world and inside our software systems. So you have two options, and both are smart: either go back to a language you know and explore a new paradigm, or learn a new language from scratch.

Learn a new language:

- Kotlin Quick Start Guide
- Hands-On Go Programming
- Mastering Go
- Learning TypeScript 2.x - Second Edition

Explore a new programming paradigm:

- Functional Programming in Go [Video]
- Mastering Functional Programming
- Hands-On Functional Programming in RUST
- Hands-On Object-Oriented Programming with Kotlin

2019: the same, but different, basically...

It's not what you should be saying if you work for a tech publisher, but I'll be honest: software development in 2019 will look a lot like it did in 2018. But that doesn't mean you have time to be complacent. In just a matter of years, much of what feels new or 'emerging' today will be the norm. You don't have to look hard to see the set of skills many full-stack developer job postings are asking for; the demands are so diverse that adaptability is clearly immensely valuable, both for your immediate projects and your future career prospects. So, as 2019 begins, commit to developing yourself and sharpening your skill set.

article-image-mark-reinhold-on-the-evolution-of-java-platform-and-openjdk
Sugandha Lahoti
02 Aug 2018
5 min read
Save for later

Mark Reinhold on the evolution of Java platform and OpenJDK

Sugandha Lahoti
02 Aug 2018
5 min read
Yesterday, Mark Reinhold, chief architect of the Java Platform Group and tech lead at OpenJDK, talked about both the short-term and long-term technical roadmap of Java and the JDK. He was speaking at the ongoing OpenJDK Committers' Workshop, which meets twice a year to discuss the state of the OpenJDK Community and the JDK technical roadmap.

After decades as one of the world's most popular programming languages, you'd be forgiven for thinking Java might be slowing down, especially with younger languages like Kotlin jostling for position in the popularity stakes. However, there's plenty of life in it yet. Mark explained what Java's future might look like and how developers can influence its growth for the better.

Who is in charge of the future of Java and OpenJDK?

Mark believes that the success of the Java platform depends on contributors focusing on the big picture. The leaders who guide the platform are not merely developers who are only interested in writing code or developing new features; the true leaders are what Mark likes to call "stewards". Stewards are people who assume responsibility for overseeing and protecting something considered worth caring for and preserving. They try to preserve the past while evolving for the future.

A developer is considered a steward if they demonstrate three key qualities:

- Deep knowledge of at least one key area.
- Breadth of care across the platform: they think from time to time about the entire platform and how the whole thing fits together.
- Empathy: the ability to put themselves in the minds of the ordinary developers who use the platform rather than work on the platform.

In the case of OpenJDK, stewards are effectively in charge of the development of the platform. They are led by Mark Reinhold, supported by John Rose for the Java Virtual Machine and Brian Goetz for the language and libraries. Beyond them, many other developers who demonstrate the three qualities above contribute to stewardship as part of their day-to-day work. Every single one of them has demonstrated a deep, long-term track record of expertise in at least one area, combined with breadth of care for the entire platform and the ability to empathize with ordinary developers.

Stewards ensure reliability and compatibility

The stewardship of the Java platform is guided by two key commitments. The first is thinking about long-term goals and working to balance conservation with innovation. The second is preserving the values of readability and compatibility.

Readability is essential to maintainability. It means you don't think about code from a short-term perspective: thinking about the long-term reliability of the code you're writing is vital, not least because it makes life easier for other people using the software in the future.

Compatibility is similar. It's all about recognizing that software doesn't exist in a vacuum; it exists in an ecosystem of tools and developers. A number of different types of compatibility highlight what it means in practice:

- Source: existing code continues to compile.
- Binary: existing code continues to link at run time.
- Behavior: existing APIs continue to behave within the bounds of their specifications.
- Migration: a new feature can be adopted incrementally.
- Intellectual: new features are built on existing knowledge. Add selective features, but make them look like they have been there all along.

In this way, the Java platform's stewards strive to balance conservation and innovation.
It's only through this balance that the project can maintain its core values of readability and compatibility.

How stewards guide the Java platform

As Mark pointed out, it can take considerable solitary thinking, maybe months or years, before an idea takes off. Even then, it needs to be discussed intensively with other stewards. The fruits of these discussions surface in two ways that ensure visibility and transparency:

- New JEPs in the JEP process.
- New OpenJDK projects, which explore a problem area in depth and eventually generate more JEPs that later wind up as features.

Transparency is essential. Anyone is free to appeal a decision they don't like. In fact, if you don't agree with a decision the JDK lead makes, you are also free to appeal to the OpenJDK Governing Board.

How you can influence the evolution of Java

All developers, external contributors, and organizations have the opportunity to influence the direction of the Java platform. The degree of that influence is determined by the degree of meaningful, ongoing contribution made in the JDK community. This includes detailed bug reports, constructive critiques, bug fixes, small enhancements, and entire non-trivial JEPs. If you only participate in order to serve your own or your employer's narrow technical interests, then you are unlikely to gain much influence. However, if you deliver a strong track record of consistent, serious contributions over a long period of time, then your influence will grow quite large, and you might even become a steward yourself.

The OpenJDK community has been going strong over the past years under the leadership of the Java stewards. You can go through the entire conference on YouTube to review life in the OpenJDK Community and get a quick look at what's ahead for the Java platform in general.

- Oracle announces a new pricing structure for Java
- Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
- 5 Things you need to know about Java 10
article-image-learn-framework-forget-the-language
Aaron Lazar
04 Jul 2018
7 min read
Save for later

Learn a Framework; forget the language!

Aaron Lazar
04 Jul 2018
7 min read
If you're new to programming or have just a bit of experience, you're probably thoroughly confused, wondering whether what you've been told all this while was bogus! If you're an experienced developer, you're probably laughing (or scoffing) at the title by now, wondering if I was high when I wrote this article. What I'm about to tell you is something I've seen happen, and it could be professionally beneficial to you. Although, I must warn you that it's not something everyone is going to approve of, so read further but implement at your own risk.

Okay, so as I was saying: learn the framework, not the language. I'm going to explain why to take this approach, keeping two audiences in mind. The first is total newbies, probably working in some other field, who now want to switch roles but have realised that with all the buzz of automation and the rise of tech, the new role demands a basic understanding of programming. The second is developers with varied levels of programming experience who now want to get into a new job that requires them to have a particular skill. Later I will clearly list the benefits of taking this approach. Let's take audience #1 first.

You're a young Peter Parker just bitten by the programming bug

You're completely new to programming and haven't the slightest clue what it's all about. You could spend close to a month trying to figure out a programming language, maybe Python or Java, and then try to build something with it. Or you could jump straight into learning a framework and build something with it. In both cases, we're going to assume that you're learning from a book, a video course, or a simple online tutorial. When you choose to learn the framework and build something, you're going with the move-fast-and-break-things approach, which in my opinion is the best way anyone can learn something new. Once you have something built in front of you, you're going to remember it much more easily than if you learned something theoretical first and then tried to apply it in practice at a later stage.

How to do it?

- Start by understanding your goals: where do you want to go from where you are currently? If your answer is that you want to get into web development, to build websites for a living, you have your first answer.
- Next, understand what skills your "dream" company is actually looking for. You'll learn that from the job description, with a little help from someone technical.
- If the skill is web development, look for the accompanying tools and frameworks. Say, for example, you found it was Node: start learning Node!
- Pick up a book, video, or tutorial that will help you build something as you learn. Spend at least a good week getting used to things.
- Have your work reviewed by someone knowledgeable and watch carefully as that person works. You'll be able to relate quite easily to what is going on, and you'll pick up some really useful tips and tricks quickly.
- Keep practicing for another week, and you'll start getting good at it.

Why will it work?

Well, to be honest, several organisations work primarily with frameworks on a number of projects, mainly because frameworks simplify the building of large applications. Very rarely will you find the need to work with the vanilla language. By taking the framework-first approach, you're gaining the skill, i.e. web development, fast, rather than worrying about the medium or tool that will enable you to build it.
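For instance, a newcomer's first framework-first win with Node might be no more than this; a sketch using Express, one common Node framework, where the route and port are arbitrary choices:

```ts
import express from "express";

const app = express();

// One route is enough to have a working website: you are "doing web
// development" before you have learned most of the underlying language.
app.get("/", (_req, res) => {
  res.send("Hello, web!");
});

app.listen(3000, () => {
  console.log("Your first site is running at http://localhost:3000");
});
```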
You're not spending too much time on learning foundations you may never use in your development. Another example: say you've been longing to learn how to build games, but you don't know how to program, and C++ is a treacherous language for a newbie to learn. Don't worry at all! Just start learning how to work with Unreal Engine or any other game engine. Use its built-in features, like Blueprints, which lets you drag and drop things to build your game, and voila! You have your very own game. ;)

You're a ninja wielding a katana in Star Wars

Now you're the experienced one. You probably have a couple of apps under your belt and are looking to learn a new skill, maybe because that's what your dream company is looking for. Let's say you're a web developer who now wants to move into mobile or enterprise application development. You're familiar with JavaScript but don't really want to take the time to learn a new language like C#. Then don't learn it. Just learn Xamarin or .NET Core! In this case, you're already familiar with how programming works; all you don't know is the syntax and workings of the new language, C#. If you jump straight into .NET Core, you'll be able to pick up the nitty-gritty much faster than if you were to learn C# first and then start with .NET Core. No harm done if you were to take that path, but you're just slowing down your learning by picking up the language first.

Impossible is what?

I know for a fact that by now half of you are itching to tear me apart! I encourage you to vent your frustration in the comments section below. :) I honestly think it's quite possible for someone to learn how to build an application without learning the programming language. You could learn how to drive an automatic car first and not know a damn thing about gears, but you're still driving, right? You don't always need to know the alphabet to be able to hold a conversation. At this point, I could cross the line by saying that this is true even in the latest, most cutting-edge tech domain: machine learning. It might be possible even for budding data scientists to start using TensorFlow straight away without learning Python, but I'll hold my horses there.

Benefits of learning a framework directly

There are 4 main benefits of this approach:

- You're learning to become relevant quickly, which is very important considering the level of competition out there.
- You're meeting the industry requirement of knowing how to work with the framework to build large applications.
- You're unconsciously applying a fail-fast approach to your learning by building an application from scratch.
- Most importantly, you're avoiding all the fluff: the aspects of a language you may never use, and maybe the bad coding practices you will avoid altogether.

As I conclude, it's my responsibility to advise you that not learning a language at all can be a huge drawback. For example, suppose your framework doesn't address the problem you have at hand; you will have to work around the situation using the vanilla language. So when I say forget the language, I actually mean for the time being, while you're trying to acquire the new skill fast. To become a true expert, you must learn to master both the language and the framework together. So go forth and learn something new today!

- Should software be more boring? The "Boring Software" manifesto thinks so
- These 2 software skills subscription services will save you time – and cash
- Minko Gechev: "Developers should learn all major front end frameworks to go to the next level"

article-image-abandoning-agile
Aaron Lazar
23 May 2018
7 min read
Save for later

Abandoning Agile

Aaron Lazar
23 May 2018
7 min read
"We're Agile." That's the kind of phrase I would expect from a football team, a troupe of ballet dancers, or maybe a martial artist. Every time I hear it come from the mouth of a software professional, I go, "Oh boy, not again!" So here I am to talk about something that might touch a nerve or two of an Agile fan. I'm talking about whether you should be abandoning Agile once and for all!

Okay, so what is Agile?

Agile software development is an approach to software development where requirements and solutions evolve through the collaborative effort of self-organizing, cross-functional teams, as well as the end user. Agile advocates adaptive planning, evolutionary development, early delivery, and continuous improvement. It also encourages a rapid and flexible response to change. The Agile Manifesto was created by some of the top software gurus, the likes of Uncle Bob and Martin Fowler among them. The values it stands for are:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan

Apart from these, it follows 12 principles, as given here, through which it aims to improve software development. At its heart, it is a mindset.

So what's wrong?

Honestly speaking, everything looks rosy from the outside until you've actually experienced it. Let me ask you at this point, and I'd love to hear your answers in the comments section below: has there never been a time when you felt at least one of the 12 principles was a hindrance to your personal, as well as your team's, development process? Well, if yes, you're not alone. But before throwing the baby out with the bathwater, let's try to understand whether some misinterpretation could be the actual culprit. Here are some common misinterpretations of what Agile is and what it can and cannot do. I like to call them: The 7 Deadly Sins.

#1 It changes processes

One of the main myths about Agile is that it changes processes. It doesn't really change your processes; it changes your focus. If you've been having problems with your process and you feel Agile will be your knight in shining armor, think again. You need something more than just Agile and Lean. This is one of the primary reasons teams feel Agile isn't working for them: they never understood whether they should have gone Agile or not. In other words, they don't know why they went Agile in the first place!

#2 Agile doesn't work for large, remote teams

The 4th principle of the Agile manifesto states that "developers must work together daily throughout the project". Have you ever thought about how "awesome" (read: impractical) it is to coordinate with teams in India, all the way from the US, on a daily basis? The fact is that it's not practically possible when teams are spread across time zones. What the principle intends is to have the entire team communicating daily, and there's always the possibility of a single point of contact to pass information on to other team members. So no matter how large the team, Agile works if implemented in the right way. Strong communication and documentation help a great deal here.

#3 Prefer the "move fast and break things" approach

Well, personally I prefer to MFABT, mostly because at work I'm solely responsible for my own actions. But what about when you're part of a huge team that's working on something together?
When you take such an approach, there are always hidden costs to being 'wrong'. Moreover, what if every time you moved fast, all you did was break things? Do you think your team's morale would be uplifted?

#4 Sprints are counterproductive

People might argue that sprints are dumb, and what's the point of releasing software in bits and pieces? What you should actually think about is whether what you're focusing on can really be done quicker. Faster doesn't apply to everything. Take making babies, for example. Okay, jokes apart: you'll realise you often need to slow things down in order to go fast, so that you reach your goal without making mistakes, or at least not too many costly ones. Before you dive right into Agile, understand whether it will add value to what you do.

#5 I love micromanagement

Well, too bad for you, dude: Agile actually promotes self-driven, self-managed, autonomous teams that are learning continuously to adapt and adjust. In enterprises where there is bureaucracy, it will not work. Bear in mind that most organizations (apart from startups, maybe) are hierarchical in nature, which brings bureaucracy in some form or flavor.

#6 Scrum saves time

Well, yes, it does. But if you're a manager who thinks Scrum is going to cut a couple of hours from paying attention to your individual team members, you're wrong. The idea of Scrum is to identify where you've reached, what you need to do today, and whether anything might get in the way of that. Scrum doesn't cover for knowing your team members' problems and helping them overcome them.

#7 Test everything, every time

No no no no... That's a wrong notion, and one that wastes a lot of time. What you should actually be doing is automated regression testing. No testing is bad too; you surely don't want bad surprises before you release!

Teams and organisations tend to get carried away by the Agile movement and try to imitate others without understanding whether what they're doing actually aligns with what the business needs. Now back to what I said at the beginning: when teams say they're agile, half of them only think they are. Agile was built for the benefit of software teams all across the globe, and from what teams say, it does work wonders! But like any long-term relationship, it takes conscious effort and time every day to make it work.

Should you abandon Agile?

Yes and no. If you have the slightest hint that one or more of the following are true for your organisation, you really need to abandon Agile, or it will backfire:

- Your team is not self-managed and lacks mature, cross-functional developers
- Your customers need you to take approvals at every release stage
- Not everyone in your organisation believes in Agile
- Your projects are not too complex

Always remember, Agile is not a tool, and if someone is trying to sell you a tool to help you become Agile, they're robbing you. It is a mindset: a family that trusts each other, and a team that communicates effectively to get things done. My suggestion is to go ahead and become agile only if the whole family is for it and is willing to transform together. In other words, Agile is not a panacea for all development projects. Your choice of methodology will come down to what makes the best sense for your project, your team, and your organization. Don't be afraid to abandon Agile in favor of new methodologies such as Chaos Engineering and mob programming, or even to go back to the good ol' waterfall model.
Let us know what you think of Agile and how well your organisation has adapted to it, if it has adopted it. You can look up some fun discussions about whether it works or sucks on Hacker News, for example: "In a nutshell, why do a lot of developers dislike Agile?"

- Poor Man's Agile: Scrum in 5 Simple Steps
- What is Mob Programming?
- 5 things that will matter in application development in 2018
- Chaos Engineering: managing complexity by breaking things

article-image-erp-tool-in-focus-odoo-11
Sugandha Lahoti
22 May 2018
3 min read
Save for later

ERP tool in focus: Odoo 11

Sugandha Lahoti
22 May 2018
3 min read
What is Odoo?

Odoo is an all-in-one management software that offers a range of business applications, forming a complete suite of enterprise management applications targeting companies of all sizes. It is versatile in the sense that it can be used across multiple categories, including CRM, website and e-commerce, billing, accounting, manufacturing, warehouse and inventory, and project management. The community version is free of charge and can be installed with ease. Odoo is one of the fastest-growing open source business application development products available.

With the announcement of version 11 of Odoo, many new features have been added, and the face of business application development with Odoo has changed. In Odoo 11, the online installation documentation continues to improve, and there are now options for Docker installations. In addition, Odoo 11 uses Python 3 instead of Python 2.7. This will not change the steps you take in installing Odoo but will change the specific libraries that get installed.

While much of the process is the same as in previous versions of Odoo, there have been some pricing changes in Odoo 11. There are only two free users now, and you pay for additional users. There is one free application that you can install for an unlimited number of users, but as soon as you have more than one application, you must pay $25 for each user, including the first user. If you have thought about developing in Odoo, now is the best time to start. Before I convince you why Odoo is great, let's take a step back and revisit our fundamentals.

What is an ERP?

ERP is an acronym often used for Enterprise Resource Planning. An ERP gives a global, real-time view of data that can enable companies to address concerns and drive improvements. It automates core business operations, such as the order-to-fulfillment and procure-to-pay processes. It also reduces risk management for companies and enhances customer service by providing a single source for billing and relationship tracking.

Why Odoo?

Odoo is extensible and easy to customize
Odoo's framework was built with extensibility in mind. Extensions and modifications can be implemented as modules, applied over the module whose feature is being changed without actually changing it. This provides clean, easy-to-control, customized applications.

You get integrated information
Instead of distributing data throughout several separate databases, Odoo maintains a single location for all the data. Moreover, the data remains consistent and up to date.

A single reporting system
Odoo has a unified, single reporting system to analyze and track status. Users can also run their own reports without any help from IT. Single reporting systems, such as the one provided by Odoo, help make reporting easier and customizable.

Built around Python
Odoo is built using the Python programming language, which is one of the most popular languages used by developers.

A large community
The capability to combine several modules into feature-rich applications, along with the open source nature of Odoo, is probably the most important factor explaining the community that has grown around it. In fact, there are thousands of community modules available for Odoo, covering virtually every topic, and the number of people getting involved has been steadily growing every year.
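As a taste of that integrated data layer, here is a hedged sketch of reading records over Odoo's external JSON-RPC API from TypeScript (Node 18+ for the global fetch). The payload shape follows Odoo's documented external API convention, but treat the details as assumptions and check the Odoo docs; the URL, database name, credentials, and partner query are placeholders.

```ts
// A sketch of calling an Odoo 11 instance's JSON-RPC endpoint.
// URL, database, and credentials below are placeholders.
const ODOO = "http://localhost:8069/jsonrpc";

async function rpc(service: string, method: string, args: unknown[]) {
  const res = await fetch(ODOO, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      method: "call",
      params: { service, method, args },
      id: Date.now(),
    }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(JSON.stringify(error));
  return result;
}

async function main() {
  // Authenticate, then read a few customer records from res.partner.
  const uid = await rpc("common", "login", ["mydb", "admin", "admin"]);
  const partners = await rpc("object", "execute", [
    "mydb", uid, "admin",
    "res.partner", "search_read",
    [["customer", "=", true]], // domain filter (field name assumed)
    ["name", "email"],         // fields to fetch
  ]);
  console.log(partners);
}

main().catch(console.error);
```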
Go through our video, Odoo 11 Development Essentials, to learn to scaffold a new module, create new models, and use the proper functions that make Odoo 11 the best ERP out there.

- Top 5 free Business Intelligence tools
- How to build a live interactive visual dashboard in Power BI with Azure Stream
- Tableau 2018.1 brings new features to help organizations easily scale analytics
article-image-what-is-mob-programming
Pavan Ramchandani
24 Apr 2018
4 min read
Save for later

What is Mob Programming?

Pavan Ramchandani
24 Apr 2018
4 min read
Mob Programming is a programming paradigm that extends Pair Programming. The difference is actually quite straightforward: if in Pair Programming engineers work in pairs, in Mob Programming the whole 'mob' of engineers works together. That mob might even include project managers and DevOps engineers. Like any good mob, it can get rowdy, but it can also get things done when everyone is focused on the same thing.

What is Mob Programming?

The most common definition of this approach, given by Woody Zuill (the self-proclaimed father of Mob Programming), is the following: "All the team members working on the same thing, at the same time, in the same space, and on the same computer."

Here are the key principles of Mob Programming:

- The team comes together in a meeting room with a set task due for the day. This group working together is called the mob.
- The entire code is developed on a single system.
- Only one member is allowed to operate the system. This means only the Driver can write the code or make any changes to it.
- The other members are called Navigators, and the expert among them for the problem at hand guides the Driver in writing the code.
- Everyone keeps switching roles, meaning no one person will be at the system all the time.
- The session ends with all aspects of the task successfully completed.

The Mob Programming strategy

The success of Mob Programming depends on the collaborative nature of the developers coming together to form the mob. A group of 5-6 members makes a good mob. For a productive session, each member needs to be familiar with software development concepts like testing, design patterns, and the software development life cycle, among others. A project manager can have the team take the Mob Programming approach in order to make the early stages of software development stress-free. Anyone stuck at a point in the problem will have Navigators who can bring in their expertise and keep the project moving.

The advantages of Mob Programming

Mob Programming might make you nervous about performing in a group, but the outcomes have shown that it tends to make work stress-free and almost error-free, since there are multiple opinions in the room. The ground rules ensure that a single person cannot stay at the keyboard, writing code longer than the others. This reduces the grunt work and provides the opportunity to switch to a different role in the mob, a trait that really challenges and intrigues individuals into contributing their creativity to the project.

Criticisms of Mob Programming

Mob Programming is about cutting the communication barrier in the team. However, in situations where the dynamics between some members differ, a session can turn into a few active members dictating the terms for the task at hand. Many developers out there are set in their own ways: when asked to work on a task or project at the same time as others, a conflict of interest might occur. Some developers might not participate to their full capacity, and the work might end up sub-standard.

To do Mob Programming well, you need a good mob

Mob Programming is a modern approach to software development, and it comes with its own set of pros and cons. The productivity and fruitfulness of the approach lie in the credibility and dynamics of the members, not in the nature of the problem at hand. Hence, the potential of this approach is best leveraged for solving difficult problems, given the right mob to deal with them.
More on programming paradigms:

- What is functional reactive programming?
- What is the difference between functional and object oriented programming?

article-image-5-application-development-tool-matter-2018
Richard Gall
13 Dec 2017
3 min read
Save for later

5 application development tools that will matter in 2018

Richard Gall
13 Dec 2017
3 min read
2017 has been a hectic year, not least in application development. But it's time to look ahead to 2018. You can read what 'things' we think are going to matter here, but these are the key tools we think are going to define the next 12 months in the area.

1. Kotlin

Kotlin has been one of the most notable languages of 2017. Its adoption has been dramatic over the last 12 months, and it signals significant changes in what engineers want and need from a programming language. We think it's likely to challenge Java's dominance throughout 2018 as more and more people adopt it. If you want a rundown of the key reasons why you should start using Kotlin, you could do a lot worse than this post on Medium. Learn Kotlin. Explore Kotlin eBooks and videos.

2. Kubernetes

Kubernetes is a tool that's been following in Docker's slipstream. It has been a core part of the growth of containerization, and we're likely to see it move from strength to strength in 2018 as the technology matures and container deployments continue to grow in size and complexity. Kubernetes' success and importance were underlined earlier this year when Docker announced that its enterprise edition would support Kubernetes. Clearly, if Docker paved the way for the container revolution, Kubernetes is consolidating it and helping teams take the next step with containerization. Find Packt's Kubernetes eBooks and videos here.

3. Spring Cloud

This isn't a hugely well-known tool, but 2018 might just be the year the world starts to pay it more attention. In many respects, Spring Cloud is a thoroughly modern software project, perfect for a world where microservices reign supreme. Following the core principles of Spring Boot, it essentially enables you to develop distributed systems in a really efficient and clean way. Spring is interesting because it represents the way Java is responding to the growth of open source software and the decline of the more traditional enterprise system.

4. Java 9

This nicely leads us on to Java 9. Here we have a language that is thinking differently about itself, moving in a direction heavily influenced by a software culture distinct from where it belonged 5-10 years ago. The new features are enough to excite anyone who's worked with Java before. They have all been developed to help reduce the complexity of modern development, modeled around the needs of developers in 2017 and 2018. And they all help to radically improve the development experience, which, if you've been reading up, you'll know is going to really matter for everyone in 2018. Explore Java 9 eBooks and videos here.

5. ASP.NET Core

Microsoft doesn't always get enough attention, but it should, because a lot has changed over the last two years. Similar to Java's ecosystem, Microsoft and its wider ecosystem of software have developed in a way that moves quickly and responds to developer and market needs impressively. ASP.NET Core is evidence of that. A step forward from the formidable ASP.NET, this cross-platform framework has been created to fully meet the needs of today's cloud-based, fully connected applications that run on microservices. It's worth comparing it with Spring Cloud above: both will help developers build a new generation of applications, and both represent two of software's old-guard establishment embracing the future and pushing things forward. Discover ASP.NET Core eBooks and videos.