
Author Posts - Programming

20 Articles

Getting Started with Test-Driven Development in Angular

Ezéchiel Amen AGBLA
09 Dec 2024
10 min read
Introduction

Test-Driven Development (TDD) is a software development process widely used in Angular development to ensure code quality and reduce the time spent debugging. TDD is an agile approach to software development that emphasizes iteratively writing tests before writing the actual code. In this article, we'll take a look at how to confidently build an application based on the Test-Driven Development approach. We'll start with a brief look at the process of developing an application without TDD, then use the Test-First approach to better understand the benefits of TDD, and finish with a case study on cart processing in an e-commerce application.

Developing an application without Test-Driven Development

When we develop an application without using TDD, this is generally how we proceed:
- First, we write the source code;
- Next, we write the test corresponding to the source code;
- Finally, we refactor our code.
As the application evolves, we find it hard to detect and manage bugs, because we always manage to write the tests so that they satisfy our existing code. As a result, we don't cover all the test cases we need to, and our application becomes sensitive to the slightest modification: regressions creep in very easily. In the next section, we'll look at how to develop an application using the Test-First approach.

Developing a Test-First application

Developing an application using the Test-First approach means writing the tests before writing the source code, as the name indicates. Here's how we do it:
- Write the exhaustive tests related to the feature we want to implement;
- Then write the code associated with the tests;
- Finally, refactor the code.
Using this approach, we are forced to have an exhaustive vision of the features to be developed up front, which means we can't evolve in an agile context. If tomorrow's features need to evolve or be completely modified, we'll have a very difficult time updating our source code. We'll also be very slow in developing features, because we have to write all the possible test cases before we start writing the associated code. Now that we've seen these two approaches, we can move on to the main topic of this article: the Test-Driven Development approach.

Getting started with the Test-Driven Development methodology

Test-Driven Development is a dynamic software development methodology that prioritizes the incremental creation of tests as minimalist code is implemented, while conforming to the user story. In this section, we'll explore the role of TDD in application development, discover the core principle of TDD, and look at the tools needed to successfully implement this approach.

What role does TDD play in application development?

TDD plays a crucial role in improving software quality, reducing bugs, and fostering better collaboration between developers and QA teams throughout the development process. We can summarize its role in three major points:
- Quality: TDD guarantees high test coverage, so that bugs can be detected quickly and high-quality code maintained.
- Design: By writing tests up front, we think about the architecture of our code and obtain a clearer, more maintainable design.
- Agility: TDD favors incremental and iterative development, making it easier to adapt to changing needs.

What is the core principle of TDD?

The TDD approach is based on a cycle known as the red-green-refactor cycle, which forms the core of this approach. Here's how it works:
- Write a test before coding: We start by defining a small piece of functionality and writing a test that will fail until that functionality is implemented. We don't need to write the exhaustive tests for the whole feature.
- Write just the right minimalist code: We then write the minimum code necessary for the test to pass.
- Refactor: Once the test passes, the code is improved without changing its behavior.

Tools for Test-Driven Development

In an Angular project, the default tooling for this workflow is the Jasmine test framework together with the Karma test runner, both set up by the Angular CLI.

Case study: managing a shopping cart on a sales site

In this case study, we'll implement some features for managing a shopping cart. The goal is very simple: as a developer, your task is to develop the shopping cart of a sales site using a Test-Driven Development approach. We'll do it step by step, following the principles of TDD.
When we're asked to implement the shopping cart of an e-commerce application, we have to stay focused and ask ourselves the right questions to find the most basic scenario a customer meets when landing on the shopping cart page. In that scenario, the shopping cart is empty by default when a customer launches the application. So we'll follow these steps for our first-ever scenario:
1. Write the test on the assumption that the customer's shopping cart is empty at initialization (the most basic scenario).
2. Write the minimal code associated with the test.
3. Refactor the code if necessary; we don't need any refactoring in this case.
Now we can continue with our second scenario. Once we're satisfied that the cart is empty when the application is launched, we'll check that the total cart price is 0. We'll follow the principles of TDD and stay iterative in our code (both scenarios are sketched just after these steps):
1. Write the test to calculate the total cart price when the cart is empty.
2. Write the minimal code associated with the test.
3. Refactor the code if necessary.
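Below is a minimal sketch, in Jasmine, of what these two red-green iterations might look like. The CartService class and its getItems() and getTotalPrice() methods are illustrative names rather than code from the original listings:

```typescript
// cart.service.spec.ts - the failing tests come first (the "red" step).
import { CartService } from './cart.service';

describe('CartService', () => {
  let cart: CartService;

  beforeEach(() => {
    cart = new CartService();
  });

  // Scenario 1: the shopping cart is empty at initialization.
  it('should be empty when the application is launched', () => {
    expect(cart.getItems().length).toBe(0);
  });

  // Scenario 2: the total price of an empty cart is 0.
  it('should have a total price of 0 when the cart is empty', () => {
    expect(cart.getTotalPrice()).toBe(0);
  });
});
```

```typescript
// cart.service.ts - just enough code to turn the tests green.
export interface CartItem {
  name: string;
  price: number;
  quantity: number;
}

export class CartService {
  private items: CartItem[] = [];

  getItems(): CartItem[] {
    return this.items;
  }

  getTotalPrice(): number {
    return this.items.reduce(
      (total, item) => total + item.price * item.quantity,
      0,
    );
  }
}
```

Each iteration stays deliberately small: a new scenario (adding an item to the cart, say) would start with a new failing test rather than with more code.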
Conclusion

Test-Driven Development (TDD) is a powerful methodology that enhances code quality, fosters maintainable design, and promotes agility in software development. By focusing on writing tests before code and iterating through the red-green-refactor cycle, developers can create robust, bug-resistant applications. In this article, we've explored the differences between traditional development approaches and TDD, delved into its core principles, and applied the methodology in a practical e-commerce case study.
For a deeper dive into mastering TDD in Angular, consider reading Mastering Angular Test-Driven Development by Ezéchiel Amen AGBLA. With this book, you'll learn the fundamentals of TDD and discover end-to-end testing using Protractor, Cypress, and Playwright. Moreover, you'll improve your development process with Angular TDD best practices.

Author Bio

Ezéchiel Amen AGBLA is passionate about web and mobile development, with expertise in Angular and Ionic. A certified Ionic and Angular expert developer, he shares his knowledge through courses, articles, webinars, and training. He uses his free time to help beginners and spread knowledge by writing articles, including via OpenClassrooms, the leading online university in the French-speaking world. He also helps fellow developers on a part-time basis with code reviews, technical choices, and bug fixes. His dedication to continuous learning and experience sharing contributes to his growth as a developer and mentor.


Understanding Memory Allocation and Deallocation in the .NET Common Language Runtime (CLR)

Trevoir Williams
29 Oct 2024
10 min read
Introduction

This article provides an in-depth exploration of memory allocation and deallocation in the .NET Common Language Runtime (CLR), covering essential concepts and mechanisms that every .NET developer should understand for optimal application performance. Starting with the fundamentals of stack and heap memory allocation, we delve into how the CLR manages different types of data and the roles these areas play in memory efficiency. We also examine the CLR's generational garbage collection model, which is designed to handle short-lived and long-lived objects efficiently, minimizing resource waste and reducing memory fragmentation. To help developers apply these concepts practically, the article includes best practices for memory management, such as optimizing object creation, managing unmanaged resources with IDisposable, and leveraging profiling tools. This knowledge equips developers to write .NET applications that are not only memory-efficient but also maintainable and scalable.

Understanding Memory Allocation and Deallocation in the .NET Common Language Runtime (CLR)

Memory management is a cornerstone of software development, and in the .NET ecosystem, the Common Language Runtime (CLR) plays a pivotal role in how memory is allocated and deallocated. The CLR abstracts much of the complexity involved in memory management, enabling developers to focus more on building applications than on managing resources. Understanding how memory allocation and deallocation work under the hood can help you write more efficient and performant .NET applications.

Memory Allocation in the CLR

When you create objects in a .NET application, the CLR allocates memory. This process involves several key components, including the stack, the heap, and the garbage collector. In .NET, memory is allocated in two main areas: the stack and the heap.
- Stack Allocation: The stack is a Last-In-First-Out (LIFO) data structure for storing value types and method calls. Variables stored on the stack are automatically managed, meaning that when a method exits, all its local variables are popped off the stack and the memory is reclaimed. This process is very efficient because the stack operates linearly and predictably.
- Heap Allocation: The heap, on the other hand, is used for reference types (such as objects and arrays). Memory on the heap is allocated dynamically, meaning that the size and lifespan of objects are not known until runtime. When you create a new object, memory is allocated on the heap, and a reference to that memory is returned to the stack, where the reference-type variable is stored.
When a .NET application starts, the CLR reserves a contiguous block of memory called the managed heap. This is where all reference-type objects are stored. The managed heap is divided into three generations (0, 1, and 2), which are part of the Garbage Collector's (GC) strategy to optimize memory management:
- Generation 0: Short-lived objects are initially allocated here. This is typically where small and temporary objects reside.
- Generation 1: Acts as a buffer between short-lived and long-lived objects. Objects that survive a garbage collection in Generation 0 are promoted to Generation 1.
- Generation 2: Long-lived objects, such as static data, reside here. Objects that survive multiple garbage collections are eventually moved to this generation.
When a new object is created, the CLR checks the available space in Generation 0 and allocates memory for the object. If Generation 0 is full, the GC is triggered to reclaim memory by removing objects that are no longer in use.
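You can observe this promotion between generations directly with GC.GetGeneration. The snippet below is a small illustrative sketch, not taken from the original article, and the exact timing of promotions can vary with GC configuration:

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        // A freshly allocated object starts out in Generation 0.
        var data = new byte[1024];
        Console.WriteLine(GC.GetGeneration(data)); // typically 0

        // Each collection the object survives promotes it one generation.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(data)); // typically 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(data)); // typically 2, where it stays
    }
}
```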
Memory Deallocation and Garbage Collection

The CLR's garbage collector is responsible for reclaiming memory by removing objects that are no longer accessible in the application. Unlike manual memory management, where developers must explicitly free memory, the CLR automatically manages this through garbage collection, which simplifies memory management but requires an understanding of how and when this process occurs. Garbage collection in the CLR involves three main steps:
- Marking: The GC identifies all objects still in use by following references from the root objects (such as global and static references, local variables, and CPU registers). Any objects not reachable from these roots are considered garbage.
- Relocating: The GC then updates the references to the surviving objects to ensure that they point to the correct locations after memory is compacted.
- Compacting: The memory occupied by the unreachable (garbage) objects is reclaimed, and the remaining objects are moved closer together in memory. This compaction step reduces fragmentation and makes future memory allocations more efficient.
The CLR uses the generational approach to garbage collection in .NET, designed to optimize performance by reducing the amount of memory that needs to be examined and reclaimed:
- Generation 0 collections occur frequently but are fast, because most objects in this generation are short-lived and can be quickly reclaimed.
- Generation 1 collections are less frequent but handle objects that have survived at least one garbage collection.
- Generation 2 collections are the most comprehensive and involve long-lived objects that have survived multiple collections. These collections are slower and more resource-intensive.

Best Practices for Managing Memory in .NET

Understanding how the CLR handles memory allocation and deallocation can guide you in writing more efficient code. Here are a few best practices:
- Minimize the Creation of Large Objects: Large objects (greater than 85,000 bytes) are allocated in a special section of the heap called the Large Object Heap (LOH), which is not compacted due to the overhead associated with moving large blocks of memory. Large objects should be used judiciously because they are expensive to allocate and manage.
- Use `IDisposable` and `using` Statements: Implementing the `IDisposable` interface and using `using` statements ensures that unmanaged resources are released promptly (a short sketch follows this list).
- Profile Your Applications: Regularly use profiling tools to monitor memory usage and identify potential memory leaks or inefficiencies.
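As a quick illustration of the second practice, here is a minimal sketch; the ReportWriter class and the file name are invented for the example:

```csharp
using System;
using System.IO;

class ReportWriter : IDisposable
{
    // FileStream wraps an unmanaged operating-system file handle.
    private readonly FileStream _stream;

    public ReportWriter(string path) => _stream = File.OpenWrite(path);

    public void Write(byte[] data) => _stream.Write(data, 0, data.Length);

    // Dispose releases the handle deterministically,
    // instead of waiting for the garbage collector's finalization.
    public void Dispose() => _stream.Dispose();
}

class Program
{
    static void Main()
    {
        // The using statement guarantees Dispose() runs, even if Write throws.
        using (var writer = new ReportWriter("report.bin"))
        {
            writer.Write(new byte[] { 1, 2, 3 });
        }
    }
}
```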
Conclusion

Mastering memory management in .NET is essential for building high-performance, reliable applications. By understanding the intricacies of the CLR, garbage collection, and best practices in memory management, you can optimize your applications to run more efficiently and avoid common pitfalls like memory leaks and fragmentation.
Effective .NET Memory Management, written by Trevoir Williams, is your essential guide to mastering the complexities of memory management in .NET programming. This comprehensive resource equips developers with the tools and techniques to build memory-efficient, high-performance applications. The book delves into fundamental concepts like:
- Memory allocation and garbage collection
- Memory profiling and optimization strategies
- Low-level programming with unsafe code
Through practical examples and best practices, you'll learn how to prevent memory leaks, optimize resource usage, and enhance application scalability. Whether you're developing desktop, web, or cloud-based applications, this book provides the insights you need to manage memory effectively and ensure your .NET applications run smoothly and efficiently.

Author Bio

Trevoir Williams, a passionate software and system engineer from Jamaica, shares his extensive knowledge with students worldwide. Holding a Master's degree in Computer Science with a focus on Software Development and multiple Microsoft Azure certifications, his educational background is robust. His diverse experience includes software consulting, engineering, database development, cloud systems, server administration, and lecturing, reflecting his commitment to technological excellence and education. He is also a talented musician, showcasing his versatility. He has penned works like Microservices Design Patterns in .NET and Azure Integration Guide for Business. His practical approach to teaching helps students grasp both theory and real-world applications.


Clean Coding in Python with Mariano Anaya

Expert Network
27 Jul 2021
7 min read
Key takeaways:
- Clean code isn't just a nice-to-have or a luxury in software projects; it's a necessity. If we want our project to successfully deliver features constantly at a steady and predictable pace, then a good, maintainable code base is a must.
- The true nature of clean code relies on the fact that other practitioners should be able to read and maintain the code.
- Read the book Clean Code in Python, Second Edition, to learn all about idiomatic Python, see the difference between good and bad code, and identify the traits of good code and good architecture.
- There is no sole or strict definition of clean code, and there is probably no way of formally measuring it, so you cannot run a tool on a repository that will tell you how good, bad, or maintainable that code is. Sure, you can run tools such as checkers, linters, and static analyzers, and those tools are of much help. They are necessary, but not sufficient. Clean code is not something a machine or script can recognize (so far) but rather something that professionals can decide.

We interviewed Python expert and bestselling author Mariano Anaya on clean coding, the importance of efficient code formatting, and his recent book, Clean Code in Python, 2nd Edition. The interview in detail:

1. To what extent can the inability to write efficient code harm/affect an organization/software?

In my experience, inefficient code can be so dangerous as to paralyze entire projects. I've seen services that needed to be re-written because of how unmaintainable they were. At some point, it became impossible to keep on making changes to that API, and the issues kept piling up, so it needed to be replaced by a brand-new system. On another occasion, there was an application we knew had problems because of the way it was written, and its instability was causing frustration in customers, which permeated into the company. The buggy nature of the application wasn't separate from the way it was written; rather, it was the consequence. Customers were complaining about quality, and this shows to what degree technical debt can harm an organization. I've seen this pattern several times, when the company must make the hard decision of stopping the release of new features to fix errors in the software. I'd say that technical debt, if left untreated, can lead to very harmful results for a company.

2. What should developers keep in mind while starting out with legacy systems?

First, identify the degree of technical debt accrued. There are good software projects that have been designed correctly and whose technical debt is relatively low (perhaps it's just about updating some libraries to newer versions, or moving parts of the code towards new features that weren't available at the time it was originally written). In the event of having a lot of technical debt, it's important to understand what's the most critical part that needs to be fixed. There's certainly a part in the code, a module, or a functionality that's responsible for most of the complaints from customers, and that's what needs to be refactored most urgently. It's critical in this sense to do a proper analysis and have a plan about the improvements to make in the code, rather than jumping straight into the code and starting to refactor. This will give a clearer idea of what needs to be changed and the degree of the refactor needed; that is, whether we'll be fixing the code or the situation requires a full rewrite. Generally, completely rewriting the application should be a last-resort kind of decision, although there are some obvious cases (for example, if the application was written in Python 2, then it's clear that all the code will need to be changed).

3. What are the future advancements that you anticipate in Python?

It's hard to know for sure what will happen with Python in the years to come, but it's interesting to see that, in a similar way to how Python took inspiration and features from other languages, it's now inspiring modern languages as well, while also catching up with new features and programming models. Such was the case with all the improvements made for asynchronous programming, incorporated into the standard library. I believe the asynchronous programming capabilities will continue to be enhanced in future releases. I have also noticed some improvements towards trying to make Python more efficient, whether this means having a more lightweight interpreter by reducing the number of packages in the standard library, or trying to solve the GIL problem. These are the kinds of improvements I'm most hopeful to see.

4. What are some of the popular myths around writing clean code?

Perhaps the most common misconception is that clean code is about formatting code, or maybe even about PEP-8. In fact, relating technical debt only to code issues is another popular myth. Technical debt is also about technology, about keeping up with dependencies. Being able to update your dependencies quickly in case there's a security issue is also a concern related to technical debt, and therefore to clean code. Things like the speed of iteration, how fast and frequently deployments can be made, and the adaptability of the architecture play an important role in the success of the project.

5. Tell us about your book, Clean Code in Python, Second Edition. What trajectory does your book follow to help its readers develop maintainable and efficient code?

The first chapter starts with an introduction to the importance of having a well-structured code base, presenting a framework for the chapters to come. This is supported by tools and recommendations on how to set up a project for success, considering automated tools that will help us format the code, and setting up a pipeline to effectively deploy our code with good quality gates (controls, tests, different stages). Then, the book introduces some Python-specific concepts, with strong emphasis on the particularities of Python's syntax and a more succinct way of writing code, taking advantage of the features the language has to offer. There are some chapters that revisit general design ideas from software engineering, like object-oriented design and design patterns. From that point on, the book explores topics of software engineering in terms of how they can be implemented in Python, using the particularities of the language itself. The idea of the book is to provide readers with the tools and concepts for them to understand what clean code means beyond any definitions given. It's a pragmatic book, oriented towards a practitioner's audience, so it places special focus on how to get things done in an effective way, which often means accepting tradeoffs.

6. Does your book provide hands-on scenarios to practice the techniques it teaches?

Absolutely! The book has a very pragmatic, hands-on approach. As each idea is introduced, it's followed by examples that demonstrate how that implementation would work. Moreover, I've put special effort into making the examples as realistic as possible. Considering that the examples need to showcase an idea irrespective of superfluous details (that is, leaving out everything that's not relevant to the explanation being made, and isolating the problem at hand), they're still real-world scenarios, pieces of code any reader can relate to their daily job. There are no made-up examples like the Fibonacci series or things anyone wouldn't normally find in real code. Extrapolating from the examples, readers can use the code as a reference to solve their own problems. To practice more, there's a GitHub repository where all the code from the book lives, and it's constantly updated. There's also a Docker image for the entire setup of the book, with the environment already configured, that readers can use to test the code and learn by modifying it.

About the Author

Mariano Anaya is a software engineer who spends most of his time creating software with Python and mentoring fellow programmers. Mariano's main areas of interest besides Python are software architecture, functional programming, distributed systems, and speaking at conferences. He was a speaker at EuroPython 2016 and 2017. To know more about him, you can refer to his GitHub account with the username rmariano. His Speaker Deck username is also rmariano.


Why ASP.Net Core is the best choice to build enterprise web applications [Interview]

Vincy Davis
30 Dec 2019
9 min read
ASP.NET Core, the cross-platform, open-source framework, is developed by Microsoft for building modern, cloud-based, internet-connected applications. Designed to enable runtime components, APIs, compilers, and languages to evolve quickly, it runs on macOS, Linux, and Windows on the .NET Core or .NET Framework. To learn more about the development cycle of ASP.NET Core and to gain knowledge of its future design directions, we interviewed Kenneth Y. Fukizi, the author of the book 'Learn ASP.NET Core 3.0, Second Edition', published by Packt Publishing. He has more than 14 years of professional experience and works as a software engineering contractor/consultant for client organizations based in South Africa, Australia, the U.S.A., and Canada.

Kenneth believes that the current performance of ASP.NET Core is far superior to that of its predecessors and its competitor frameworks. He prefers to use ASP.NET Core to build enterprise web applications because of the flexibility that comes with it. He is also excited that .NET 5 will have more interoperability with other programming languages. When asked about his thoughts on Microsoft supporting the open-source platform Pulumi, Kenneth says it will definitely help developers in building modern cloud applications. If you are an ASP.NET Core user, you should definitely read part 1 of our interview, 'Kenneth Fukizi on the new Blazor framework, gRPC support, and other exciting features in ASP.NET Core 3.0'. In that interview, he shares his impressions of all the new exciting features in the ASP.NET Core 3.0 release and explains why all ASP.NET Core users should be looking forward to the high performance and scalability that comes with gRPC in this new release. Here is the full interview with Kenneth on ASP.NET Core.

On why ASP.NET Core is the best option for web application development

What makes .NET Core one of the best general-purpose development platforms? How does ASP.NET Core enhance the performance of web applications? What do you think are the key benefits of ASP.NET Core for enterprise web application development?

With .NET Core as a platform, you can develop web applications, desktop applications, cloud-native applications, mobile applications, gaming applications, Internet of Things (IoT) applications, and Artificial Intelligence (AI) applications; you probably can't ask for more from a development platform. ASP.NET Core has gone a long way in making sure that web application performance is enhanced compared to its predecessors, or indeed some of its competitor frameworks. For example, by making full use of asynchronous programming models, ASP.NET Core has pretty much eliminated the need to have CPU cycles waiting on database queries, web service calls, and I/O operations, wasting precious resources. ASP.NET Core was designed from the ground up, unifying both the MVC and Web API frameworks. It has removed the dependency on IIS and shed a lot of other excess baggage, including a preload of third-party libraries, and as a result it is much more lightweight and fast, gaining performance along the way. We could say a lot more about performance, including its improved capability with output caching, but the truth of the matter is that it is getting more performant by the day. There is actually a way to track its performance metrics, through the TechEmpower benchmarks publicly available on the web.
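To illustrate the asynchronous model Kenneth describes, here is a minimal sketch of an async controller action; the Product model and ProductContext class are hypothetical names invented for the example, not code from the interview:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// Hypothetical model and EF Core context, for illustration only.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductContext : DbContext
{
    public ProductContext(DbContextOptions<ProductContext> options) : base(options) { }
    public DbSet<Product> Products { get; set; }
}

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    private readonly ProductContext _context;

    public ProductsController(ProductContext context) => _context = context;

    // Awaiting the query frees the request thread to serve other work
    // while the database I/O is in flight, instead of blocking on it.
    [HttpGet]
    public async Task<ActionResult<List<Product>>> GetProducts()
    {
        return await _context.Products.ToListAsync();
    }
}
```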
ASP.NET Core is my choice for building enterprise web applications, mainly because of the flexibility that comes from it being cross-platform. It starts all the way from the tooling available for developing ASP.NET Core applications: Visual Studio, or Visual Studio Code, on Windows, Mac, or even Linux. Within an enterprise, you will have people with different roles working on an enterprise application, and the wide tooling available just makes it convenient to cater to a diverse group of project members. ASP.NET Core has such a vibrant community that is always allowed to give its input. The fact that it is open source actually paves the way for faster improvements and applicability across industries. Apart from the development environment, when ASP.NET Core applications are ready to be deployed into production, you can do so internally in your organization, or with just about any worthwhile cloud hosting service provider, including Azure and AWS. (Read Chapter 3 of my book for more details on creating a continuous integration pipeline with Azure DevOps.) It's easy for ASP.NET Core to interact with applications developed on other external tech stacks, and typically an enterprise application will need to talk to several other applications. I'm personally excited that a future version of the .NET Core runtime that ASP.NET Core runs on, to be called .NET 5, is slated to have more interoperability with other languages like Java, Objective-C, and Swift. There are many more advantages of using ASP.NET Core that come to mind, and we could take the whole day discussing them, but to cut a long story short, ASP.NET Core will not disappoint you, and it's improving fast where it is lacking.

Recently, Microsoft announced that .NET Core will support the open-source platform Pulumi for building modern cloud applications. This aims to help developers declare cloud infrastructure, including all of Azure, such as Kubernetes and Cosmos DB, using any .NET language, like C#, VB.NET, and F#. To what extent and how do you think Pulumi with .NET will help developers?

There are those who are not so familiar or comfortable navigating the cloud infrastructure, or who just can't be bothered to learn something new, and instead of getting out of their comfort zone, which is within the code base, they can declare everything through code: for example, resource groups and everything else that makes up the cloud infrastructure. Pulumi just makes everything a bit easier. It abstracts everything away and replaces the need to use different tools to create our cloud infrastructure, such as coming up with JSON or YAML files, or with a cloud Domain-Specific Language (DSL). Instead of all that, we can just declare it using the language that we already know as developers. It will definitely be handy.

On ASP.NET Core's longevity and future design directions

At the NDC conference held recently, Ryan Nowak, a Microsoft developer and architect on ASP.NET Core, shared the details of many future projects like Bedrock, Houdini, and SMALL FAST.NET Server. The common goal of these projects is to simplify cross-platform compatibility among different environments. How do you think these projects will help in shaping the future design directions of .NET 5 and ensure the longevity of ASP.NET Core?

.NET 5 is already in the process of being put together, with full knowledge of what is happening around project Bedrock, project Houdini, and SMALL FAST.NET Server. What I can personally see from project Bedrock is that, starting at the lowest layer, there is going to be more prominence of .NET sockets in dealing with network I/O, at the expense of Libuv, which was borrowed from Node.js for its cross-platform capabilities. Obviously, .NET sockets will learn a thing or two from how Libuv has been operating and implement the lessons learned so that they work seamlessly with .NET technologies, and .NET 5 will stand to benefit a lot from the improvements. I personally see .NET 5 being influenced to cater for more protocols like MQTT, AMQP, HTTP/3, and QUIC, and I wouldn't be surprised to see a bit more interoperability with other programming languages in .NET 5. ASP.NET Core is here to stay, as it is designed to work exclusively on the .NET Core runtime, which is transitioning into .NET 5 soon. I can see a lot of improvements in ASP.NET Core 3.0, especially from the point of view of taking a bit more responsibility off the MVC framework and onto ASP.NET Core as a platform. This will allow reuse of functionality across different frameworks like SignalR, gRPC services, Blazor, Controllers, and Pages. This is already happening, as is evident in the use of endpoint routing, which caters for all of what I call the big five frameworks on project Houdini, mentioned above. Taking responsibility away from MVC to a lower layer actually makes it more lightweight and developer-friendly, and it does not kill MVC; as you can see, it is pretty much still alive. All this restructuring, in general, makes it more flexible in dealing with future change, which is characteristic of different platforms, and actually makes it more ready to become truly cross-platform.

About the Book

Get your hands on the book 'Learn ASP.NET Core 3.0, Second Edition' by Packt Publishing to become highly efficient in developing and maintaining powerful web applications. It will also guide you on how to deploy and monitor your applications using Microsoft Azure, AWS, and Docker. This book will take you through a realistically practical ASP.NET Core MVC application, helping to give you a feel of how it would work in a real-life scenario.

About the Author

Kenneth Y. Fukizi is a solutions architect, consultant, software developer, and engineer with more than 14 years of professional experience. He is a Microsoft Certified Trainer®, Microsoft Certified Solutions Developer®, Microsoft Certified Solutions Associate®, and Microsoft Certified Professional®, among other professional and technical certifications. Kenneth also lectures and mentors computer science degree students in programming. He has spent most of his professional life working as a software engineering contractor/consultant on various projects for client organizations based in South Africa, Australia, the U.S.A., and Canada.

Related reading:
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Inspecting APIs in ASP.NET Core [Tutorial]
How to call an Azure function from an ASP.NET Core MVC application


Kenneth Fukizi on the new Blazor framework, gRPC support, and other exciting features in ASP.NET Core 3.0

Vincy Davis
30 Dec 2019
9 min read
The open-source framework ASP.NET Core is one of the most popular web frameworks, developed by Microsoft and its community. The modular framework runs on both the full .NET Framework, on Windows, and the cross-platform .NET Core. One of the most captivating features of this framework is its performance: it is not only faster than other web frameworks but is also perfectly suited for Docker containers. In November, Microsoft released the new version of ASP.NET Core with many exciting features. To understand the advances in ASP.NET Core more closely, we interviewed Kenneth Y. Fukizi, the author of the book 'Learn ASP.NET Core 3.0, Second Edition'.

Along with ASP.NET Core, Kenneth has also shared his thoughts on the latest version of .NET Core, and the now-available Entity Framework Core 3.0 and Entity Framework 6.3. He says that he is personally most excited about the new Blazor framework, as it allows him to avoid JavaScript. He also opines that Blazor will give developers a chance to specialize in Microsoft technologies. Kenneth adds that ASP.NET Core users should be looking forward to the high performance and scalability that comes with gRPC in this new release.

Kenneth's take on the features in ASP.NET Core 3.0

In your book, 'Learn ASP.NET Core 3.0, Second Edition', you say that Model View Controller and Entity Framework Core 3 are the most widely used frameworks in ASP.NET Core. Can you elaborate on this? How do these frameworks enhance web application performance?

Pretty much every worthwhile application out there will need to have some interface for user interaction and to persist some form of data. The Model View Controller is almost a default choice for many web application developers, mainly for its tried-and-tested versatility and its ability to separate concerns, allowing front-end developers to work on the views while back-end developers are busy with their part. This allows for fast application development. When persisting data, Entity Framework Core 3.0 is often the choice for most application developers on ASP.NET Core, because it is a safer option than a horde of other O/RM engines, since it is developed and maintained by Microsoft itself. Even though some time back it was found wanting in some areas compared to versatile O/RMs like NHibernate, Entity Framework Core has addressed its gaps quickly and effectively, and continues to grow in the areas where it was behind. Its seamless integration with the rest of the framework makes it an easy choice for many application developers. ASP.NET Core MVC supports asynchronous calls, thereby releasing unnecessarily held resources and in turn increasing application performance. Since everything is logically separated into groups, in other words, high cohesion and low coupling, it increases the application performance on the whole. If we talk about the advantages it gives to the application developer, there are many, and most of them allow a developer to write code that does not repeat itself unnecessarily and has low coupling, allowing everything to be tested individually. In general, it allows an application to have efficient code more easily and in turn makes the application more performant. (Read Chapters 4 and 5 of my book for an understanding of all the basic concepts of ASP.NET Core 3.0.)

ASP.NET Core 3.0 has introduced a new framework called Blazor for building interactive client-side web UI with .NET. What are your thoughts on it?

This is definitely a game-changer. I'm personally excited that I have an option to use Blazor instead of JavaScript. It gives a developer the chance to specialize in Microsoft technologies and be truly full-stack, all within the MS tech stack. For a typical back-end C# developer, once they are familiar with C# syntax, type safety, and the environment in general, it becomes easier to just work with one language across the stack, whether backend or frontend. Client-side Blazor, with its Ahead-Of-Time (AOT) compilation directly to WebAssembly, will make client applications super fast, and it's something to definitely look out for. (Chapter 6 of my book gives a detailed explanation of Blazor.)
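For a flavor of what "C# instead of JavaScript" looks like in practice, here is a minimal Blazor component sketch in the style of the standard project template counter; the component name is illustrative:

```razor
@* Counter.razor - click handling and UI state live in C#, no JavaScript needed. *@
<h3>Counter</h3>

<p>Current count: @currentCount</p>

<button @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}
```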
The latest release of ASP.NET Core also supports gRPC and ships with templates and tools for building gRPC services. What are the advantages and disadvantages that gRPC will offer to ASP.NET Core 3.0 users? Also, what are your thoughts on the gRPC built-in security features?

ASP.NET Core users should be looking forward to the high performance and scalability that comes with gRPC. With gRPC, a contract-first approach to API development is possible, and documentation for projects will become easier, making it easier for different teams on the same project to communicate better, with consistent API models as well. Reusability will be enhanced and encouraged by the gRPC templates. For applications using microservices, it will be advantageous in terms of security and performance to use gRPC mainly for communication, since gRPC uses protocol buffers, as opposed to REST services, which need to be exposed to users in JSON or XML over HTTP. The HTTP/2 protocol that gRPC uses is more efficient than its counterpart, HTTP/1.1. With gRPC templates, we are able to have messages that flow in a bi-directional way, increasing efficiency, and in case of an event, we can cancel sent requests. There are indeed some disadvantages to using gRPC templates, including the fact that it now becomes a duty to maintain the specification files, which are an integral part of the template. If you have different teams, you will have to agree on specifications first, before you even start to implement anything, and that can slow down application development. It is great to see that gRPC has visibly built-in considerations for TLS/SSL and that it makes sure that all its communications are first authenticated and encrypted. This goes a long way in not only preventing intended attacks on application services but also acting as a deterrent to them.

Have you had a chance to explore .NET Core 3.0, and the now-available Entity Framework Core 3.0 and Entity Framework 6.3? What are you most excited about in this new release?

Yes, I have managed to explore .NET Core 3.0, and admittedly with great pleasure. Among the most exciting features I have seen are the introduction of support for Cosmos DB and C# 8, and that in itself makes a whole lot of development easier for global applications. LINQ has always been quite handy to me, and I'm sure to many developers, but there have been scenarios where some queries have not been as efficient as I'd want them, so to hear and see that it has been re-architected in many ways to produce more efficient queries is absolutely exciting, and I would love to uncover more of its query capabilities aimed to be improved upon in the future. Nullable reference types, introduced for both C# 8 and Entity Framework Core 3.0, will make my life simpler as a developer. I'm also excited that Entity Framework 6.3 is the first version of Entity Framework 6 that can run on .NET Core, and this will make it simpler to migrate older applications that were using Entity Framework 6 onto the .NET Core platform. (Chapter 9 of the book gives details about accessing data using Entity Framework Core 3.)

We also asked Kenneth about his preferred way of improving the speed and rendering performance of any web application.

Bundling combines multiple files into a single file, reducing the number of requests to the server, while minification performs a variety of code optimizations on scripts and CSS, such as removing white space and commented code without altering any functionality, resulting in smaller payloads. Combined, they can improve the load-time performance of an application by reducing both the number of requests to the server and the size of the requested assets. My personal way of doing things, when I need some functionality from a package, is first to look internally within the development framework. Only when an implementation of that functionality cannot be found, or when it is evidently deficient for the task at hand, do I go outside of the primary provider, Microsoft. Therefore, it is only natural that I prefer the out-of-the-box solution provided by both MVC and Razor Pages that makes use of bundleconfig.json. I have at times gone for the sophistication of Grunt and of webpack, but their being relatively more complex naturally makes the built-in functionality the first option, as long as it doesn't horribly fail.
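For reference, a bundleconfig.json file (as consumed by the Bundler & Minifier tooling commonly used with MVC and Razor Pages) looks something like the sketch below; the file names are placeholders:

```json
[
  {
    "outputFileName": "wwwroot/css/site.min.css",
    "inputFiles": [
      "wwwroot/css/site.css",
      "wwwroot/css/theme.css"
    ]
  },
  {
    "outputFileName": "wwwroot/js/site.min.js",
    "inputFiles": [
      "wwwroot/js/site.js"
    ],
    "minify": { "enabled": true, "renameLocals": true }
  }
]
```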
About the Book

'Learn ASP.NET Core 3.0, Second Edition' will help you become highly efficient in developing and maintaining powerful web applications. It will also guide you in deploying and monitoring your applications using Microsoft Azure, AWS, and Docker.

About the Author

Kenneth Y. Fukizi is a solutions architect, consultant, software developer, and engineer with more than 14 years of professional experience. He is a Microsoft Certified Trainer®, Microsoft Certified Solutions Developer®, Microsoft Certified Solutions Associate®, and Microsoft Certified Professional®, among other professional and technical certifications. Kenneth also lectures and mentors computer science degree students in programming. He has spent most of his professional life working as a software engineering contractor/consultant on various projects for client organizations based in South Africa, Australia, the U.S.A., and Canada.

Related reading:
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Inspecting APIs in ASP.NET Core [Tutorial]
An introduction to TypeScript types for ASP.NET core [Tutorial]


Gabriel Baptista on how to build high-performance software architecture systems with C# and .Net Core

Vincy Davis
11 Dec 2019
10 min read
A software architecture refers to the fundamental structure of a software system, which serves as a blueprint for managing the system's complexity. It is also used to maintain a coordination mechanism among the various components of the software. One popular combination of tools for building sustainable software architecture solutions is the general-purpose C# programming language and the open-source .NET Core framework. This year, C# and .NET Core brought in some exciting features to help developers design high-performance software systems.

To understand how C# and .NET Core aid in building software architecture systems, we interviewed Gabriel Baptista, one of the authors of the book 'Hands-On Software Architecture with C# 8 and .NET Core 3'. Gabriel is a software architect, a specialist in Azure PaaS solutions, and the co-founder of a startup that develops mobile applications. According to Gabriel, the new features in C# 8, like async streams and nullable reference types, are good for detecting errors quickly and maintaining high-quality code, respectively. When asked to compare Visual Studio Code and Visual Studio for C# development, Gabriel insists that the productivity offered by Visual Studio makes it the best choice for C#. He is also of the opinion that Microsoft's C# has a better roadmap than Java.

On the applications of a microservice architecture and how .NET and C# enable code reusability

In your book, 'Hands-On Software Architecture with C# 8 and .NET Core 3', you have demonstrated how microservice architecture can be applied to an enterprise application, like microservice logging. Apart from the use cases in your book, what other applications can microservice architecture be used for?

Microservices are being applied in a bunch of scenarios, due to the facilities they bring, like enabling different programming languages in different teams for the same enterprise app. Transversal aspects of software, like the logging example that we have in the book, and security, are quite simple to think about as microservices. However, the complexity increases when you think about functional requirements, like customer management, logistics, or inventory; this is a bit confusing. That is where Domain-Driven Design will help you, since DDD is about the construction of a unique domain model, keeping the views as separate models. This is helpful because you will be able to create a domain characterized by the language spoken by the experts; that is what we call the Bounded Context principle of DDD. Now, think about each of these domains as a microservice. This will surely facilitate your understanding of how to organize them. You can read Chapter 5 of my book to learn how to apply a microservice architecture to your enterprise application.

You also say in your book that code reusability is one of the most important features in software architecture. How does the .NET Standard help in managing and maintaining a reusable library? Also, how does C# enable code reuse?

Code reuse is for sure what differentiates the development velocity of two great companies. The one that reuses more is certainly faster and more profitable. .NET enables you to reuse code from many platforms by defining .NET Standard as the core of a class library. With .NET Standard, you can write a class library that runs on Windows, Linux, and Android, for a desktop app, a mobile app, an Azure Function, and a web app! This is amazing!
Besides, .NET itself offers many opportunities for code reuse by giving us dozens of ready-made classes in its framework. To finish, it is good to remember that C# is an object-oriented programming language, which enables the principles of abstraction, polymorphism, inheritance, and encapsulation; these are really useful for code reuse. Check out Chapter 11 of my book to learn how to create reusable libraries.

One of the main tasks for a developer is to choose a suitable architecture that will provide the desired functionality for the software. With the many varieties of software architectural patterns available today, how should a user approach them and choose the best one? What aspects should they look at when comparing software architectures?

When you need to choose a suitable architecture for a system, my first recommendation is to start the process with a specific goal: keep it simple. The more complex your architecture, the worse the path you are on. If you stop and think a bit about the most complex solutions we have nowadays, you will find something in common and interesting in all of them: they are made up of many small, simpler parts. Thanks to the cloud and the bunch of APIs we have nowadays, you can design really simple solutions focused on your business.

Gabriel's views on the latest advancements in C# 8

In its latest release, C# 8 brings features like async streams, nullable reference types, and new indices/ranges. What were you most excited about in this release, and why? How do you think C# 8 will help in improving the overall quality of the delivered software?

I am almost sure that NullReferenceException is one of the main reasons why C# apps crash. So, when it comes to improving quality, nullable reference types will certainly help a lot, since null reference exceptions were previously not detected at compilation time. With this feature, you will be able to catch the errors at that point, and the theory of software development says that the earlier you get a bug solved, the better and cheaper. Next, I believe that async programming is amazing for making your apps work more seamlessly, since it mimics the behavior of classical synchronous code while keeping most of the performance advantages of general parallel programming. For this reason, async streams will be a good opportunity, since we will be able to get the advantages of async programming in foreach loops, enabling push-programming in this kind of loop. For instance, we will be able to program an asynchronous data pull that will not block the client.
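A small sketch of those two C# 8 features together, nullable annotations and an async stream consumed with await foreach, might look like this; the ReadSensorAsync name and the simulated delay are invented for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

#nullable enable

class Demo
{
    // The async stream yields values as they become available,
    // without blocking the consumer between reads.
    static async IAsyncEnumerable<int> ReadSensorAsync()
    {
        for (var i = 0; i < 3; i++)
        {
            await Task.Delay(100); // simulate non-blocking I/O
            yield return i;
        }
    }

    static async Task Main()
    {
        // string? is explicitly nullable; plain string is assumed non-null,
        // so the compiler flags possible null dereferences at compile time.
        string? label = Environment.GetEnvironmentVariable("SENSOR_LABEL");

        await foreach (var reading in ReadSensorAsync())
        {
            Console.WriteLine($"{label ?? "sensor"}: {reading}");
        }
    }
}
```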
Entity Framework Core 3.0 and Entity Framework 6.3 are now generally available with C# 8. How do you think EF Core 3.0 and EF 6.3 can take advantage of the new features in C# 8?

Well, the two features that I most enjoyed are the ones that EF Core and EF 6.3 have implemented too: nullable reference types and async streams. Reducing bugs by eliminating unexpected null references is always good! The possibilities that async streams give together with EF Core are great. So, with them, EF will be even more powerful. Another feature that is good to know about is that they now support connecting to Cosmos DB. Read Chapter 6 of my book to understand the interactions of data in C# using Entity Framework Core.

In your opinion, is C# a better programming language than Java? Which language do you think has a better future, C# or Java?

As a software architect, you need to understand that programming languages evolve. In other words, the programming language itself is not the most important part; the fundamentals are the essence of the process of building systems. Considering this approach, I cannot say that one language is better than the other. The best programming language is the one that will give you the best result in the fastest time with the team you are working with. What I can say about C# and Java is that both were, are, and are going to be incredibly important to the evolution of humanity. Right now, I consider that C# has a better roadmap than Java. The reason I believe this is that Microsoft is always ahead of other companies when it comes to productivity.

On why Visual Studio is the best option for C# development

Why do most C# developers prefer Visual Studio? Can you elaborate on how VS Code differs from other source code editors? How difficult is it to develop C# applications using Visual Studio Code?

To me, Visual Studio is the most powerful development environment we have for programming nowadays. You can write code on so many platforms and for so many different solutions, with an incredible debugging environment, connectivity to the cloud, and the facility to manage your code with whatever version control system you decide to use. With Visual Studio, you have the opportunity to start any project related to C#, and even more, it gives you the possibility to debug your different projects in many ways. For instance, debugging threads or Windows services is not easy, but with VS we find different ways to do so, which in the end accelerates development. The best answer that I always give to someone who asks me why Visual Studio is: productivity. I really don't think C# developers prefer Visual Studio Code. VS Code is really useful if you are running an OS other than Windows, or if you are writing code in other programming languages like Node.js. However, when it comes to C# development, Visual Studio is certainly more powerful.

Gabriel on learning curves and best practices for beginners

You are a software architect with experience working on diverse projects for retail and industry. How much does the role differ between industries and sectors? What does the learning curve look like for beginners who want to become experts in building enterprise applications with the .NET stack?

The role itself does not change across sectors. Time-to-market, performance, security, reliability, and quality are requirements that will be asked for by any customer you have, no matter their size and no matter the sector they work in. The learning curve starts with understanding the principles of .NET and C#, that is, the object-oriented principles. Any developer needs to understand the process of creating software, and software engineering will give them this background. To finish, I am totally sure that a person who wants to be in the development world of the 21st century needs to understand cloud computing, especially PaaS (Platform as a Service). And in this world, Azure is the best at giving the results the sectors need.

Can you suggest some best practices that every developer should follow for safe and maintainable code in C#?

Yes, developers should be vigilant about the following:
- Never leave a catch statement blank.
- Do not write big methods; methods need to have a single responsibility.
- Every time you are not sure whether there is an already-written class for the code you are working on, first try to find it. Chances are that you already have it done.
- No matter the number of developers on your team, even if the team is only you, write code as simply as you can.
- Threads are great if you really know what you're doing, so before implementing them, study the topic a lot.

If you want to develop highly scalable, enterprise-ready apps that meet customers' business needs, read Gabriel's book, 'Hands-On Software Architecture with C# 8 and .NET Core 3'. This software architecture book gives you a hands-on approach to learning various architectural methods that will help you deliver high-quality products.

About the Author

Gabriel Baptista is a software architect in the R&D department of Toledo do Brasil. He leads a team that delivers weighing solutions software to retail and industry customers. Gabriel is a specialist in Azure PaaS solutions. He is also a professor at the Salvador Arena Foundation Educational Center in its Computing Engineering college course, where he is responsible for the disciplines of Programming Language and Software Architecture. You can find him on LinkedIn.

Related reading:
You can now use WebAssembly from .NET with Wasmtime!
Exploring .NET Core 3.0 components with Mark J. Price, a Microsoft specialist
Microsoft announces .NET Jupyter Notebooks
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Introducing 'ixy', a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others

How two junior Intuit engineers helped their team adopt Kotlin within a month

Richard Gall
25 Nov 2019
6 min read
Change might feel like a natural part of working in the software industry. But in truth it's not natural at all; it takes a hell of a lot of effort to do things differently. That's what software developers Shelby Cohen and Katie Levy found out when they decided that Kotlin could be a better programming language option for their company's engineering team. As two relatively junior developers at financial software provider Intuit, Shelby and Katie didn't only have to take on the challenge of building out training programs and providing resources to thousands of developers across Intuit, they also had to negotiate internal hierarchies and politics that can prove resistant to change. To learn more about what this process was like, as well as why they're so passionate about Kotlin, I spoke to them over email.

Visit the Packt store to explore Kotlin eBooks and videos.

How did you get started in software engineering?

Shelby Cohen: I've always been interested in solving problems and engaging with new challenges. In high school I really enjoyed math and one of my math teachers encouraged me to join the High School Robotics Club. It was all men and I didn't feel like I fit in. With the help and encouragement of my teacher I helped start the Women Robotics Club at my high school. This is where I was exposed to programming for the first time, and that inspired me to study computer science in college. During my junior year, I learned about Intuit's co-op program at a hackathon and thought it was a great opportunity. I then travelled from New York to San Diego to spend a semester working at Intuit. I learned so much and was exposed to a lot of mentorship opportunities, which led to me accepting a full-time position as a Software Engineer I in 2017. Intuit really values teaching its employees and helping them continue to grow and develop, so it felt like a natural fit.

Why Kotlin? What's unique about Kotlin?

Katie Levy: Kotlin is an open source, cross-platform programming language, designed to interoperate with Java. What's nice about it is that it allows for an easy transition to a functional programming style while also being safer and more concise than Java code. Kotlin is unique because it is very clean, simple, and clear, and it removes a lot of the redundant, boilerplate code that's in Java, which allows the developer to focus on the business logic. It's my favorite language to program in, as I can write high-quality code faster by using its built-in language features. It can be used in any application running on the JVM, including Spring Boot backend services, Android apps, and even JavaScript applications.
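To make the conciseness Katie describes concrete, here is a small, hypothetical Kotlin sketch (ours, not code from Intuit). A data class gets equals, hashCode, toString, and copy generated by the compiler, all of which would be written or generated by hand in pre-records Java:

```kotlin
// A complete value type in one line: the compiler generates
// equals(), hashCode(), toString(), and copy() for us.
data class Engineer(val id: Long, val name: String, val team: String)

fun main() {
    val shelby = Engineer(1, "Shelby", "Platform")
    // Non-destructive update via the generated copy() method.
    val katie = shelby.copy(id = 2, name = "Katie")
    println(katie) // Engineer(id=2, name=Katie, team=Platform)
}
```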
How/where did you learn about it? Was it an immediate thing or did it take time for you to decide to do this?

Shelby: I volunteered at KotlinConf back in 2018, and it was such a great opportunity to meet a lot of the engineers and staff from JetBrains. I got to meet a senior executive at JetBrains and shared some of the projects I was working on at Intuit. He asked me to be a guest on his podcast, Talking Kotlin, and through this conference I got to know some of the most influential people in the Kotlin community.

How did people respond? Were they resistant to something new?

Katie: My team was definitely hesitant to start learning the language. One of the engineers on my team was especially resistant and tried to identify flaws in the language, using anything he could come up with as a reason why we shouldn't use Kotlin. In those cases it's important to identify the real issue the engineers are having with the new language — is it the language itself or is it something else? With that particular engineer, I found out that he was feeling like he didn't have the bandwidth to take on the work he was being assigned. To combat this, I created a training program for the team so we could learn together, build up the team's domain knowledge of the language, and make everyone's workload more visible.

How did you go about driving adoption?

Katie: Influencing and driving change is a hard project to scope. It can mean many different things to different people. For us, when we were starting out, we wanted to introduce 500 engineers to Kotlin, and wanted 90% of them to start coding in Kotlin. We ended up exceeding our goal, reaching 370 engineers internal to Intuit and 4,329 external to Intuit. We want to improve the industry by encouraging engineers to develop in a more concise and less error-prone language. More recently, we were able to present on Kotlin to over 500 software engineers at LambdaWorld in Spain. Afterward, we had engineers wanting to take pictures with us, all saying they wanted to start using the language. We found that speaking at conferences helped us meet our goals and scale our efforts.

What were the challenges?

Shelby: In addition to some of the initial resistance, one of the biggest challenges was the internal hierarchy. When I introduced Kotlin to my team, a lot of engineers were more senior than me and, at the time, I wasn't confident about the value I was bringing to the team. I implemented group code reviews, sent out resources, and walked the team through examples as they were learning Kotlin. Once the team had a good foundation in Kotlin, I implemented a flatter teaching structure and encouraged everyone to learn and teach each other a specific part of the language. This was really effective because everyone learns from different teaching styles, and team members felt more empowered in their work.

Have you learned anything else about Kotlin throughout the process? And about engineering in general?

Shelby: One of the most valuable things I learned from this experience is don't be afraid to ask for advice on how to influence at scale, and connect with others who have driven change at your company. This way you can learn from others' experiences. Katie and I reached out to a lot of leaders making an impact in their area of expertise to learn from them and get feedback and advice on our journey. This is something we will continue to do as we keep learning and facing other challenging problems.

Thanks to Shelby and Katie for talking to us - it's clear they have a lot of passion for Kotlin, but more importantly they also have a great sense of how to engage and support other developers.

Follow Shelby on Twitter: @shelbyc0hen
Follow Katie on Twitter: @klevy110

Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]

Fatema Patrawala
14 Nov 2019
12 min read
As per a recent IDC study, the forecast for new jobs demanding Salesforce skills shows a huge surge from last year. The numbers reveal that the demand is set to create 3.3 million jobs in the Salesforce ecosystem by 2022. Additionally, Indeed's top 10 best jobs include two Salesforce-specific roles: Salesforce Administrator, ranking 4th, and Salesforce Developer, at 6th place.

Though Salesforce admins are not developers, they create easy-to-use dashboards, intelligent workflows, and applications for any project. They keep Salesforce users happy and business processes smart, hence they are in high demand. Companies, especially in the US, know the potential and value Salesforce admins bring and are making serious human capital investments.

We recently interviewed Enrico Murru, a Solution and Technical Architect at a platinum Salesforce partner and a Salesforce MVP, to discuss the Salesforce ecosystem, his journey to becoming a Salesforce expert, the various certifications for Salesforce admins, and how those certifications enhance careers. Enrico is the author of the latest edition of our book, Salesforce Advanced Administrator Guide. This guide extends beyond the administrator certification and covers advanced platform features and functions such as configuration, automation, security, and customization. It is packed with exam-oriented questions and mock tests to help you earn advanced administrator credentials.

On the Salesforce ecosystem and Enrico's journey to becoming a Salesforce MVP

As per recent 10K Advisors research, the Salesforce ecosystem is innovating faster than the talent can keep pace. This has resulted in great career opportunities but also introduced challenges for Salesforce end-users. How is Salesforce dealing with the challenges? How can administrators and developers leverage growth opportunities in Salesforce?

When I started working with Salesforce about 10 years ago, I had never heard about the Salesforce ecosystem in my life: honestly, Italy was not a hot market at that time, which is why my (small at the time) company had a chance to work with big customers...we were among the few Salesforce system integrators in our whole country, after all. About 4 to 5 years ago things changed dramatically and Italy finally aligned with the rest of the world: Salesforce was in high demand among all kinds of companies (small or huge, no difference). The Italian market is one of the fastest growing; we started growing more and more due to the increasing number of customers joining us, but we started suffering from a lack of professionals. We built an internal academy but it wasn't enough; we still needed (and currently need) more developers, administrators and business analysts - the demand has exceeded the supply!

The amount of free-to-access documentation is huge: the Salesforce Ohana has produced tons of content with blogs, webinars and tens of books. When Salesforce delivered Trailhead to the world we all had a boost in training: learning Salesforce became ever easier! No surprise the number of people getting certified has increased drastically, and it's not uncommon now to see people with 5, 10 or 20 certifications in their career backpack: you don't need to sit for hours and hours with your head in a book; now you can learn 15 minutes at a time, whenever you are free between your working tasks. This is a HUGE revolution: learn a bit, often, and you keep yourself always on the trail, for free!
From now on, anyone can become a Salesforce trailblazer and start building their trail: a lot of people have decided to change jobs and dipped into the Salesforce world with little to no experience in computer science. However, when it's time to get a certification, especially your first certification, Trailhead is not enough: you need some real-world experience (no amount of Trailhead can prepare you enough; experience is amazing fuel for increasing your overall knowledge). A book can be a good compromise to boost your knowledge while giving you the right amount of experience that the author has distilled into each topic, and that's why I chose to start this amazing trail with Packt: I wanted to do something I've never done before (writing a book) while giving the Ohana more chances to pass a certification...I guess this is a win-win situation!

How did you start your journey of becoming a Salesforce expert? Did being a Java developer help you in some way? What motivated you to make the choice?

Good question, and the answer is that I have to thank the randomness that we encounter daily in our lives (we can call it destiny, if you prefer). I started working as a Java developer (I came from an Electronic Engineering MSc) for a small company in my local town (Cagliari, Italy). After a while I got bored of what I was doing (boredom is a fuel for me), so I decided to move to Ireland. The day after I landed in Cork, I immediately got a new job with a great income (compared to what I was earning in Italy)...but I was not 100% sure I wanted to move abroad, and that's why I rejected that position and went back to Italy (some say it was an act of cowardice; I partially agree, but I was not ready to change my life so much at that time).

Just 2 months after my return home, my boss told me about a new opportunity: moving to northern Italy to join WebResults, a small company (we were just 15 people, including the CEO and CTO) that worked with something called "Salesforce". I accepted the challenge and moved for 6 months with my spouse-to-be to WebResults headquarters: I discovered the world of Salesforce and I immediately fell in love with it. In a few weeks I learnt all that I needed to start my journey as a Salesforce developer. Years later, I'm still working with WebResults (which has in the meantime been acquired by Engineering Spa, the largest Italian consulting company) as a Salesforce Solution and Technical Architect (the amount of time I spend on coding at work has dropped dramatically, unfortunately), and with the honorable Salesforce MVP title I try to evangelize my company and all my Salesforce Ohana buddies any way I can!

So if you ask me whether my Java dev position helped me get to where I am, the answer is "definitely yes", but there is a lot more to the story!

On various Salesforce certifications and why he wrote a book

There are many certifications available for beginners as well as for experienced CRM developers. How should one go about choosing them? How do different Salesforce certification programs enhance a developer's career?
If you want to start your journey with Salesforce you have to choose primarily among the following paths (more details at https://trailhead.salesforce.com/credentials/administratoroverview, but you can build your own trail!):

- Administrator
- Developer
- Marketer
- Consultant
- Architect

In my experience, any aspiring Salesforce consultant should start from the basics, even if they are a skilled business analyst with 20 years of experience: you need to know how the Salesforce Lightning Platform works, and the best way is to get your hands dirty. Whether you want to start as an administrator or a developer, I always recommend building administrator skills at the beginning: a good developer should be a good administrator as well!

As far as the Marketer and Consultant paths are concerned, they are more related to your knowledge of specific products on the platform, such as Marketing Cloud, Pardot, Field Service, Community Cloud, Einstein Analytics and many others. The Architect path brings you to the Mount Olympus of all certifications - the Technical Architect certification, which every Salesforce trailblazer aspires to get one day (and I'm one of them).

Some think that owning a Salesforce certification doesn't necessarily indicate your proficiency in the technologies involved, but I do not agree with them. When I attempted the Salesforce Advanced Administrator exam I really thought I had the required skills to pass, but I failed...why? Because I hadn't studied some of the topics and I wasn't that skilled in those topics either (you'll read this story in the book as well). That's why I needed hours of study to pass the exam, and thanks to that deep study I learnt new Salesforce stuff and increased my proficiency in features I hadn't actually ever used, making me the "most skilled" guy in my company regarding Omni-Channel or Salesforce Knowledge. This is an absolute win for both you and your company: certifications are meant to make you a trailblazer. Needless to say, headhunters really love Salesforce certifications (my owning 20 certifications attracts dozens of contact requests on my social channels).

Your book, Salesforce Advanced Administrator Certification Guide, promises to give administrators a deeper knowledge of advanced Salesforce features. Why should one read this book? How is it different from other Salesforce certification guides on the market?

First, I want to say that the Salesforce Advanced Administrator certification is a bit mistreated by administrators (as far as I've seen in my career): it is usually considered too hard or too complex for the skills you earn…"after all, I'm already an administrator, why should I become an advanced administrator?" You should, my friend; the amount of things you learn is really huge. You'll keep playing with features such as Lightning Knowledge, Omni-Channel, Live Chat and Lightning Content - features that maybe you've never used before - or exploring in depth the world of Salesforce automation with Process Builder, Lightning Flows, Entitlements and Approvals, or learning everything related to security and sharing of records (and many, many more).

Why should you choose this book? It covers all the required topics for the Salesforce Advanced Administrator certification extensively, keeping in mind the requirements of the exam as well.
While the number of topics is too large for us to cover each one exhaustively, you'll get a good amount of knowledge, suggestions and external references to ensure you reach the Salesforce Advanced Administrator certification goal.

On the challenges faced by Salesforce administrators

What are some of the challenges faced by Salesforce administrators today? How is Salesforce as a platform helping overcome these challenges? Can Salesforce administrators become developers too, and vice versa? What is next for Salesforce?

The biggest challenge that Salesforce admins face day after day is keeping pace with the extraordinary growth of the Salesforce ecosystem: new companies join the Lightning Platform and new features are delivered release after release. It is more than mandatory that consulting companies and, in general, IT divisions reserve a percentage of their employees' time for continuous learning, to allow Salesforce admins and devs to stay on track with the changing environment. Learning is a cost for sure - when you study you are not productive - but the benefit of a skilled, up-to-date employee outweighs that cost.

And I see no obstacles to administrators starting down the developer path as well: all they need is passion, curiosity and patience. Rome wasn't built in a day, and your developer skills won't be either. Trailhead is the starting point for any career path, and I guess in the coming years we'll see artificial intelligence absolutely stealing the show in the Salesforce world, so admins should be prepared for the revolution that is taking place year after year.

On making an impact in the Salesforce community

You have created highly popular Salesforce browser extensions like ORGanizer. Tell us how this came about. What does it take to build such successful products? Are you working on or planning to work on similar projects now?

I said that boredom is my fuel: when I get bored I usually start a new project or a new hobby, and the ORGanizer for Salesforce Chrome & Firefox extension (available at https://organizer.enree.co) is no different. It started as a personal project to ease my daily work on Salesforce projects, by adding little features that could speed up my administrative and coding tasks while increasing my overall productivity. Then I thought, why not deliver this cool thing to my Salesforce Ohana? That's where I believe the community took notice of me, and it has remained one of the main reasons for my Salesforce MVP nomination. After the cool experience of writing a book, which is something that has been on my checklist since I was a child, I have a few side projects related to Salesforce with some trailblazer friends that I believe will have a great impact on the Ohana. And, why not, perhaps another book in 2020?

Author Bio

Enrico Murru is a Solution and Technical Architect at WebResults (an engineering company), an Italian platinum Salesforce partner and Independent Software Vendor (ISV). He completed his MSc in Electronic Engineering at the University of Cagliari in 2007. In 2013, he launched a blog named Nerd @ Work. In 2016, he was nominated the first Italian Salesforce MVP for his commitment to the Salesforce community. Over the course of 3 years, Murru gained 20 Salesforce certifications, including the Salesforce Technical Architect certification. In 2016, he started one of his most popular projects, the ORGanizer for Salesforce Chrome and Firefox extension.
You can follow him on Twitter @Enreeco, on LinkedIn, GitHub and the Trailblazer Community, as well as on his personal blog.

Are you planning to embark on the journey of becoming a Salesforce Advanced Administrator? Confused about the various Salesforce certification programs and don't know what to choose? Grab this book right now! The Salesforce Advanced Administrator Certification Guide will help you master data access security, monitoring and auditing, and best practices for handling change management and data across organizations.

What makes Salesforce Lightning Platform a powerful, fast and intuitive user interface
What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help
Salesforce is buying Tableau in a $15.7 billion all-stock deal
Salesforce's open sourcing Centrifuge: A library for accelerating JVM restarts
Build a custom Admin Home page in Salesforce CRM Lightning Experience

Listen: Herman Fung explains what it's like to manage programmers and software engineers [Podcast]

Richard Gall
13 Nov 2019
2 min read
Management is a discipline that isn't short of coverage. In fact, it's probably fair to say that the world throws too much attention its way. This only serves to muddy the waters of management principles and makes it hard to determine what really matters. To complicate things, technology is ripping up the rule book when it comes to processes and hierarchies. Programming and engineering are forcing management gurus to rethink what it means to 'manage' today.

However, while the wealth of perspectives on modern management amounts to a bit of a cacophony, looking specifically at what it means to be a manager in a world defined by software can be useful. That's why, in the latest episode of the Packt Podcast, we spoke to Herman Fung. Herman is someone with both development and management experience, and, following the publication of his book The Successful Software Manager earlier this year, he's been spending a lot of time seriously considering what it means to be a software manager.

Listen to the podcast episode: https://soundcloud.com/packt-podcasts/what-does-it-mean-to-be-a-software-manager-herman-fung-explains

Some of the topics covered in this episode include:

- How to approach software management if you haven't done it before
- The impact of Agile and DevOps
- What makes managing in the context of engineering and technology different from other domains
- The differences between leadership and management

You can buy The Successful Software Manager from the Packt store as a print or eBook.

Follow Herman on Twitter: @FUNG14

Five reasons to begin a Packt subscription

Packt
12 Nov 2019
1 min read
The Packt library provides you with all the tools you need to stay relevant in tech, whether you're looking to brush up your PHP skills or take advantage of our learning paths to start from scratch. Here are our top five reasons to begin a Packt subscription.

Listen: How ActiveState is tackling "dependency hell" by providing enterprise-level support for open source programming languages [Podcast]

Richard Gall
08 Oct 2019
2 min read
"Open source back in the late nineties - and even throughout the 2000s - was really hard to use," ActiveState CEO Bart Copeland says. "Our job," he continues, "was to make it much easier for developers to use open source and much easier for enterprises to use open source." How does ActiveState work? But how does ActiveState actually do this? Copeland explains: "ActiveState is exactly like Red Hat. So what Red Hat did to Linux - providing enterprise-grade Linux distributions - ActiveState does for open source programming languages." Clearly ActiveState is an interesting product that's playing an important part in helping enterprises to better manage the widespread migration to open source technology. For the latest edition of the Packt Podcast we spoke to Copeland about ActiveState and the growth of open source over the last decade. We think you'll find what he has to say interesting... Listen: https://soundcloud.com/packt-podcasts/activestate-making-open-source-more-accessible-for-the-enterprise-interview-with-bart-copeland   Read next: Can a modified MIT ‘Hippocratic License’ to restrict misuse of open source software prompt a wave of ethical innovation in tech? Key quotes from Bart Copeland Copeland on the relationship between enterprise management and developers: "If you look at the enterprise… they want to make sure that it works and it doesn’t cause security threats and their in compliance with all the licenses. And the result is, due to the complexities of open source, management within the enterprise will often limit developers on what languages and what open source stacks they can use because the more stacks you have, the more complexity you have in an organization." Copeland on developer freedom: "A developer is a very technical and creative individual and they want to be able to use the right tools to build the right solution. And so if a developer is handcuffed to certain technology stacks, they may not be able to use the best technology to solve the problem." Learn more about ActiveState here.

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]

Bhagyashree R
09 Jul 2019
11 min read
Around this time in 2015, the W3C introduced WebAssembly, a small binary format that promises to bring near-native performance to the web. Since then it has been well received by web developers, with some going as far as to say that the "death of JavaScript is near." It is also supported in all the major browsers, including Firefox, Chrome, Safari, and Edge.

While WebAssembly was initially designed with the web in mind, it would be a waste not to take its performance and security benefits beyond the web as well. This year we are seeing many initiatives pushing WebAssembly beyond the web. One of them is by Fastly, an edge cloud platform provider. Earlier this year, Fastly open sourced its WebAssembly compiler and runtime, named Lucet. With Lucet, Fastly's edge cloud can execute tens of thousands of WebAssembly programs simultaneously. We had a great opportunity to interview Fastly's CTO Tyler McMullen, who gave us insight into why and how they came up with Lucet, what sets it apart from other WebAssembly compilers, the inner workings and design decisions behind Lucet, and more.

Here are some of the highlights from the interview:

Benefits of WebAssembly beyond the web

It is exciting to think that we will be able to experience a near-native experience on the web. But WebAssembly also aims to solve another major concern of today's times: security. "WebAssembly was designed for performance, and also for security. WebAssembly programs carry much stronger security guarantees than native code, with comparable performance. That makes it a great candidate for the edge cloud, where we can use the Lucet compiler and runtime to execute WebAssembly programs in isolation from each other, at a much lower resource and performance cost than competing approaches to multi-tenant isolation of native code, like processes, containers, or virtual machines."

Along with these security and performance benefits, growing support for WebAssembly in compilers like LLVM (since its version 8 release) also makes it suitable for non-web environments. McMullen adds: "Besides security, the other aspect that makes WebAssembly attractive beyond the browser is maturing support by compilers, most notably the LLVM toolchain, used by the Clang C compiler and Rust language compiler, among others. Rather than having to build a new language, or a new compiler, to emit code with the security guarantees we need, we can use the WebAssembly output of any compiler. And it means that tons of existing programs can be compiled to WebAssembly with minimal modification."

How Lucet ensures security

With security being one of the major focus areas of Lucet, we asked McMullen how security in Lucet works. "WebAssembly provides a set of guarantees about the security and safety of the code that can be verified during compilation. But those guarantees only hold if verification and compilation are done correctly. Those guarantees also require the runtime to cooperate. So there are a lot of moving pieces here that need to work in concert with each other. Lucet takes a security-by-contract approach to this problem. The compilation phase builds up a set of constraints for the runtime. Those constraints get embedded into the compiled artifact. The runtime then picks up those constraints and enforces them while loading and running the module. This lets us enforce things like which functions a module will be allowed to import from the embedding program, how much memory it will attempt to use, as well as the layout of that memory.
So, the security guarantees that Lucet provides end up being enforced with a combination of the compiler, the runtime, and the embedding program."

Compilation in Lucet

Lucet is designed to compile code written in C or Rust to WebAssembly, and then compile that to native code. So, why can't we compile code written in C or Rust directly to native code? McMullen says that going through WebAssembly gives you control over the behavior of the generated code. "If you used a typical C or Rust compiler you'd have relatively little in the way of guarantees about the behavior of the generated code. With Rust you'd have a bit more, in that you could guarantee memory safety, but that's not sufficient by itself. On the other hand, we could certainly create a new C or Rust compiler that guaranteed all the safety guarantees we've already discussed, but that would be a tremendous amount of work and would require still more work for each language you wanted to safely compile. We chose WebAssembly because it provides many of the safety and performance guarantees we're looking for and -- just as importantly -- also has community support. Rather than reinventing the wheel over and over again, we as a community can work together toward a common goal."

Lucet is still in its early stages of development. McMullen shares what the Lucet team is up to now: "Prior to open sourcing Lucet, we focused on WebAssembly programs emitted by a couple of compilers - LLVM via Clang and Rustc, and AssemblyScript. Supporting that subset of WebAssembly was sufficient to launch Terrarium late last year, where users can create complex web services that are compiled and deployed on demand. Since the Lucet announcement, we've seen interest and contributions from other languages, including Swift, Golang, Zig, and Wam. We've fixed a bunch of the spec compliance issues that blocked these users, and are actively working on fixing the remaining ones now."
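To make the first half of that pipeline concrete, here is a minimal sketch (ours, not Fastly's): ordinary Rust compiled to a WebAssembly module via the standard wasm32-wasi target (the target name as of 2019; it requires no Lucet-specific APIs). Ahead-of-time compiling the resulting .wasm file to native code is then Lucet's job; consult the Lucet repository for its exact command-line tools, which we don't reproduce here.

```rust
// src/main.rs -- plain Rust with no Lucet-specific APIs.
//
// Build it as a WebAssembly module (assumes a standard Rust toolchain):
//   rustup target add wasm32-wasi
//   cargo build --release --target wasm32-wasi
//
// The produced .wasm file is the artifact a compiler such as Lucet's
// would then ahead-of-time compile to native code, embedding the
// security constraints McMullen describes for the runtime to enforce.
fn main() {
    let greeting = "Hello from a WebAssembly module";
    println!("{}", greeting);
}
```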
To support, or not to support JavaScript, that is the question

While building WebAssembly runtimes today, developers have two paths to choose from: either supporting JavaScript or not. Lucet follows the latter path, which helps it stay simple yet performant. "Security and resource consumption also drove our design here. Modern, fast JavaScript engines are quite complex, require lots of RAM and startup time, and -- in order to make them fast -- highly advanced JIT compilers. These requirements run counter to what Fastly does. By dropping JavaScript, we can dramatically reduce the complexity and increase the performance of our system. To be clear, reducing complexity isn't just about making life easier on ourselves. By cutting out the massive complexity of JavaScript we can also reduce the attack surface and increase confidence in our safety guarantees."

In the myriad of WebAssembly runtimes, what sets Lucet apart

There are currently quite a few WebAssembly runtimes - for instance, Nebulet, Wasmjit, and Life - including some very similar to Lucet, like Wasmer and Wasmtime. We were curious to know what differences Lucet brings to the table. "Lucet was designed from the ground up for multi-tenant, highly concurrent use cases, which matches the runtime requirements of Fastly's edge cloud. The major design decisions that differentiate it are all focused on performance and resource consumption in our use case, where we need to launch WebAssembly instances for each request our edge cloud handles. Adam Foltzer, a senior software engineer at Fastly, wrote a detailed post on our design and benchmarked its performance here. Lucet shares a major component with the Wasmtime runtime: the Cranelift code generation engine. Wasmtime is currently designed for a single-tenant use case, and supports in-process compilation of WebAssembly, often called JIT. We are collaborating with the maintainers of Wasmtime on Cranelift, and on runtime implementations of the WebAssembly System Interface (WASI)."

Why Fastly chose Rust for implementing Lucet

Given Rust's memory and thread safety guarantees, a supportive community, and a quickly evolving toolchain, many major projects are being written or rewritten in Rust. One of them is Servo, an HTML rendering engine that will eventually replace Firefox's current rendering engine. Mozilla is also using Rust to rewrite many key parts of Firefox under Project Quantum. More recently, Facebook chose Rust to implement its controversial Libra blockchain. Fastly's decision to choose Rust as Lucet's implementation language was likewise focused on safety: "As for why we chose to write Lucet in Rust, the biggest reason was again safety. Writing compilers is complex work. Rust lets us take much of that complexity, describe it with types, and let the Rust compiler check our work in much deeper ways than other languages allow. It lets us focus on the problem we're trying to solve, rather than the incidental issues of complex software."

Fastly on the future of Rust and WebAssembly

In the past few years, Fastly has been focusing on Rust and WebAssembly. McMullen believes these technologies will be central to the future and will impact key domains in tech. While Rust enables developers to write both highly efficient and safe code, WebAssembly gives you the flexibility of writing code in your choice of language and running it on any platform. "With our role in the internet, efficiency is of utmost importance. That's why, traditionally, the type of software we build has been done with lower-level languages like C and C++. We still, today, write and maintain quite a bit of software in C. There are some problems where C is still the correct option. That domain of C -- and to a lesser extent, processor-specific assembly code -- has been largely unassailable for decades as we've developed languages that make writing software faster and easier, but at the cost of efficiency. That's been a great detriment to the entire industry because of how easy it is to write unsafe C code. We believe that Rust has finally been the language to change that. It allows us to write highly efficient code while also providing incredible safety.

Now, WebAssembly. WebAssembly has the potential to provide something that we've never, in the history of computing, managed to accomplish: a common platform. It was designed to run in a browser, but manages to provide the other components that are needed: efficiency, safety, and platform-independence. We imagine a future in which a WebAssembly module can be run in a browser, on your watch, on your phone, on your TV, in the games you play, and inside server software. We're still a ways off from that and many pieces are still needed. Lucet is our attempt at providing a WebAssembly compiler and runtime that is made to be used across many different use cases.
The first one is Fastly's edge, but we want to see many more."

Fastly on its other products and projects

Limitations in the legacy CDNs that Fastly's edge cloud platform addresses

A CDN, or Content Delivery Network, consists of a geographically distributed group of servers that work together to ensure that content requested by a user reaches them as fast as possible. However, legacy CDNs have many limitations, such as bulky XML-based configuration files and specifications. McMullen adds: "Legacy CDNs suffer from a number of technical limitations that make them particularly ill-equipped to address changing consumer expectations, not to mention developer and enterprise requirements. We've all had those online experiences when a site crashes or is non-responsive when we need it most, and our mission is to fuel the next modern digital experience, an experience that's fast, secure, and reliable. By and large, traditional CDNs are black-box solutions that are limited in their ability to provide real-time visibility and control, largely as a result of their outdated architecture, which adds cost and limits developers' flexibility to expand on functionality."

Fastly's edge cloud platform is not that; rather, it aims to address these limitations by bringing data closer to the user. "As a result, developers have not been truly empowered to pursue digital transformations, despite many attempts for improvement within the industry," he adds.

What other projects by Fastly we should look forward to

Fastly is continuously contributing towards making the internet better and safer by getting involved in projects like QUIC, Encrypted SNI, and standardizing WASI. Last year Fastly made three of its projects available on Fastly Labs: Terrarium, Fiddle, and Insights. When asked what else it is working on, McMullen shared: "Fastly Labs is heavily dependent on experimentation. If an experiment goes well and we think it'll be useful for others, then we release it. We have quite a few experiments currently underway, and many of them are around the items listed in the question: ESNI, QUIC, WASI, as well as others like DNS-over-HTTPS. More iteration on what we have now is also in the cards. Lucet has come a long way, but it still has so much room to grow. Expect to see some pretty compelling developments in performance, safety, and features there."

Follow Tyler McMullen on Twitter: @tbmcmullen

Learn more about Fastly and its edge cloud platform at Fastly's official website.

Fastly open sources Lucet, a native WebAssembly compiler and runtime
Fastly, edge cloud platform, files for IPO
Rust's original creator, Graydon Hoare on the current state of system programming and safety

"Developers don't belong on a pedestal, they're doing a job like everyone else" - April Wensel on toxic tech culture and Compassionate Coding [Interview]

Richard Gall
02 Jul 2019
15 min read
It's well known that there's a toxic element to tech culture. And although it isn't new, it has nevertheless surfaced and become more visible thanks to the increasing maturity of the platforms that are today shaping public discourse. As those platforms empower new voices to speak and allow new communities to organize, the very fabric of the culture on which many of them were built - hyper-masculine, competitive, and with scant regard for the wider implications of their decisions on users - becomes the target of critique. But while everything from sexual harassment cover-ups to content moderation crises signals deep-rooted issues inside the tech industry, substantially transforming tech culture is a much harder problem to solve. It's also one that many leading organizations and individuals seem unwilling to properly engage with. This is where April Wensel comes in. She's made it her mission to help tackle issues of toxicity and ultimately transform tech culture with her organization, Compassionate Coding.

What is Compassionate Coding?

Compassionate Coding was launched in 2016 as a "response to a lot of the problems I saw in the tech industry with culture," Wensel tells me when we speak over Skype. "The common thread," she explains, "was a lack of concern for human beings that are involved in technology or affected by technology." This is particularly significant for Wensel. While it might be tempting to see the Google Walkout, the Cambridge Analytica scandal, and the controversy around Rekognition as nothing more than a collection of troubling but ultimately unrelated issues, it's vital that we understand them together.

[Image via compassionatecoding.com]

"For things to really change - we can't approach each issue as one problem," Wensel says. "They really have the same root problem, which is this lack of compassion." Compassion is an important and very deliberately chosen word. It wasn't chosen purely for its alliterative impact. "I chose compassion because I see compassion as a really rational thing; not just an intangible thing." Compassion is, Wensel continues, "a more active form of empathy. Empathy allows you to feel what others are feeling; compassion allows you to see suffering, and - the important piece - to want to alleviate suffering."

Compassion as an antidote to toxic tech culture

To talk about compassion in the tech industry is provocative. She recalls someone on Reddit describing the idea of compassionate coding as 'girly'. But she tries to "tune out" online resistance, adopting a measured attitude: "whenever you have new, challenging ideas people get defensive." Even when people weren't aggressively opposed to her ideas, initially there was a distinct unwillingness to really engage with them. "I… saw that it wasn't cool to talk about these things. If you started talking about humans or whatnot, people are like, oh, you must be a designer or you must be in product… No, I'm a developer. I just care about the people we're impacting."

Crucial to this attitude is Wensel's point that compassionate coding is something that can have real effects at every level. She describes it as "a new way of weighing decisions on a daily basis… it goes from high-level things like what are we building? to low-level things - what should I name this variable to make it easier for somebody in the future to understand?"
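As a tiny, hypothetical illustration of that low-level end of the spectrum (ours, not Wensel's), here is the same value named twice:

```python
# Opaque: the next reader has to reverse-engineer both meaning and unit.
t = 86400

# Compassionate: the name carries the meaning and the unit for whoever
# reads (or debugs) this code next.
SECONDS_PER_DAY = 86400
```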
Distributing power through diversity

The context into which Compassionate Coding has entered the world is complex. High-profile scandals need attention and action, but they are only the tip of the iceberg. They are symptomatic of low-lying problems that often pass unnoticed. Diversity is a good example of this. Although it's often framed in the somewhat prosaic context of equal opportunities, it's actually a powerful way of breaking apart privilege and the concentration of power that allows harmful products to be released and discrimination to find its way into organizational practices. By bringing people from a diverse range of backgrounds with different experiences into positions of authority and influence, the decisions that are made at all levels are supported by a greater awareness of context. In effect, decision-making becomes more rigorous. Similarly, organizations themselves become safer and more welcoming places for employees from minority backgrounds, because networks of support can form, making challenging malpractice or even abuse less of a professional risk.

This is something Wensel is well aware of. She takes umbrage at the concept of 'diversity of thought', which she sees as a way to mask a lack of genuine diversity. "A lot of companies claim they have diversity of thought…" she says, "that are all white men." "You can't really have true diversity of thought if everybody has come from the same background and hasn't had any of the challenges that people from minority backgrounds might face."

The barriers to diversity are largely structural problems that can be felt far beyond tech. But according to Wensel, there are nevertheless cultural issues unique to the industry that are compounding the problem: "If you say you value diversity but really one of your values is the efficiency or perceived efficiency that comes when everyone thinks the same way, then you have to realise that you're gonna have to make some concessions in terms of creating a bit of discomfort when people are debating issues… because there is going to be some conflict when you create these diverse spaces."

Put another way: in an industry where you're expected to move quickly and adapt, where you're constantly looking for efficiency, diversity is always going to be an issue. It brings friction. For Wensel, the role Compassionate Coding can play in supporting diversity and inclusion is to shift the industry mindset away from one that is scared of friction, towards one where friction is seen as vital if we're to build better, safer, and more secure software. She points out that diversity isn't just an initiative; it must be something that is constantly practiced: "Inclusion has to be a daily practice and so you need somebody who is in a position of power who can help establish inclusive practices," she says. But it is also something organizations need to invest in: "companies need to be paying people to do this because a lot of times the burden falls on underrepresented groups in the company and that's not right."

Read next: Github Sponsors: Could corporate strategy eat FOSS culture for dinner?

The problem with meritocracy

If diversity can help unlock a better way of working in the tech industry, there are still other industry shibboleths that need to be slain. According to Wensel, one of these is meritocracy.
It is, she argues, often used as cover by those who are resistant to genuine diversity. "A lot of time in tech people want to talk about a meritocracy… [Recode co-founder] Kara Swisher says tech is more like a mirrortocracy, because the people who succeed look like the ones who are already in the industry."

https://youtu.be/ng4sbQHCGLQ

But what makes this problem worse is the fact that tech's meritocracy is haunted by stereotypes and assumptions about what it means to be a developer. She points to a study done by IBM in the sixties that aimed to find out "what makes a good, strong programmer." "They found, among other things, that programmers like puzzles, and they don't like people… So it created a stereotype of what it means to be a good developer, and part of that was not liking people. And the reason that was so important - even though it was back in the sixties - is that IBM was a very influential company in terms of establishing tech culture," Wensel says.

Stack Overflow's negative impact on tech culture

What has further exacerbated this issue is how influential figures have helped to reinforce these stereotypes, effectively buying into the image of a programmer put forward in IBM's research. In particular, Wensel calls out Stack Overflow and its founders Joel Spolsky and Jeff Atwood. "If you read through some of their old blogs from the early 2000s," she says, "you can see a lot of the elements of the toxic culture that I talk about in so much of my work. Things like... hyper-competition… an over-focus on aggressive competition… things like zero-sum thinking. There's an elitism - there's not enough for everybody and some people are better than others."

Wensel suggests the attitudes of Atwood and Spolsky have been instrumental in forming the worst elements of the website, "where the focus is not on helping people, but on accumulating points in the game of Stack Overflow." Wensel detailed her experiences of Stack Overflow and offered an incisive critique of the website in a post on Medium in 2018. She reveals that although she has used Stack Overflow since its launch in 2008 (the year she graduated from her Computer Science class), "the condescending and blatantly rude responses on the site" have dissuaded her from ever actually creating an account.

Although the Compassionate Coding founder can see that the site is trying to change things, she believes it can still do a lot more (in her post she includes a response from Stack Overflow employee Joe Friend). The problem, however, is that this would be a risk for the company. "They really have to be willing to alienate their audience - the ones who are contributing to the toxic culture."

Ultimately this highlights the problem facing many companies and communities in the tech industry: inclusivity and diversity aren't things that can simply be integrated into established patterns and beliefs. Those beliefs and values need to change too. Which can, of course, be painful.

Dismantling the hierarchy of tech skills

Again, it's important to note that Wensel's criticisms aren't made purely on the grounds of civility or accessibility. The culture she describes is ultimately bad for the industry as a whole and bad for users. It helps to cultivate an engineering culture where certain skills are overvalued while others are excluded. This has consequences for how we view ourselves in the industry (we're never good enough, and we constantly have to compete), but it also means the sort of work and thought that should go into building and delivering software is viewed as less important.
"None of this is productive and none of this is creating value. We need people doing all of these roles, and so which one of these has more prestige shouldn't be an issue," Wensel argues. "That's why one very clear indication that there's a problem in the culture is the fact that we are obsessed with the need to rank skills... software projects are failing for people reasons. And yet people who are good with people and technology are seen as too soft… they're put in a box of not being technical."

Wensel argues that we need to stop worrying about who is and who isn't a developer. "There's no such thing as a real developer. If you write code you're a developer... that's enough… Developers are no better than designers, or product managers, or salespeople… that hierarchy is even more entrenched because it's often reflected in salaries - so developers get paid disproportionately more than all these other roles."

The myth of scarcity and the tech skills gap

What's more, Wensel believes this hierarchy of programming skills is actually helping to perpetuate the notion of a tech skills gap. She believes the idea that there is a scarcity of "tech talent" is a "myth". "I think there's tons of talent in tech that's being overlooked for reasons of unconscious bias, stereotypes…" she explains. "Once we start to bring these people to the table - people who are out there already, very talented, very skilled - it will start to melt away this whole putting developers on a pedestal… developers don't belong on a pedestal, they're just doing a job like anybody else."

Wensel believes we will - and need to - move towards a world where programming skills lose their "prestige". Having Python or React on your CV, for example, should really be no different from saying you know how to use Excel. "As these skills become seen for what they are, which is just something that anybody can learn if they put in the time, then I think that the prestige around them will be reduced."

How Agile is changing what it means to be a developer

We're moving towards a world where the solipsistic valorization of technical skill is becoming outdated thanks to broader industry trends. With DevOps forcing developers to become accountable for the full lifecycle of their code, and distributed systems engineering requiring a holistic awareness of a complex network of dependencies, it's clear that sensitivity to how your code interacts with and impacts users in the real world matters more in software engineering than it ever has. "Over and over again I see, both in formal studies and anecdotally… what's causing software projects to fail or to be delayed... are people problems. Coordination problems, planning problems, resourcing, all of that - not purely technical problems," says Wensel.

That said, Wensel views Agile as a positive trend for the industry. "A lot of the ideas behind agile software development are really positive. In a lot of ways I see it as the first step in bringing emotional intelligence to the software team, because you're asked to consider the end user…"

Read next: DevOps Engineering and Full-Stack Development – 2 Sides of the Same Agile Coin

However, she also says that software engineering practices and philosophies like Agile only go so far. "The problem is that they [proponents of Agile] didn't bring in the ethics there.
So you can still create a lot of value very efficiently with agile development without considering the long-term impact." Agile is a good context for Wensel to drive her mission forward - but it can't improve things on its own.

Read next: Honeycomb CEO Charity Majors discusses observability and dealing with "the coming armageddon of complexity" [Interview]

Putting Compassionate Coding into practice

It's clear that Compassionate Coding is needed in today's software industry. Yes, tech culture's toxicity is damaging and dangerous for everyone, but it's also not fit for purpose. It's stopping us from evolving and building the software people actually need. Think of it this way: it's stopping us from putting users first at a time when the very idea of the individual feels vulnerable, thanks to a whirlwind of reactionary politics and rampant, unsustainable capitalism. However, it's important that we actually see Compassionate Coding as something that can be practiced, both by individuals and organizations.

The 4 levels of compassionate coding

Wensel explains compassionate coding as involving 4 key 'levels'. These levels turn the concept into something practical that every individual and team can actually go and do themselves. "It's how you treat yourself with compassion, how you treat your coworkers and your collaborators with compassion, how you treat the direct users of the software you're creating… and how you treat the community at large, who may or may not be people who use your product," she says.

Wensel is not only continuing to deliver training sessions and keynotes for her clients, but is also writing a book that will make her ideas more accessible. I asked her what advice she would offer individuals and businesses that want to follow her lead now. "The biggest thing people can do," she says, "is to analyze their own thinking… Do a bit of meta-cognition to understand: how do I think? Where do I have biases?" At an organizational level, businesses should be "prioritizing talking about these issues, making it safe to talk about these issues, hiring people who understand these issues and can improve your company in these ways," she says.

The importance of the individual in tackling tech's toxicity

But Wensel still believes in the importance of individuals in enacting change. "It's humans all the way down and all the way up… Leadership in a company and [the issue of] who makes decisions is just... another set of humans, and so I think changing individuals is really powerful." Her approach is ultimately one that espouses the values of Compassionate Coding. "You can't control the outcome but you can control the actions you take. So I have a lot of faith in the change that motivated individuals can make." If everyone in the industry could adopt that attitude, we'd surely be some way towards better professional lives and better experiences and products for users.

Follow April on Twitter: @aprilwensel

Other projects that are making the tech industry better

April cited a number of organizations that she believes are doing great and important work across the tech industry:

- Project Include, an organization that wants to accelerate diversity in the industry.
- Black Girls Code, which aims to improve the number of women of color in the digital sector.
- Elephant in the Valley, which is tackling gender disparity in Silicon Valley.
- Kapor Center, removing barriers for underrepresented groups in tech.

Learn more about the issues they're helping to solve, and support them if you can.

Red Badger Tech Director Viktor Charypar talks monorepos, lifelong learning, and the challenges facing open source software [Interview]

Richard Gall
10 May 2019
7 min read
Back in February, Viktor Charypar, Tech Director at Red Badger, explained the benefits of using a monorepo. For many teams, especially those without the resources or a highly developed and well-supported engineering culture, the idea of a monorepo might sound a little strange. Following on from that piece, I spoke to Viktor to get a little more detail on the benefits of a monorepo and why engineering teams should seriously consider using one. But I didn't just speak to him about monorepos - I was also interested in how Red Badger builds a forward-thinking engineering culture that can empower its clients, and how the team embraces continuous learning to ensure everyone is on top of the trends and tools that are going to be impacting digital transformation in the future. So, let's take a look at what Viktor had to say...

Why monorepos now?

Richard Gall: Why a monorepo now? If you're dealing with multiple microservices, doesn't it make sense to separate source code?

Viktor Charypar: It would seem to make sense, but microservices architecture actually introduces a new level of complexity that needs to be managed, which monorepos make much easier. The main issue is in the dependencies between the services and the contracts under which they agree to exchange data. If one side of the contract changes in an incompatible way without the other side adapting to that change, the system no longer works. This problem grows with the number of services in the system, and so it's especially prominent in microservices architectures.

Managing each service's source code in a separate repository makes it more difficult to understand its ties to the rest of the system. This forces you to adopt some kind of external versioning scheme, such as semantic versioning, to express which revisions of services work together as things change. These versions are decided manually by engineers as changes are made, according to a set of rules, and the dependent services are then updated to refer to the latest version of the service they consume once they have been changed to be compatible. This is time-consuming and error-prone.

In a monorepo, all the components of the system are versioned together and changes can be made across the system. This not only means an external versioning scheme is not required, but it also makes it easier to test and enforce contracts between services. Monorepos really come into their own when they are coupled with a Continuous Integration system aware of the dependencies between components in the repo. Given a change made by a developer and knowledge of the dependency "graph", we can deterministically decide which system components can be affected by the change and therefore need to be retested. It is then up to the owners of each service to do a level of testing of their dependencies to make sure their behaviour didn't change significantly enough to break their own functionality. All of this is automated and can be executed without human intervention. Humans just make changes to the software and express expectations on their dependencies in terms of contracts and tests.
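To make that dependency-aware retesting concrete, here is a minimal sketch of the reverse-dependency walk such a CI system might use to decide what to retest. The service names and the shape of the graph are hypothetical - this isn't Red Badger's tooling, just an illustration of the idea Viktor describes:

```python
from collections import deque

# Dependency graph: each service maps to the services it depends on.
# These service names are made up purely for illustration.
DEPENDS_ON = {
    "web-frontend": ["orders-api", "users-api"],
    "orders-api": ["users-api", "payments-api"],
    "payments-api": [],
    "users-api": [],
}

def reverse_graph(depends_on):
    """Invert the graph so each service maps to its direct dependents."""
    dependents = {name: [] for name in depends_on}
    for service, deps in depends_on.items():
        for dep in deps:
            dependents[dep].append(service)
    return dependents

def affected_by(changed, depends_on):
    """Breadth-first walk over dependents: everything reachable from
    the changed components is potentially affected and needs retesting."""
    dependents = reverse_graph(depends_on)
    affected = set(changed)
    queue = deque(changed)
    while queue:
        current = queue.popleft()
        for dependent in dependents[current]:
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

if __name__ == "__main__":
    # A change to users-api means orders-api and web-frontend must also be retested.
    print(sorted(affected_by({"users-api"}, DEPENDS_ON)))
    # ['orders-api', 'users-api', 'web-frontend']
```

In a real pipeline, the resulting set would drive which builds and contract tests actually run for a given commit, so unrelated components are left alone.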
Read next: Mozilla's updated policies will ban extensions with obfuscated code

Digital transformation challenges

RG: What common problems are clients coming to you with?

VC: In general, our clients recognise they need help with their digital product capabilities, i.e. delivering interesting propositions to their customers as digital products, typically websites and mobile apps. In large enterprise companies, this ability is generally predicated on going through a digital transformation - adopting agile delivery methods, breaking down functional silos and working in cross-discipline, vertically aligned teams that can decide things quickly and adapt to how customers respond to their product offering.

Our clients typically come to us with one of a range of problems: needing help with product strategy, i.e. what to offer their customers and how to find which of the many ideas has a market fit; knowing what to do but struggling to deliver it at pace; or already having a digital product offering, but one which doesn't perform as expected - either from the perspective of customer behaviour (e.g. a low conversion rate) or from a technical quality perspective, i.e. the website is unstable, struggles under high load, suffers long outages, etc.

While our strength is traditionally in fast product delivery and quality, we can help across the board, from product strategy to what we call "empower and embed" - demonstrating how to deliver digital products quickly, sustainably and with high quality, helping to build internal capability and then handing over. Essentially, we want to help our clients build sustainable businesses.

Read next: Linux forms Urban Computing Foundation: Set of open source tools to build autonomous vehicles and smart infrastructure

Learning and assessing new software and tools

RG: How do you stay on top of new tools? Do you have a learning culture at Red Badger?

VC: We absolutely do - from simple day-to-day things, like all engineers being encouraged to pair program to learn from each other, or everyone in the company having a yearly training budget as one of the benefits, to things like a yearly internal mini conference called Tech Lab, where all the engineers get together and share their latest learnings and general experience from projects. We have actually recently published a report, which started with an activity at the last Tech Lab, that answers a lot of the questions above. It's available on our website (and we'll also follow it with a series of events). We also run a few regular meetups in London, the biggest being the London React Meetup, which we've been hosting regularly for about four years.

RG: How do you assess tools?

VC: There are a few things we generally look at. The first is obviously experimenting with the tool to work out what it does and how. We're in a privileged position of starting new projects, often greenfield ones, fairly regularly. We typically use about 80% tools we know and trust and about 20% new ones, which we want to try out "in anger" and learn about. We also look at who is behind these tools, which are generally open source, and whether there is momentum behind them and support from the community. Open source software typically goes through a period of rapid innovation and competition in a certain area and then, eventually, the community settles on a few options that work the best and fit the different problems people are trying to solve.

The future of open source - is it sustainable?

RG: How do you see the future of open source - is it sustainable on its current model?

VC: That's an interesting thing to think about! It seems like the open source model is widely misunderstood as software being built by dedicated developers in their free time.
But in reality, most large, popular open source projects are backed by large software companies, and people maintain them as their day job - for example, Linux, Kubernetes and React. Even the web standards are set by standards bodies comprised of professionals supported by the major browser vendors.

I think the model with a sole maintainer working on something in their spare time doesn't really work if their project gets very popular and the demand on their time grows. We all know how people tend to behave on the internet, and the software industry is no exception, so maintainers who do it as a hobby are at a pretty high risk of burning out. For the major open source projects this seems to be more of an exception, as they are typically maintained by a team of people employed by a company invested in the project. The sponsor benefits from the community contributions and, if the project gets popular, from controlling the direction of a de facto standard; the community benefits from someone else doing the lion's share of the work.

I look at it as being similar to science, where different people publicly contribute to push the boundaries of knowledge, because pooling resources makes more sense and doesn't stop any individual contributor from profiting from the results. In that sense, I think it's a pretty sustainable model, and one that leads to better quality, more versatile software.


Site reliability engineering: Nat Welch on what it is and why we need it [Interview]

Richard Gall
26 Sep 2018
4 min read
At a time when software systems are growing in complexity, and when the expectations and demands from users have never been higher, it's easy to forget that just making things work can be a huge challenge. That's where site reliability engineering (SRE) comes in; it's one of the reasons we're starting to see it grow as a discipline and job role. The central philosophy behind site reliability engineering can be seen in trends like chaos engineering. As Gremlin CTO Matt Fornaciari said, speaking to us in June, "chaos engineering is simply part of the SRE toolkit." For site reliability engineers, software resilience isn't an optional extra - it's critical. In crude terms, downtime for a retail site means real monetary losses, but the point extends beyond that. Because people and software systems are so interdependent, SRE is a useful way of thinking about how we build software more broadly.

To get to the heart of what site reliability engineering is, I spoke to Nat Welch, an SRE currently working at First Look Media, whose experience includes time at Google and Hillary Clinton's 2016 presidential campaign. Nat has just published a book with Packt called Real-World SRE. You can find it here. Follow Nat on Twitter: @icco

What is site reliability engineering?

Nat Welch: The idea [of site reliability engineering] is to write and modify software to improve the reliability of a website or system. As a term and field, it was founded by Google in the early 2000s, and has slowly spread across the rest of the industry. It means having engineers dedicated to global system health and reliability, working with every layer of the business to improve the reliability of systems.
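As a toy illustration of the kind of code that falls under that remit, here is a sketch of a retry wrapper with exponential backoff for a flaky HTTP call. The function and its parameters are hypothetical - an example for this article, not something from Nat's book or the interview:

```python
import random
import time
import urllib.request
from urllib.error import URLError

def fetch_with_retries(url, attempts=4, base_delay=0.5):
    """Fetch a URL, retrying transient failures with exponential backoff.

    A deliberately simple sketch: real reliability work would also add
    circuit breaking, metrics, and a distinction between retryable and
    non-retryable errors.
    """
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.read()
        except URLError:
            if attempt == attempts - 1:
                raise  # retry budget exhausted: surface the failure
            # Back off exponentially, with jitter so many clients
            # don't all retry in lockstep after an outage.
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
```

The point is less the specific code than the mindset Nat describes: treating failure as routine and building tolerance for it into the software itself.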
Why do we need site reliability engineering?

Nat Welch: Customers get mad if your website is down. Engineers have often had trouble weighing system reliability work against new feature work. Because of this, product feature work often takes priority, and reliability decisions are made by guesswork. By building teams of engineers focused exclusively on reliability, there can be someone arguing for and focusing on reliability in a way that improves the speed and efficiency of product teams.

Why do we need SRE now, in 2018?

Nat Welch: Part of it is that people are finally starting to build systems more like how Google has been building for years (heavy use of containers, lots of services, heavily distributed). The other part is a marketing effort by Google so that they can make it easier to hire.

What are the core responsibilities of an SRE? How do they sit within a team?

Nat Welch: SRE is just a specialization of a developer. They sit on equal footing with the rest of the developers on the team, because the system is everyone's responsibility. But while some engineers will focus primarily on new features, SREs will primarily focus on system reliability. This does not mean either side never works on the other (SREs often write features, product devs often write code to make the system more reliable, etc.), it just means their primary focus when defining priorities is different.

What are the biggest challenges for site reliability engineers?

Nat Welch: Communication with everyone (product, finance, the executive team, etc.), and focus - it's very easy to get lost in firefighting.

What are the 3 key skills you need to be a good SRE?

Nat Welch: Communication skills, software development skills, system design skills. You need to be able to write code, review code, work with others, break large projects into small pieces and distribute the work among people, but you also need to be able to take a system (working or broken) and figure out how it is designed and how it works.

Thanks Nat! Site reliability engineering, then, is a response to a broader change in the types of software infrastructure we are building and using today. It's certainly a role that offers a lot of scope for ambitious and curious developers interested in a range of problems in software development, from UX to security. If you want to learn more, take a look at Nat's book.