
Author Posts - Application Development

8 Articles

Understanding Memory Allocation and Deallocation in the .NET Common Language Runtime (CLR)

Trevoir Williams
29 Oct 2024
10 min read
Introduction

This article provides an in-depth exploration of memory allocation and deallocation in the .NET Common Language Runtime (CLR), covering essential concepts and mechanisms that every .NET developer should understand for optimal application performance. Starting with the fundamentals of stack and heap memory allocation, we delve into how the CLR manages different types of data and the roles these areas play in memory efficiency. We also examine the CLR’s generational garbage collection model, which is designed to handle short-lived and long-lived objects efficiently, minimizing resource waste and reducing memory fragmentation. To help developers apply these concepts practically, the article includes best practices for memory management, such as optimizing object creation, managing unmanaged resources with IDisposable, and leveraging profiling tools. This knowledge equips developers to write .NET applications that are not only memory-efficient but also maintainable and scalable.

Understanding Memory Allocation and Deallocation in the .NET Common Language Runtime (CLR)

Memory management is a cornerstone of software development, and in the .NET ecosystem, the Common Language Runtime (CLR) plays a pivotal role in how memory is allocated and deallocated. The CLR abstracts much of the complexity involved in memory management, enabling developers to focus more on building applications than on managing resources. Understanding how memory allocation and deallocation work under the hood can help you write more efficient and performant .NET applications.

Memory Allocation in the CLR

When you create objects in a .NET application, the CLR allocates memory. This process involves several key components, including the stack, the heap, and the garbage collector. In .NET, memory is allocated in two main areas: the stack and the heap.

Stack Allocation: The stack is a Last-In-First-Out (LIFO) data structure for storing value types and method calls. Variables stored on the stack are automatically managed, meaning that when a method exits, all its local variables are popped off the stack and the memory is reclaimed. This process is very efficient because the stack operates linearly and predictably.

Heap Allocation: The heap, on the other hand, is used for reference types (such as objects and arrays). Memory on the heap is allocated dynamically, meaning that the size and lifespan of objects are not known until runtime. When you create a new object, memory is allocated on the heap, and a reference to that memory is returned to the stack, where the reference-type variable is stored.

When a .NET application starts, the CLR reserves a contiguous block of memory called the managed heap. This is where all reference-type objects are stored. The managed heap is divided into three generations (0, 1, and 2), which are part of the Garbage Collector (GC) strategy to optimize memory management (the sketch after this list shows the generations in action):

Generation 0: Short-lived objects are initially allocated here. This is typically where small and temporary objects reside.

Generation 1: Acts as a buffer between short-lived and long-lived objects. Objects that survive a garbage collection in Generation 0 are promoted to Generation 1.

Generation 2: Long-lived objects, such as static data, reside here. Objects that survive multiple garbage collections are eventually moved to this generation.
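The behavior described above can be observed directly with the CLR's GC introspection APIs. Below is a minimal sketch (a hypothetical console demo, not from the article) that allocates a value type and a reference type, then uses GC.GetGeneration to watch a surviving object get promoted through the generations. Forcing collections with GC.Collect is for demonstration only and is not a recommended practice in production code.

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        int counter = 42;            // value type: lives on this method's stack frame
        var buffer = new byte[1024]; // reference type: allocated on the managed heap

        Console.WriteLine(GC.GetGeneration(buffer)); // 0: freshly allocated

        GC.Collect(); // demo only: forces a collection so the survivor is promoted
        Console.WriteLine(GC.GetGeneration(buffer)); // 1: survived one collection

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(buffer)); // 2: survived multiple collections

        GC.KeepAlive(buffer);        // keep the reference reachable until this point
        Console.WriteLine(counter);  // stack memory is reclaimed when Main returns
    }
}
```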
When a new object is created, the CLR checks the available space in Generation 0 and allocates memory for the object. If Generation 0 is full, the GC is triggered to reclaim memory by removing objects that are no longer in use.

Memory Deallocation and Garbage Collection

The CLR’s garbage collector is responsible for reclaiming memory by removing objects that are no longer reachable in the application. Unlike manual memory management, where developers must explicitly free memory, the CLR manages this automatically through garbage collection, which simplifies memory management but requires an understanding of how and when the process occurs. Garbage collection in the CLR involves three main steps:

Marking: The GC identifies all objects still in use by following references from the root objects (such as global and static references, local variables, and CPU registers). Any objects not reachable from these roots are considered garbage.

Relocating: The GC then updates the references to the surviving objects to ensure that they point to the correct locations after memory is compacted.

Compacting: The memory occupied by the unreachable (garbage) objects is reclaimed, and the remaining objects are moved closer together in memory. This compaction step reduces fragmentation and makes future memory allocations more efficient.

The CLR uses a generational approach to garbage collection, designed to optimize performance by reducing the amount of memory that needs to be examined and reclaimed. Generation 0 collections occur frequently but are fast, because most objects in this generation are short-lived and can be quickly reclaimed. Generation 1 collections are less frequent but handle objects that have survived at least one garbage collection. Generation 2 collections are the most comprehensive and involve long-lived objects that have survived multiple collections; these collections are slower and more resource-intensive.

Best Practices for Managing Memory in .NET

Understanding how the CLR handles memory allocation and deallocation can guide you in writing more efficient code. Here are a few best practices:

Minimize the Creation of Large Objects: Large objects (greater than 85,000 bytes) are allocated in a special section of the heap called the Large Object Heap (LOH), which is not compacted by default because of the overhead associated with moving large blocks of memory. Large objects should be used judiciously because they are expensive to allocate and manage.

Use `IDisposable` and `using` Statements: Implementing the `IDisposable` interface and using `using` statements ensures that unmanaged resources are released promptly (see the sketch after this list).

Profile Your Applications: Regularly use profiling tools to monitor memory usage and identify potential memory leaks or inefficiencies.
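To make the second practice concrete, here is a minimal sketch of deterministic cleanup with `IDisposable` and a `using` statement; `FileStream` stands in for any type that wraps an unmanaged resource such as a file handle.

```csharp
using System;
using System.IO;

class DisposalDemo
{
    static void Main()
    {
        // FileStream implements IDisposable; the using statement guarantees
        // Dispose() runs when the block exits, even if an exception is thrown,
        // so the underlying OS file handle is released promptly rather than
        // waiting for a finalizer during some future garbage collection.
        using (var stream = new FileStream("data.bin", FileMode.OpenOrCreate))
        {
            stream.WriteByte(0x42);
        } // stream.Dispose() is called here
    }
}
```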
Conclusion

Mastering memory management in .NET is essential for building high-performance, reliable applications. By understanding the intricacies of the CLR, garbage collection, and best practices in memory management, you can optimize your applications to run more efficiently and avoid common pitfalls like memory leaks and fragmentation.

Effective .NET Memory Management, written by Trevoir Williams, is your essential guide to mastering the complexities of memory management in .NET programming. This comprehensive resource equips developers with the tools and techniques to build memory-efficient, high-performance applications. The book delves into fundamental concepts like:

Memory Allocation and Garbage Collection
Memory Profiling and Optimization Strategies
Low-Level Programming with Unsafe Code

Through practical examples and best practices, you’ll learn how to prevent memory leaks, optimize resource usage, and enhance application scalability. Whether you’re developing desktop, web, or cloud-based applications, this book provides the insights you need to manage memory effectively and ensure your .NET applications run smoothly and efficiently.

Author Bio

Trevoir Williams, a passionate software and system engineer from Jamaica, shares his extensive knowledge with students worldwide. He holds a Master’s degree in Computer Science with a focus on Software Development, as well as multiple Microsoft Azure certifications. His diverse experience includes software consulting, engineering, database development, cloud systems, server administration, and lecturing, reflecting his commitment to technological excellence and education. He is also a talented musician, showcasing his versatility. He has penned works such as Microservices Design Patterns in .NET and Azure Integration Guide for Business. His practical approach to teaching helps students grasp both theory and real-world applications.


Why ASP.Net Core is the best choice to build enterprise web applications [Interview]

Vincy Davis
30 Dec 2019
9 min read
ASP.NET Core, the cross-platform and open-source framework, is developed by Microsoft for building modern, cloud-based, and internet-connected applications. Designed to enable runtime components, APIs, compilers, and languages to evolve quickly, it runs on macOS, Linux, and Windows on the .NET Core or .NET Framework.

To learn more about the development cycle of ASP.NET Core and its future design directions, we interviewed Kenneth Y. Fukizi, the author of the book ‘Learn ASP.NET Core 3.0, Second Edition’, published by Packt Publishing. He has more than 14 years of professional experience and works as a software engineering contractor/consultant for client organizations based in South Africa, Australia, the U.S.A., and Canada.

Kenneth believes that the current performance of ASP.NET Core is far superior to that of its predecessors and competitor frameworks. He prefers ASP.NET Core for building enterprise web applications because of the flexibility that comes with it. He is also excited that .NET 5 will have more interoperability with other programming languages. When asked about his thoughts on Microsoft supporting the open-source platform Pulumi, Kenneth says it will definitely help developers in building modern cloud applications.

If you are an ASP.NET Core user, you should definitely read part 1 of our interview, ‘Kenneth Fukizi on the new Blazor framework, gRPC support, and other exciting features in ASP.NET Core 3.0’. In it, he shares his impressions of all the exciting new features in the ASP.NET Core 3.0 release and explains why all ASP.NET Core users should look forward to the high performance and scalability that comes with gRPC in this new release. Here is the full interview with Kenneth on ASP.NET Core.

On why ASP.NET Core is the best option for web application development

What makes .NET Core one of the best general-purpose development platforms? How does ASP.NET Core enhance the performance of web applications? What do you think are the key benefits of ASP.NET Core for enterprise web application development?

With .NET Core as a platform, you can develop web applications, desktop applications, cloud-native applications, mobile applications, gaming applications, Internet of Things (IoT) applications, and Artificial Intelligence (AI) applications; you probably can’t ask for more from a development platform. ASP.NET Core has gone a long way in making sure that web application performance is enhanced compared to its predecessors, and indeed some of its competitor frameworks, for example by making full use of asynchronous programming models: ASP.NET Core has pretty much eliminated the need to have CPU cycles waiting on database queries, web service calls, and I/O operations, wasting precious resources.

ASP.NET Core was designed from the ground up, unifying both the MVC and Web API frameworks. It has removed the dependency on IIS and shed several other pieces of excess baggage, including a preload of third-party libraries; as a result, it is much more lightweight and fast, gaining performance along the way. We could say a lot more about performance, including its improved output-caching capability, but the truth of the matter is that it is getting more performant by the day. You can actually track its performance metrics through the TechEmpower benchmarks, publicly available on the web.
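As an illustration of the asynchronous model Kenneth describes, here is a minimal sketch (a hypothetical controller and helper, not from the book) of an ASP.NET Core action that awaits a data lookup instead of blocking the request thread:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id)
    {
        // While the (simulated) query is in flight, await returns the thread
        // to the pool instead of leaving it blocked on I/O.
        var order = await FindOrderAsync(id);

        if (order is null)
        {
            return NotFound();
        }
        return Ok(order);
    }

    // Hypothetical stand-in for a real asynchronous database query.
    private static Task<object> FindOrderAsync(int id) =>
        Task.FromResult<object>(id > 0 ? new { Id = id, Status = "Shipped" } : null);
}
```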
ASP.NET Core is my choice for building enterprise web applications, mainly because of the flexibility that comes from it being cross-platform. It starts with the tooling available for developing ASP.NET Core applications: Visual Studio, or Visual Studio Code, on Windows or Mac operating systems, and even on Linux. Within an enterprise, you will have people with different roles working on an enterprise application, and the wide tooling available just makes it convenient to cater to a diverse group of project members.

ASP.NET Core has such a vibrant community, and it is always allowed to give its input. The fact that it is open source actually paves the way for faster improvements and applicability across industries. Apart from the development environment, when ASP.NET Core applications are ready to be deployed into production, you can do so internally in your organization, or with just about any worthwhile cloud hosting service provider, including Azure and AWS. (Read chapter 3 of my book for more details on creating a continuous integration pipeline with Azure DevOps.)

It’s easy for ASP.NET Core to interact with applications developed on other, external tech stacks, and typically an enterprise application will need to talk to several other applications. I’m personally excited by the fact that a future version of the .NET Core runtime that ASP.NET Core runs on, to be called .NET 5, is slated to have more interoperability with other languages like Java, Objective-C, and Swift. Many more advantages of using ASP.NET Core come to mind, and we could take the whole day discussing them, but to cut a long story short, ASP.NET Core will not disappoint you, and it is moving fast in terms of improving where it is lacking.

Recently, Microsoft announced that .NET Core will support the open-source platform Pulumi for building modern cloud applications. This aims to help developers declare cloud infrastructure, including all of Azure, such as Kubernetes and Cosmos DB, using any .NET language, like C#, VB.NET, and F#. To what extent and how do you think Pulumi with .NET will help developers?

There are those who are not so familiar or comfortable navigating cloud infrastructure, or who just can’t be bothered to learn something new; instead of getting out of their comfort zone, which is within the code base, they can declare everything through code, for example, resource groups and everything else that makes up the cloud infrastructure. Pulumi just makes everything a bit easier: it abstracts everything away and replaces the need to use different tools to create our cloud infrastructure, for example, coming up with JSON or YAML files, or with a cloud Domain-Specific Language (DSL). Instead of all that, we can just declare it using the language that we already know as developers. It will definitely be handy.

On ASP.NET Core’s longevity and future design directions

At the NDC conference held recently, Ryan Nowak, a Microsoft developer and architect on ASP.NET Core, shared the details of many future projects like Bedrock, Houdini, and SMALL FAST.NET Server. The common goal of these projects is to simplify cross-platform compatibility among different environments. How do you think these projects will help in shaping the future design directions of .NET 5 and ensure the longevity of ASP.NET Core?
.NET 5 is already in the process of being put together, with full knowledge of what is happening around project Bedrock, project Houdini, and the SMALL FAST.NET server. What I can personally see from project Bedrock is that, starting at the lowest layer, .NET Sockets will gain prominence in dealing with network I/O at the expense of Libuv, which was borrowed from NodeJS for its cross-platform capabilities. Obviously .NET Sockets will learn a thing or two from how Libuv has been operating and implement the lessons learned so that it works seamlessly with .NET technologies, and .NET 5 will stand to benefit a lot from the improvements. I personally see .NET 5 being influenced to cater for more protocols like MQTT, AMQP, HTTP/3, and QUIC, and I wouldn’t be surprised to see a bit more interoperability with other programming languages in .NET 5.

ASP.NET Core is here to stay, as it is designed to work exclusively on the .NET Core runtime, which is transitioning into .NET 5 soon. I can see a lot of improvements in ASP.NET Core 3.0, especially in taking a bit more responsibility off the MVC framework and onto ASP.NET Core as a platform. This allows functionality to be reused across different frameworks like SignalR, gRPC services, Blazor, Controllers, and Pages. This is already happening, as is evident in the use of endpoint routing, which caters for all of what I call the big five frameworks of project Houdini, mentioned above. Taking responsibility away from MVC into a lower layer actually makes it more lightweight and developer-friendly, and it does not kill MVC, which, as you can see, is pretty much still alive. All this restructuring makes the platform more flexible in dealing with the kind of future change that is characteristic of different platforms, and more ready to become truly cross-platform.

About the Book

Get your hands on the book ‘Learn ASP.NET Core 3.0, Second Edition’ by Packt Publishing to become highly efficient in developing and maintaining powerful web applications. It will also guide you on how to deploy and monitor your applications using Microsoft Azure, AWS, and Docker. The book takes you through a realistic, practical ASP.NET Core MVC application, giving you a feel for how it would work in a real-life scenario.

About the Author

Kenneth Y. Fukizi is a solutions architect, consultant, software developer, and engineer with more than 14 years of professional experience. He is a Microsoft Certified Trainer®, Microsoft Certified Solutions Developer®, Microsoft Certified Solutions Associate®, and Microsoft Certified Professional®, among other professional and technical certifications. Kenneth also lectures and mentors computer science degree students in programming. He has spent most of his professional life working as a software engineering contractor/consultant on various projects for client organizations based in South Africa, Australia, the U.S.A., and Canada.

.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Inspecting APIs in ASP.NET Core [Tutorial]
How to call an Azure function from an ASP.NET Core MVC application


Kenneth Fukizi on the new Blazor framework, gRPC support, and other exciting features in ASP.NET Core 3.0

Vincy Davis
30 Dec 2019
9 min read
The open-source framework ASP.NET Core is one of the most popular web frameworks, developed by Microsoft and its community. The modular framework runs on both the full .NET Framework, on Windows, and the cross-platform .NET Core. One of the most captivating features of this framework is its performance: it is not only faster than other web frameworks but is also perfectly suited for Docker containers.

In November, Microsoft released the new version of ASP.NET Core with many exciting features. To understand the advances in ASP.NET Core more closely, we interviewed Kenneth Y. Fukizi, the author of the book ‘Learn ASP.NET Core 3.0, Second Edition’. Along with ASP.NET Core, Kenneth has also shared his thoughts on the latest version of .NET Core and the now-available Entity Framework Core 3.0 and Entity Framework 6.3. He says that he is personally most excited about the new Blazor framework, as it allows him to avoid JavaScript. He also opines that Blazor will give developers a chance to specialize in Microsoft technologies. Kenneth adds that ASP.NET Core users should look forward to the high performance and scalability that comes with gRPC in this new release.

Kenneth’s take on the features in ASP.NET Core 3.0

In your book, ‘Learn ASP.NET Core 3.0, Second Edition’, you say that ‘Model View Controller’ and ‘Entity Framework Core 3’ are the most widely used frameworks in ASP.NET Core. Can you elaborate on this? How do these frameworks enhance web application performance?

Pretty much every worthwhile application out there will need some interface for user interaction and will persist some form of data. The Model View Controller is almost a default choice for many web application developers, mainly for its tried and tested versatility and its ability to separate concerns, allowing front-end developers to work on the views while back-end developers are busy with their part. This allows for fast application development.

When persisting data, Entity Framework Core 3.0 is often the choice of most application developers on ASP.NET Core, because it is a safer option than a horde of other OR/M engines, since it is developed and maintained by Microsoft itself. Even though some time back it was found wanting in some areas compared to versatile OR/Ms like NHibernate, Entity Framework Core has addressed its gaps quickly and continues to grow effectively in areas where it was behind. Its seamless integration with the rest of the framework makes it an easy choice for many application developers.

ASP.NET Core MVC supports asynchronous calls, thereby releasing unnecessarily held resources and in turn increasing application performance. Since everything is logically separated into groups, in other words, with high cohesion and low coupling, it increases application performance on the whole. As for the advantages it gives to the application developer, there are many, and most of them allow a developer to have code that does not repeat itself unnecessarily and has low coupling, allowing everything to be tested individually. In general, this makes it easier for an application to have efficient code and, in turn, makes the application more performant. (Read chapters 4 and 5 of my book for an understanding of all the basic concepts of ASP.NET Core 3.0.)

ASP.NET Core 3.0 has introduced a new framework called Blazor for building interactive client-side web UI with .NET. What are your thoughts on it?

This is definitely a game-changer.
I’m personally excited that I have an option to use Blazor instead of JavaScript. It gives a developer a chance to specialize in Microsoft technologies and be truly full-stack, all within the MS tech stack. For a typical back-end C# developer who is familiar with C# syntax, type safety, and the environment in general, it becomes easier to work with one language across the stack, whether backend or frontend. Client-side Blazor, with its Ahead-Of-Time (AOT) compilation directly to WebAssembly, will make client applications super fast, and it’s something to definitely look out for. (Chapter 6 of my book gives a detailed explanation of Blazor.)

The latest release of ASP.NET Core also supports gRPC and ships with templates and tools for building gRPC services. What are the advantages and disadvantages that gRPC will offer to ASP.NET Core 3 users? Also, what are your thoughts on the gRPC built-in security features?

ASP.NET Core users should be looking forward to the high performance and scalability that comes with gRPC. With gRPC, a contract-first approach to API development is possible, and documentation for projects will become easier, making it easier for different teams on the same project to communicate, with consistent API models as well. Reusability will be enhanced and encouraged by the gRPC templates. For applications using microservices, it will be advantageous in terms of security and performance to use gRPC mainly for communication, since gRPC uses protocol buffers, as opposed to REST services, which need to be exposed to users in JSON or XML via HTTP. The HTTP/2 protocol that gRPC uses is also more efficient than its counterpart, HTTP/1.1. With the gRPC templates, we are able to have messages that flow in a bi-directional way, increasing efficiency, and in case of an event, we can cancel sent requests.

There are indeed some disadvantages to using the gRPC templates, including the fact that it now becomes a duty to maintain the specification files, which are an integral part of the template. If you have different teams, you will have to agree on specifications before you even start to implement anything, and that can slow down application development.

It is great to see that gRPC has visible built-in considerations for TLS/SSL and that it makes sure that all its communications are first authenticated and encrypted. This goes a long way in not only preventing attacks but also acting as a deterrent to intended attacks on application services.

Have you had a chance to explore .NET Core 3.0, and the now available Entity Framework Core 3.0 and Entity Framework 6.3? What are you most excited about in this new release?

Yes, I have managed to explore .NET Core 3.0, and admittedly with great pleasure. Some of the most exciting features I have seen are the introduction of support for Cosmos DB and C# 8, and that in itself makes a whole lot of development easier for global applications. LINQ has always been quite handy to me, and I am sure to many developers, but there have been scenarios where some queries have not been as efficient as I would want them to be. To hear and see that it has been re-architected in many ways to produce more efficient queries is absolutely exciting, and I would love to uncover more of its query capabilities that are slated for improvement in the future. Nullable reference types, introduced in both C# 8 and Entity Framework Core 3.0, will make my life simpler as a developer.
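For readers who have not yet tried them, here is a minimal sketch (a hypothetical console demo, not from the interview) of the two C# 8 features Kenneth highlights: nullable reference types, which let the compiler warn about possible null dereferences, and async streams, which allow await foreach over an IAsyncEnumerable<T>.

```csharp
#nullable enable
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class CSharp8Demo
{
    static async Task Main()
    {
        string? maybeName = null;                  // explicitly nullable reference
        // Console.WriteLine(maybeName.Length);    // compiler warning: may be null
        Console.WriteLine(maybeName?.Length ?? 0); // safe, null-aware access

        // Async stream: elements are produced asynchronously and consumed
        // with await foreach, without blocking the calling thread.
        await foreach (var n in CountAsync(3))
        {
            Console.WriteLine(n);
        }
    }

    static async IAsyncEnumerable<int> CountAsync(int upTo)
    {
        for (var i = 1; i <= upTo; i++)
        {
            await Task.Delay(100); // simulate asynchronous work per element
            yield return i;
        }
    }
}
```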
I’m also excited that Entity Framework 6.3 is the first version of Entity Framework 6 able to run on .NET Core; this will make it simpler to migrate older applications that were using Entity Framework 6 onto the .NET Core platform. (Chapter 9 of the book gives details about accessing data using Entity Framework Core 3.)

We also asked Kenneth about his preferred way of improving the speed and rendering performance of any web application.

Bundling combines multiple files into a single file, while minification reduces the size of JavaScript or CSS files by removing white space and commented code and performing a variety of code optimizations, without altering any functionality, thus resulting in smaller payloads. When combined, the two can improve the load-time performance of an application by reducing the number of requests to the server and reducing the size of the requested assets.

My personal way of doing things, when I need to use some functionality from a package, is first to look internally within the development framework. Only when an implementation of that functionality cannot be found, or when it is evidently deficient for the task at hand, do I go outside of the primary provider, Microsoft. Therefore, it is only natural that I prefer to use the out-of-the-box solution provided by both MVC and Razor Pages, which makes use of bundleconfig.json. I have at times gone for the sophistication of Grunt and Webpack, but since they are relatively more complex, the built-in functionality is naturally the first option, as long as it doesn’t horribly fail.

About the Book

‘Learn ASP.NET Core 3.0, Second Edition’ will help you become highly efficient in developing and maintaining powerful web applications. It will also guide you in deploying and monitoring your applications using Microsoft Azure, AWS, and Docker.

About the Author

Kenneth Y. Fukizi is a solutions architect, consultant, software developer, and engineer with more than 14 years of professional experience. He is a Microsoft Certified Trainer®, Microsoft Certified Solutions Developer®, Microsoft Certified Solutions Associate®, and Microsoft Certified Professional®, among other professional and technical certifications. Kenneth also lectures and mentors computer science degree students in programming. He has spent most of his professional life working as a software engineering contractor/consultant on various projects for client organizations based in South Africa, Australia, the U.S.A., and Canada.

.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Inspecting APIs in ASP.NET Core [Tutorial]
An introduction to TypeScript types for ASP.NET Core [Tutorial]


Gabriel Baptista on how to build high-performance software architecture systems with C# and .Net Core

Vincy Davis
11 Dec 2019
10 min read
A software architecture refers to the fundamental structure of a software system, serving as a blueprint for managing the system's complexity and as a coordination mechanism among the various components of the software. One popular combination of tools for building sustainable software architecture solutions is the general-purpose C# programming language and the open-source .NET Core framework. This year, C# and .NET Core brought in some exciting features to help developers design high-performance software systems.

To understand how C# and .NET Core aid in building software architecture systems, we interviewed Gabriel Baptista, one of the authors of the book ‘Hands-On Software Architecture with C# 8 and .NET Core 3’. Gabriel is a software architect, a specialist in Azure PaaS solutions, and the co-founder of a startup that develops mobile applications. According to Gabriel, the new features in C# 8, like async streams and nullable reference types, are good for detecting errors quickly and maintaining high-quality code, respectively. When asked to compare Visual Studio Code and Visual Studio for C# development, Gabriel insists that the productivity offered by Visual Studio makes it the best choice for C#. He is also of the opinion that the Microsoft-developed C# has a better roadmap than Java.

On the applications of a microservice architecture and how .NET and C# enable code reusability

In your book, ‘Hands-On Software Architecture with C# 8 and .NET Core 3’, you have demonstrated how microservice architecture can be applied to an enterprise application, such as microservice logging. Apart from the use cases in your book, what other applications can microservice architecture be used for?

Microservices are being applied in a bunch of scenarios due to the facilities they bring, like enabling different programming languages in different teams for the same enterprise app. Transversal aspects of software, like the logging example in the book, and security, are quite simple to think about as microservices. However, the complexity increases when you think about functional requirements, like customer management, logistics, or inventory; this is where it gets a bit confusing. That is where Domain-Driven Design will help you, since DDD is about the construction of a unique domain model, keeping the views as separate models. This is helpful because you will be able to create a domain characterized by the language spoken by the experts, which is what we call the Bounded Context principle of DDD. Now, think about each of these domains as a microservice. This will surely facilitate your understanding of how to organize them. You can read Chapter 5 of my book to learn how to apply a microservice architecture to your enterprise application.

You also say in your book that code reusability is one of the most important features in software architecture. How does the .NET Standard help in managing and maintaining a reusable library? Also, how does C# enable code reuse?

Code reuse is for sure what differentiates the velocity of development between two great companies: the one that reuses more is certainly faster and more profitable. .NET enables you to reuse code across many platforms by defining the .NET Standard as the core of a class library. With .NET Standard, you can write a class library that runs on Windows, Linux, and Android, for a desktop app, a mobile app, an Azure Function, and a web app! This is amazing!
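As a minimal sketch of the reuse Gabriel describes (a hypothetical library, not from the book), the class below contains only platform-independent logic, so once compiled into a .NET Standard class library it can be referenced unchanged from desktop, mobile, Azure Functions, and web projects:

```csharp
namespace Shared.Validation
{
    // Pure logic with no platform-specific dependencies travels everywhere
    // the .NET Standard contract is supported.
    public static class EmailRules
    {
        public static bool LooksValid(string address) =>
            !string.IsNullOrWhiteSpace(address) && address.Contains("@");
    }
}
```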
Besides, .NET itself offers many opportunities for code reuse by giving us dozens of ready-made classes in its framework. To finish, it is good to remember that C# is an object-oriented programming language, which enables the principles of abstraction, polymorphism, inheritance, and encapsulation, all of which are really useful for code reuse. Check out Chapter 11 of my book to learn how to create reusable libraries.

One of the main tasks for a developer is to choose a suitable architecture that will provide the desired functionality to the software. With the many varieties of software architectural patterns available today, how should a user approach them and choose the best one? What aspects should they look at when comparing software architectures?

When you need to choose a suitable architecture for a system, my first recommendation is to start the process with a specific goal: keep it simple. The more complex your architecture, the worse the path you are going down. If you stop and think a bit about the most complex solutions we have nowadays, you will find something in common and interesting in all of them: they are made of many small, simpler parts. Thanks to the cloud and the bunch of APIs we have nowadays, you can design really simple solutions focused on your business.

Gabriel’s views on the latest advancements in C# 8

In its latest release, C# 8 brings features like async streams, nullable reference types, and new indices/ranges. What were you most excited about in this release and why? How do you think C# 8 will help in improving the overall quality of the delivered software?

I am almost sure that NullReferenceException is one of the main reasons why C# apps crash. So, when it comes to improving quality, nullable reference types will certainly help a lot, since today null reference problems are not detected at compilation time. With this feature, you will be able to catch such errors at compile time, and the theory of software development says that the earlier you get a bug solved, the better and cheaper. Next, I believe that async programming is amazing for making your apps work more seamlessly, since it mimics the behavior of classical synchronous code while keeping most of the performance advantages of general parallel programming. For this reason, async streams will be a good opportunity, since we will be able to get the advantages of async programming in foreach loops, enabling push programming in this kind of loop. For instance, we will be able to program an asynchronous data pull that will not block the client.

Entity Framework Core 3.0 and Entity Framework 6.3 are now generally available with C# 8. How do you think EF Core 3.0 and EF 6.3 can take advantage of the new features in C# 8?

Well, the two features that I most enjoyed are the ones that EF Core and EF 6.3 have implemented too: nullable reference types and async streams. Reducing bugs by eliminating unexpected null references is always good! The possibilities offered by async streams together with EF Core are great, so with them EF will be even more powerful. Another feature worth knowing about is that they now support connecting to Cosmos DB. Read Chapter 6 of my book to understand the interactions of data in C# using Entity Framework Core.

In your opinion, is C# a better programming language than Java? Which language do you think has a better future, C# or Java?

As a software architect, you need to understand that programming languages evolve.
In other words, the programming language itself is not the most important part; the fundamentals are the essence of the process of building systems. With this approach in mind, I cannot say that one language is better than the other. The best programming language is the one that will give you the best result in the fastest time with the team you are working with. What I can say about C# and Java is that both were, are, and will continue to be incredibly important to the evolution of software. Right now, I consider that C# has a better roadmap than Java. The reason I believe this is that Microsoft is always ahead of other companies when it comes to productivity.

On why Visual Studio is the best option for C# development

Why do most C# developers prefer Visual Studio? Can you elaborate on how VS Code differs from other source code editors? How difficult is it to develop C# applications using Visual Studio Code?

To me, Visual Studio is the most powerful development environment we have for programming nowadays. You can write code on so many platforms and for so many different solutions, with an incredible debugging environment, connectivity to the cloud, and the facility to manage your code with whatever version control system you decide to use. With Visual Studio you have the opportunity to start any project related to C#, and even more, it gives you the ability to debug your different projects in many ways. For instance, debugging threads or Windows services is not easy, but with VS we find different ways to do so, which in the end accelerates development. The best answer that I always give to someone who asks me why Visual Studio is: productivity. I really don’t think C# developers prefer Visual Studio Code. VS Code is really useful if you are running an OS other than Windows or if you are writing code in other programming languages like NodeJS. However, when it comes to C# development, Visual Studio is certainly more powerful.

Gabriel on learning curves and best practices for beginners

You are a software architect with experience working on diverse projects for retail and industry. How much does the role differ between industries and sectors? What does the learning curve look like for beginners aiming to become experts in building enterprise applications with the .NET stack?

The role itself does not change across sectors. Time-to-market, performance, security, reliability, and quality are requirements that any customer will ask for, no matter their size, no matter the sector they work in. The learning curve starts with understanding the principles of .NET and C#, that is, the object-oriented principles. Any developer needs to understand the process of creating software, and software engineering will give them this background. Finally, I am totally sure that a person who wants to be in the development world of the 21st century needs to understand cloud computing, especially PaaS (Platform as a Service), and in this world, Azure is the best at giving the results the sectors need.

Can you suggest some best practices that every developer should follow for safe and maintainable code in C#?

Yes, developers should be vigilant about the following (see the sketch after this list):

Never leave a catch statement blank.
Do not write big methods. Methods need to have a single responsibility.
Every time you are not sure whether there is an already-written class for the code you are working on, first try to find it. Chances are that you already have this done.
No matter the number of developers on your team, even if your team is only you, write code as simply as you can.
Threads are great if you really know what you are doing, so before implementing them, study the topic a lot.
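Here is a minimal sketch (hypothetical types, not from the book) of the first practice above: instead of a blank catch that silently swallows failures, record the context and rethrow so the caller can react.

```csharp
using System;

class PayloadProcessor
{
    public void Process(string payload)
    {
        try
        {
            Handle(payload);
        }
        catch (FormatException ex)
        {
            // Bad: catch (FormatException) { }  -- the failure disappears.
            // Better: log what went wrong, then rethrow.
            Console.Error.WriteLine($"Invalid payload '{payload}': {ex.Message}");
            throw; // rethrowing with 'throw;' preserves the original stack trace
        }
    }

    // Stand-in for real work that can throw.
    private static void Handle(string payload) => _ = int.Parse(payload);
}
```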
If you want to develop highly scalable, enterprise-ready apps that meet customers’ business needs, read Gabriel’s book ‘Hands-On Software Architecture with C# 8 and .NET Core 3’. This software architecture book takes a hands-on approach to the various architectural methods that will help you deliver high-quality products.

About the Author

Gabriel Baptista is a software architect in the R&D department of Toledo do Brasil, where he leads a team that delivers weighing-solutions software to retail and industry customers. Gabriel is a specialist in Azure PaaS solutions. He is also a professor at the Salvador Arena Foundation Educational Center in its Computing Engineering college course, where he is responsible for the disciplines of Programming Language and Software Architecture. You can find him on LinkedIn.

You can now use WebAssembly from .NET with Wasmtime!
Exploring .NET Core 3.0 components with Mark J. Price, a Microsoft specialist
Microsoft announces .NET Jupyter Notebooks
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Introducing ‘ixy’, a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others


Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]

Fatema Patrawala
14 Nov 2019
12 min read
As per a recent IDC study, the forecast for new jobs demanding Salesforce skills shows a huge surge from last year: the demand is set to create 3.3 million jobs in the Salesforce ecosystem by 2022. Additionally, Indeed’s top 10 best jobs include two Salesforce-specific roles: Salesforce Administrator, ranking 4th, and Salesforce Developer, in 6th place. Though Salesforce admins are not developers, they create easy-to-use dashboards, intelligent workflows, and applications for any project. They keep Salesforce users happy and business processes smart, hence they are in high demand. Companies, especially in the US, know the potential and value Salesforce admins bring and are making serious human capital investments.

We recently interviewed Enrico Murru, a Solution and Technical Architect at a platinum Salesforce partner and a Salesforce MVP, to discuss the Salesforce ecosystem, his journey to becoming a Salesforce expert, the various certifications for Salesforce admins, and how they enhance careers. Enrico is the author of the latest edition of our book, the Salesforce Advanced Administrator Certification Guide. The guide goes beyond the administrator certification, covering advanced platform features and functions such as configuration, automation, security, and customization. It is packed with exam-oriented questions and mock tests to help you earn advanced administrator credentials.

On the Salesforce ecosystem and Enrico’s journey to becoming a Salesforce MVP

As per recent 10K Advisors research, the Salesforce ecosystem is innovating faster than the talent can keep pace. This has resulted in great career opportunities but has also introduced challenges for Salesforce end users. How is Salesforce dealing with these challenges? How can administrators and developers leverage growth opportunities in Salesforce?

When I started working with Salesforce about 10 years ago, I had never heard about the Salesforce ecosystem in my life: honestly, Italy was not a hot market at that time, which is why my (small at the time) company had a chance to work with big customers...we were among the few Salesforce system integrators in our whole country, after all. About 4 to 5 years ago things changed dramatically and Italy finally aligned with the rest of the world: Salesforce was in high demand among all kinds of companies (small or huge, no difference). The Italian market is one of the fastest growing; we started growing more and more due to the increasing number of customers joining us, but we also started suffering from a lack of professionals. We built an internal academy, but it wasn’t enough; we still needed (and currently need) more developers, administrators, and business analysts. The demand has exceeded the supply!

The amount of freely accessible documentation is huge: the Salesforce Ohana has produced tons of content with blogs, webinars, and tens of books. When Salesforce delivered Trailhead to the world, we all got a boost in training: learning Salesforce became even easier! It is no surprise that the number of people getting certified has increased drastically, and it’s not uncommon now to see people with 5, 10, or 20 certifications in their career backpack: you don’t need to spend hours and hours with your head in a book; now you can learn 15 minutes at a time, whenever you are free between your working tasks. This is a HUGE revolution: learn a bit, often, and you keep yourself always on the trail, for free!
From now on, anyone can become a Salesforce trailblazer and start building their trail: a lot of people have decided to change jobs and dive into the Salesforce world with little to no experience in computer science. However, when it’s time to get a certification, especially your first certification, Trailhead is not enough: you need some real-world experience (no amount of Trailhead can prepare you enough; experience is an amazing fuel for increasing your overall knowledge). A book can be a good compromise, boosting your knowledge while giving you the right measure of experience that the author has poured into each topic, and that’s why I chose to start this amazing trail with Packt: I wanted to do something I had never done before (writing a book) while giving the Ohana more chances to pass a certification...I guess this is a win-win situation!

How did you start your journey of becoming a Salesforce expert? Did being a Java developer help you in some way? What motivated you to make the choice?

Good question, and the answer is that I have to thank the randomness that we encounter daily in our lives (call it destiny, if you prefer). I started working as a Java developer (I came from an Electronic Engineering MSc) for a small company in my local town (Cagliari, Italy). After a while I got bored with what I was doing (boredom is a fuel for me), so I decided to move to Ireland. The day after I landed in Cork, I immediately got a new job with a great income (compared to what I was earning in Italy)...but I was not 100% sure I wanted to move abroad, and that’s why I rejected that position and went back to Italy (some say it was an act of cowardice; I partially agree, but I was not ready to change my life so much at that time). Just 2 months after my return home, my boss told me about a new opportunity: moving to northern Italy to join WebResults, a small company (we were just 15 people, including the CEO and CTO) that worked with something called "Salesforce". I accepted the challenge and moved with my spouse-to-be to WebResults headquarters for 6 months: I discovered the world of Salesforce and immediately fell in love with it. In a few weeks I learned all I needed to start my journey as a Salesforce developer.

Years later, I’m still working with WebResults (which has in the meantime been acquired by Engineering Spa, the greatest Italian consulting company) as a Salesforce Solution and Technical Architect (unfortunately, the amount of time I spend coding at work has dramatically dropped), and with the honor of the Salesforce MVP title I try to evangelize my company and all my Salesforce Ohana buddies any way I can! So if you ask me whether my Java dev position helped me get where I am, the answer is "definitely yes", but there is a lot more to the story!

On various Salesforce certifications and why he wrote a book

There are many certifications available for beginners as well as for experienced CRM developers. How should one go about choosing them? How do different Salesforce certification programs enhance a developer’s career?
If you want to start your journey with Salesforce, you have to choose primarily among the following paths (more details at https://trailhead.salesforce.com/credentials/administratoroverview, but you can build your own trail!):

Administrator
Developer
Marketer
Consultant
Architect

In my experience, any aspiring Salesforce consultant should start from the basics, even a skilled business analyst with 20 years of experience: you need to know how the Salesforce Lightning Platform works, and the best way is to get your hands dirty. Whether you want to start as an administrator or a developer, I always recommend building administrator skills at the beginning: a good developer should be a good administrator as well! As far as the Marketer and Consultant paths are concerned, they are more related to your knowledge of specific products of the platform, such as Marketing Cloud, Pardot, Field Service, Community Cloud, Einstein Analytics, and many others. The Architect path brings you to the Mount Olympus of all certifications, the Technical Architect certification, which every Salesforce trailblazer aspires to get one day (and I’m one of them).

Some think that owning a Salesforce certification doesn’t necessarily indicate proficiency in the technologies involved, but I do not agree with them. When I attempted the Salesforce Advanced Administrator exam, I really thought I had the required skills to pass, but I failed...why? Because I hadn’t studied some of the topics, and I wasn’t that skilled in those topics either (you’ll read this story in the book as well). That’s why I needed hours of study to pass the exam, and thanks to that deep study I learned new Salesforce features and increased my proficiency in features I had never actually used, making me the "most skilled" guy in my company regarding Omni-Channel and Salesforce Knowledge. This is an absolute win for both you and your company: certifications are meant to make you a trailblazer. Needless to say, headhunters really love Salesforce certifications (my 20 certifications attract tens of contact requests on my social channels).

Your book, the Salesforce Advanced Administrator Certification Guide, promises to give administrators a deeper knowledge of advanced Salesforce features. Why should one read this book? How is it different from the other Salesforce certification guides on the market?

First, I want to say that the Salesforce Advanced Administrator certification is a bit mistreated by administrators (as far as I’ve seen in my career): it is usually considered too hard or too complex for the skills you earn..."After all, I’m already an administrator; why should I become an advanced administrator?" You should, my friend: the amount you learn is really huge. You’ll keep playing with features such as Lightning Knowledge, Omni-Channel, Live Chat, and Lightning Content, features that maybe you’ve never used before, while exploring in depth the world of Salesforce automation with Process Builder, Lightning Flows, Entitlements, and Approvals, and learning everything related to security and the sharing of records (and much, much more).

Why should you choose this book? It extensively covers all the topics required for the Salesforce Advanced Administrator certification, keeping in mind the requirements of the exam as well.
While the number of topics is too large for us to cover everything about each one, you’ll get a good amount of knowledge, suggestions, and external references to ensure you reach the Salesforce Advanced Administrator certification goal.

On the challenges faced by Salesforce administrators

What are some of the challenges faced by Salesforce administrators today? How is Salesforce as a platform helping to overcome these challenges? Can Salesforce administrators become developers too, and vice versa? What is next for Salesforce?

The biggest challenge that Salesforce admins face day after day is keeping pace with the extraordinary growth of the Salesforce ecosystem: new companies join the Lightning Platform and new features are delivered release after release. It is more than mandatory that consulting companies and, in general, IT divisions reserve a percentage of their employees’ time for continuous learning, to allow Salesforce admins and devs to stay on track with the changing environment. Learning is a cost, for sure; when you study you are not productive, but the benefit of a skilled and always up-to-date employee outweighs that cost. And I see no obstacles to administrators starting down the developer path as well: all they need is passion, curiosity, and patience. Rome wasn’t built in a day, and your developer skills won’t be either. Trailhead is the starting point for any career path, and I guess in the coming years we’ll see artificial intelligence absolutely stealing the show in the Salesforce world, so admins should be prepared for the revolution that is taking place year after year.

On making an impact in the Salesforce community

You have created highly popular Salesforce browser extensions like ORGanizer. Tell us how this came about. What does it take to build such successful products? Are you working on, or planning to work on, similar projects now?

I said that boredom is my fuel: when I get bored I usually start a new project or a new hobby, and the ORGanizer for Salesforce Chrome & Firefox extension (available at https://organizer.enree.co) is no different. It started as a personal project to ease my daily work on Salesforce projects, adding little features that could speed up my administrative and coding tasks while increasing my overall productivity. Then I thought, why not deliver this cool thing to my Salesforce Ohana? That’s when I believe the community took notice of me, and it remains one of the main reasons for my Salesforce MVP nomination. After the cool experience of writing a book, which is something that had been on my checklist since I was a child, I have a few side projects related to Salesforce with some trailblazer friends that I believe will have a great impact on the Ohana. And, why not, perhaps another book in 2020?

Author Bio

Enrico Murru is a Solution and Technical Architect at WebResults (an engineering company), an Italian platinum Salesforce partner and Independent Software Vendor (ISV). He completed his MSc in Electronic Engineering at the University of Cagliari in 2007. In 2013, he launched a blog named Nerd @ Work. In 2016, he was nominated the first Italian Salesforce MVP for his commitment to the Salesforce community. Over the course of 3 years, Murru gained 20 Salesforce certifications, including the Salesforce Technical Architect certification. In 2016, he started one of his most popular projects, the ORGanizer for Salesforce Chrome and Firefox extension.
You can follow him on Twitter @Enreeco, LinkedIn, GitHub, and the Trailblazer Community, as well as on his personal blog.

Are you planning to embark on the journey of becoming a Salesforce Advanced Administrator? Confused by the various Salesforce certification programs and unsure what to choose? Grab this book right now! The Salesforce Advanced Administrator Certification Guide will help you master data access security, monitoring and auditing, and best practices for handling change management and data across organizations.

What makes Salesforce Lightning Platform a powerful, fast and intuitive user interface
What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help
Salesforce is buying Tableau in a $15.7 billion all-stock deal
Salesforce’s open sourcing Centrifuge: A library for accelerating JVM restarts
Build a custom Admin Home page in Salesforce CRM Lightning Experience


Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]

Bhagyashree R
09 Jul 2019
11 min read
Around this time in 2015, the W3C introduced WebAssembly, a small binary format that promises to bring near-native performance to the web. Since then it has been well received by web developers, with some going as far as to say that the "death of JavaScript is near." It is also supported in all the major browsers, including Firefox, Chrome, Safari, and Edge. While WebAssembly was initially designed with the web in mind, it would be a waste not to take its performance and security benefits beyond the web as well. This year we are seeing many initiatives pushing WebAssembly beyond the web. One of them is by Fastly, an edge cloud platform provider. Earlier this year, Fastly open sourced its WebAssembly compiler and runtime, named Lucet. With Lucet, Fastly’s edge cloud can execute tens of thousands of WebAssembly programs simultaneously.

We had a great opportunity to interview Fastly’s CTO Tyler McMullen, who gave us insight into why and how they came up with Lucet, what sets it apart from other WebAssembly compilers, the inner workings and design decisions behind Lucet, and more. Here are some of the highlights from the interview:

Benefits of WebAssembly beyond the Web

It is exciting to think that we will be able to experience near-native performance on the web. But WebAssembly also aims to address another major concern of our times: security.

"WebAssembly was designed for performance, and also for security. WebAssembly programs carry much stronger security guarantees than native code, with comparable performance. That makes it a great candidate for the edge cloud, where we can use the Lucet compiler and runtime to execute WebAssembly programs in isolation from each other, at a much lower resource and performance cost than competing approaches to multi-tenant isolation of native code, like processes, containers, or virtual machines."

Along with these security and performance benefits, growing support for WebAssembly in compilers like LLVM (since its version 8 release) also makes it suitable for non-web environments. McMullen adds:

"Besides security, the other aspect that makes WebAssembly attractive beyond the browser is maturing support by compilers, most notably the LLVM toolchain, used by the Clang C compiler and Rust language compiler, among others. Rather than having to build a new language, or a new compiler, to emit code with the security guarantees we need, we can use the WebAssembly output of any compiler. And it means that tons of existing programs can be compiled to WebAssembly with minimal modification."

How Lucet ensures security

With security being one of the major focus areas of Lucet, we asked McMullen how security in Lucet works.

"WebAssembly provides a set of guarantees about the security and safety of the code that can be verified during compilation. But those guarantees only hold if verification and compilation are done correctly. Those guarantees also require the runtime to cooperate. So there are a lot of moving pieces here that need to work in concert with each other. Lucet takes a security-by-contract approach to this problem. The compilation phase builds up a set of constraints for the runtime. Those constraints get embedded into the compiled artifact. The runtime then picks up those constraints and enforces them while loading and running the module. This lets us enforce things like which functions a module will be allowed to import for the embedding program, how much memory it will attempt to use, as well as the layout of that memory."
So, the security guarantees that Lucet provides end up being enforced with a combination of the compiler, runtime, and the embedding program.”

(A toy sketch of this security-by-contract idea appears at the end of this piece.)

Compilation in Lucet

Lucet is designed to compile code written in C/Rust to WebAssembly and then compile that to native code. So, why can’t we compile C/Rust directly to native code? McMullen explains that going through WebAssembly is what provides guarantees about the behavior of the generated code. “If you used a typical C or Rust compiler you’d have relatively little in the way of guarantees about the behavior of the generated code. With Rust you’d have a bit more in that you could guarantee memory safety, but that’s not sufficient by itself. On the other hand, we could certainly create a new C or Rust compiler that guaranteed all the safety guarantees we’ve already discussed, but that would be a tremendous amount of work and would require still more work for each language you wanted to safely compile. We chose WebAssembly because it provides many of the safety and performance guarantees we’re looking for and -- just as importantly -- also has community support. Rather than reinventing the wheel over and over again, we as a community can work together toward a common goal.”

Lucet is still in its early stages of development. McMullen shares what the Lucet team is up to now: “Prior to open sourcing Lucet, we focused on WebAssembly programs emitted by a couple of compilers - LLVM via Clang and Rustc, and AssemblyScript. Supporting that subset of WebAssembly was sufficient to launch Terrarium late last year, where users can create complex web services that are compiled and deployed on demand. Since the Lucet announcement, we’ve seen interest and contributions from other languages, including Swift, Golang, Zig, and Wam. We’ve fixed a bunch of the spec compliance issues that blocked these users, and are actively working on fixing the remaining ones now.”

To support, or not to support JavaScript, that is the question

When building WebAssembly runtimes today, developers have two paths to choose from: supporting JavaScript or not. Lucet takes the latter path, which keeps it simple and performant. "Security and resource consumption also drove our design here. Modern, fast JavaScript engines are quite complex, require lots of RAM, startup time, and -- in order to make them fast -- highly advanced JIT compilers. These requirements run counter to what Fastly does. By dropping JavaScript, we can dramatically reduce the complexity and increase the performance of our system. To be clear, reducing complexity isn’t just about making life easier on ourselves. By cutting out the massive complexity of JavaScript we can also reduce the attack surface and increase confidence in our safety guarantees."

Among the myriad WebAssembly runtimes, what sets Lucet apart

There are currently quite a few WebAssembly runtimes (for instance, Nebulet, Wasmjit, and Life), including some very similar to Lucet, like Wasmer and Wasmtime. We were curious to know what differences Lucet brings to the table. “Lucet was designed from the ground up for multi-tenant, highly concurrent use cases, which matches the runtime requirements of Fastly’s edge cloud. The major design decisions that differentiate it are all focused on performance and resource consumption in our use case, where we need to launch WebAssembly instances for each request our edge cloud handles. Adam Foltzer, a senior software engineer at Fastly, wrote a detailed post on our design and benchmarked its performance here.
Lucet shares a major component with the Wasmtime runtime: the Cranelift code generation engine. Wasmtime is currently designed for a single-tenant use case, and supports in-process compilation of WebAssembly, often called JIT. We are collaborating with the maintainers of Wasmtime on Cranelift, and on runtime implementations of the WebAssembly System Interface (WASI).”

Why Fastly chose Rust for implementing Lucet

Given Rust’s memory and thread safety guarantees, its supportive community, and its quickly evolving toolchain, many major projects are being written or rewritten in Rust. One of them is Servo, an experimental browser engine whose components are making their way into Firefox; Mozilla is also using Rust to rewrite many key parts of Firefox under Project Quantum. More recently, Facebook chose Rust to implement its controversial Libra blockchain. Fastly’s decision to choose Rust as Lucet’s implementation language was likewise focused on safety: “As for why we chose to write Lucet in Rust, the biggest reason was again safety. Writing compilers is complex work. Rust lets us take much of that complexity, describe it with types, and let the Rust compiler check our work in much deeper ways than other languages allow. It lets us focus on the problem we’re trying to solve, rather than the incidental issues of complex software.”

Fastly on the future of Rust and WebAssembly

In the past few years, Fastly has been focusing on Rust and WebAssembly. McMullen believes these technologies will be central to the future and will impact key domains in tech. While Rust enables developers to write highly efficient yet safe code, WebAssembly gives you the flexibility of writing code in the language of your choice and running it on the platform of your choice.

“With our role in the internet, efficiency is of utmost importance. That’s why, traditionally, the type of software we build has been done with lower level languages like C and C++. We still, today, write and maintain quite a bit of software in C. There are some problems where C is still the correct option. That domain of C -- and to a lesser extent, processor-specific assembly code -- has been largely unassailable for decades as we’ve developed languages that make writing software faster and easier, but at the cost of efficiency. That’s been a great detriment to the entire industry because of how easy it is to write unsafe C code. We believe that Rust has finally been the language to change that. It allows us to write highly efficient code while also providing incredible safety.

Now, WebAssembly. WebAssembly has the potential to provide something that we’ve never, in the history of computing, managed to accomplish: a common platform. It was designed to run in a browser, but manages to provide the other components that are needed: efficiency, safety, and platform-independence. We imagine a future in which a WebAssembly module can be run in a browser, on your watch, on your phone, on your TV, in the games you play, and inside server software. We’re still a ways off from that and many pieces are still needed. Lucet is our attempt at providing a WebAssembly compiler and runtime that is made to be used across many different use cases.
The first one is Fastly’s edge, but we want to see many more.”

Fastly on its other products and projects

Limitations in the legacy CDNs that Fastly’s edge cloud platform addresses

A CDN, or Content Delivery Network, is a geographically distributed group of servers that work together to ensure that content requested by a user reaches them as fast as possible. Legacy CDNs, however, have many limitations, such as bulky XML-based configuration files and specifications. McMullen adds, “Legacy CDNs suffer from a number of technical limitations that make them particularly ill-equipped to address changing consumer expectations, not to mention, developer and enterprise requirements. We’ve all had those online experiences when a site crashes or is non-responsive when we need it most, and our mission is to fuel the next modern digital experience, an experience that’s fast, secure, and reliable. By and large, traditional CDNs are black box solutions that are limited in their ability to provide real-time visibility and control, largely as a result of their outdated architecture, which adds cost and limits developers’ flexibility to expand on functionality.”

Fastly’s edge cloud platform, by contrast, aims to address these limitations by bringing data closer to the user. “As a result, developers have not been truly empowered to pursue digital transformations, despite many attempts for improvement within the industry,” he adds.

What other Fastly projects we should look forward to

Fastly is continuously contributing towards making the internet better and safer by getting involved in projects like QUIC, Encrypted SNI, and standardizing WASI. Last year Fastly made three of its projects available on Fastly Labs: Terrarium, Fiddle, and Insights. When asked what else it is working on, McMullen shared, “Fastly Labs is heavily dependent on experimentation. If the experiment goes well and we think it’ll be useful for others, then we release it. We have quite a few experiments currently underway, and many of them are around the items listed in the question: ESNI, QUIC, WASI, as well as others like DNS-over-HTTPS. More iteration on what we have now is also in the cards. Lucet has come a long way, but it still has so much room to grow. Expect to see some pretty compelling developments in performance, safety, and features there.”

Follow Tyler McMullen on Twitter: @tbmcmullen

Learn more about Fastly and its edge-cloud platform at Fastly’s official website.

Fastly open sources Lucet, a native WebAssembly compiler and runtime
Fastly, edge cloud platform, files for IPO
Rust’s original creator, Graydon Hoare on the current state of system programming and safety
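The security-by-contract approach McMullen describes is easy to miss in prose, so here is a purely illustrative sketch of the idea in Python. This is not Lucet's real API (Lucet is implemented in Rust), and every name below is hypothetical; the sketch only shows the shape of the contract: the compiler records constraints in the artifact, and the runtime refuses to load a module whose constraints it cannot honor.

```python
# A toy, purely illustrative sketch of "security-by-contract" -- NOT
# Lucet's real API (Lucet is implemented in Rust); all names are made up.
from dataclasses import dataclass

@dataclass(frozen=True)
class Constraints:
    """Limits a compiler could derive and embed into the artifact."""
    max_memory_pages: int
    allowed_imports: frozenset

@dataclass
class Artifact:
    """Compiled code bundled together with its security contract."""
    code: bytes
    constraints: Constraints

def compile_module(code: bytes, imports: set, pages: int) -> Artifact:
    # A real compiler would verify the module while deriving these
    # limits; here we simply record them alongside the code.
    return Artifact(code, Constraints(pages, frozenset(imports)))

class Runtime:
    """Loads an artifact only if its embedded contract can be honored."""

    def __init__(self, provided_imports: set, memory_budget_pages: int):
        self.provided = frozenset(provided_imports)
        self.budget = memory_budget_pages

    def load(self, artifact: Artifact) -> None:
        c = artifact.constraints
        if not c.allowed_imports <= self.provided:
            raise PermissionError("module imports a function the embedder does not expose")
        if c.max_memory_pages > self.budget:
            raise MemoryError("module wants more memory than this tenant's budget")
        # ...instantiate and run the module in isolation...

# The runtime admits a module whose contract fits the tenant's budget:
artifact = compile_module(b"\x00asm...", imports={"log"}, pages=64)
Runtime(provided_imports={"log"}, memory_budget_pages=128).load(artifact)
```

The point of the design is that enforcement does not depend on trusting the module: the embedder's runtime checks the embedded contract before any guest code runs.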
“Everybody can benefit from adopting Odoo, whether you’re a small start-up or a giant tech company” - An interview with Odoo community hero, Yenthe Van Ginneken

Sugandha Lahoti
29 Jun 2018
9 min read
Odoo is one of the fastest-growing open source business application development software products available. It comes with:

- a powerful GUI
- performance optimization
- integrated in-app purchase features
- a fast-growing community to transform and modernize businesses

We recently interviewed Yenthe Van Ginneken, an Odoo developer who is highly active in the Odoo community and the recipient of the Odoo best contributor of the year 2016 and Odoo community hero 2017 awards. He spoke to us about his journey with Odoo, his thoughts on Odoo’s past, present, and future, and the Odoo community.

Expert's Bio

Yenthe Van Ginneken, currently the technical team leader at Odoo Experts, has been an Odoo developer for over four years. He has won two awards: “Best contributor of the year 2016” and the “Odoo community hero” award in 2017. He loves improving software and teaching other people best practices for Odoo development on his blog. You can read his Odoo blog, follow him on Twitter, or reach out to him on LinkedIn.

Key Takeaways

- Odoo is scalable and flexible to the extent that everyone, from a small start-up to a giant tech company, can benefit from it.
- It is ahead of quite a lot of ERP systems with its clean UI, advanced module integration, and the flexibility of its technical framework.
- Python is the language of choice among most developers who want to use the Odoo framework, especially for automating and scaling tasks.
- The Odoo community is diverse and vast. By contributing and regularly interacting with other members, you will gain deeper insights into many different aspects of Odoo development.
- A great way to learn to develop in Odoo and quickly grow is actually by helping in the community.
- Odoo 12 will reportedly bring improved data processing, better report insights, and support for OCR (Optical Character Recognition) for handling documents, among other exciting updates.

Full Interview

On who should use Odoo

Odoo is more than an ERP tool. According to you, what is Odoo? Who will benefit from adopting Odoo? What made you choose Odoo?

For me, Odoo is more than an ERP. Odoo literally allows me to make any module or functionality that I can think of. Since Odoo is so flexible and scalable, I believe that almost everybody can benefit from adopting it, whether you’re a small start-up or a giant tech company. The most important part of benefiting from Odoo is adjusting the processes and mindset to use Odoo, not adjusting Odoo for the company. The projects that work best and deliver the most benefit are those that don’t over-engineer and instead focus on the main company processes.

I personally chose Odoo after I got an opportunity to become an Odoo developer at a company in Belgium. After the job offer I visited Odoo.com and saw the massive amount of functionality in Odoo (while being free!) and I was genuinely amazed. After looking at the technical framework and all the default options provided by the framework, I was sure that I would love to develop and implement in Odoo. Since that day I never stopped working with Odoo.

On the journey from OpenERP to Odoo

Odoo started off as OpenERP and then in 2014, it moved beyond just ERP and was renamed Odoo. How has Odoo’s journey been since then? What do you think are the key milestones achieved by Odoo to date?

Since the renaming from OpenERP to Odoo, the company has seen rapid growth. Shortly after changing the name, Odoo also introduced the enterprise version, which was, in my opinion, the turning point for Odoo S.A.
It allowed Odoo to keep its open source strength and market share while also gathering funds to support the ongoing growth of the product. The big investments being made in the research and development team allow them to keep improving year after year. The main strengths and key milestones of Odoo are absolutely its flexibility, its great framework, and the fact that most of the possibilities are already in Odoo by default.

On the drive behind contributing to the Odoo community

You are highly active in the Odoo community. How did you get into contributing to Odoo? How has this experience improved you as a developer? According to you, what are the key challenges the Odoo community is currently facing?

My very first contributions started in the second half of 2014 and weren’t very significant at first. I noticed that Odoo 8, at that point the newest version, was not very well translated and had a lot of inconsistencies, so I started translating it into Dutch. From there on I noticed that it could have quite a big impact and in fact improve the ERP. It didn’t take long before I started contributing in other ways: reporting issues, fixing bugs, maintaining bug reports, and helping other people on the official help forums. By contributing to all these different subjects I got introduced to more domains and gained more insights.

Thanks to my involvement with the community, I’ve learned that there is more than one side to developing and implementing projects. I believe it made me a better programmer and made me think a lot more about ways to code custom development for projects. Without being active in a community and contributing, you’ll be limited by your own perspective. It is a great way to get challenged, and you’ll see more cases by being active in the community than you could ever see on your own.

The Odoo community faces a few challenges at this point. It is difficult to maintain the right balance between the enterprise version and the community (free) version. There are not a lot of very active contributors to the official Odoo code, and Odoo is behind on handling fixes and bug reports made by community members. This results in some community members not feeling appreciated or heard. Hiring a second community manager might be a good way to resolve these issues, though. The most difficult challenge for both Odoo and the Odoo community is to make everybody feel heard and to give every person the ability to contribute in the way he or she can. When there is enough help from Odoo and the community feels supported, there is the possibility of a great and thriving community.

On how to learn Odoo effectively

As a person with a strong hold on Odoo development, what is the typical learning curve for someone getting into Odoo as a consultant? What is the best way to start developing in Odoo? What should one watch out for while learning?

The learning curve can be quite long and has its challenges. Usually, if you don’t have any experience with Odoo and only know basic Python, it’ll take about six months before you really get to know the ins and outs of Odoo. The best way to learn to develop in Odoo is probably the same as with most things in technology: dive in! Make sure you get the basics right and understand how the main functionalities work before going deeper. A great way to learn to develop in Odoo and to quickly grow is actually by helping in the community. You can get insight and help from experienced developers while also contributing to the community; it’s a win-win.
Start small and build your way up to the details. It is important to find good documentation and tutorials, though. At the moment there are still quite a few blog posts and tutorials of rather low quality. Because of this I actually started writing my own tutorials, which explain concepts step by step with samples. You can find them at https://odoo.yenthevg.com

Editor’s note: Check out our collection of Odoo Books and Videos to master Odoo development.

On the upcoming Odoo 12 release

Odoo 12 is expected to be released later this year. What’s got you excited about this new release?

Quite a lot! Every release announces loads of new features, and it’s an exciting time, every time. The introduction of a report designer for functional people is one of the best (known) new features. The improved reporting tools for data insight will be a great improvement too. The biggest announcements are made at Odoo Experience in October and are not publicly available yet, so we’ll have to wait for that.

On the future of ERP

There is a lot happening in the area of ERP and BI: self-service analytics, real-time analytics, agile BI development, etc. Where do you foresee the ERP market headed? We’ve seen ERP/CRM systems getting powerful built-in analytics; what do you think is next for the industry? What is Odoo’s role here?

As with any sector in IT, a lot is becoming very data-driven. In the future, the integration and usage of data will only grow. I expect the combination of BI and AI to become a powerful way to process and handle data on unseen scales. Odoo itself has already hinted at improved data processing, better report insights, and support for OCR (Optical Character Recognition) for handling documents. Odoo has been ahead of quite a lot of ERP systems with its clean UI, advanced module integration, and the flexibility of its technical framework for years. I expect Odoo will also lead the way in handling all this data and getting important statistics out of it. I’m quite sure it is only a matter of time before Odoo starts working on even better BI reporting and tools.

On Python and automation

Automation is everywhere today and is becoming an integral part of organizations and processes. Python and automation have gone hand in hand since Python’s early days. Today Python is one of the top programming languages. How do you see Python’s evolution over the years in the area of automation? What are the top ways you use Python for automation today?

There is a reason Python is so popular. It is flexible, quite quick to program with, and the options are virtually endless. In the coming years Python will only become more popular, and this will also be the case for automation projects made with Python. I personally use the Odoo framework with Python as a backbone for nearly everything that I automate (and in fact also for non-automated tasks). The projects vary from automatically handling stock moves to automatically updating remote instances to automatically getting full diagnostic reports. The combination of the programming language and the framework from Odoo allows me to automate tasks and deploy them on a big scale. (Editor’s note: a minimal sketch of such a module follows after the links below.)

ERP tool in focus: Odoo 11
How to Scaffold a New module in Odoo 11
A step by step guide to creating Odoo Addon Modules
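To make the Odoo side of this concrete, here is a minimal sketch of a custom Odoo model, assuming a standard Odoo installation; the model, field, and method names are invented for illustration, while `odoo.models` and `odoo.fields` are the framework's ordinary ORM entry points.

```python
# A hypothetical custom module model; assumes Odoo's standard ORM imports.
from odoo import fields, models

class StockAutoMove(models.Model):
    """Invented model illustrating a small stock automation."""
    _name = 'demo.stock.auto.move'      # hypothetical model name
    _description = 'Automated Stock Move (demo)'

    name = fields.Char(required=True)
    quantity = fields.Float(default=0.0)
    state = fields.Selection(
        [('draft', 'Draft'), ('done', 'Done')], default='draft')

    def action_confirm(self):
        # Business logic lives in model methods; the ORM persists the
        # change, so no SQL is needed for a simple state transition.
        self.write({'state': 'done'})
```

Dropped into an addon with the usual manifest, a model like this gets its database table and CRUD API from the framework, which is a large part of why Odoo-based automation is so quick to build.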

Why functional programming in Python matters: Interview with best-selling author, Steven Lott

Aaron Lazar
17 May 2018
8 min read
Python is currently one of the most popular and desired programming languages, primarily for its simplicity, agility, and ability to be used across a variety of software development projects. Although an object-oriented language, Python supports an array of programming paradigms, including functional programming. Functional programming is a paradigm that treats computation as the evaluation of mathematical functions. It is quite advantageous, as it lends itself to efficient parallel programming and code with fewer side effects.

We recently interviewed Steven Lott, a true Python professional and best-selling author of a number of Python books. Steven talks a bit about modern Python and how the language adapts well to the functional paradigm, offering developers a range of solutions for building modern, cloud-based applications. He also talks about his most recent book, the second edition of his best seller, Functional Python Programming, and how it will benefit developers looking to enter the world of functional programming with Python.

About Steven

Steven F. Lott has been programming since the 70s, when computers were large, expensive, and rare. As a contract software developer and architect, he has worked on hundreds of projects, from very small to very large. He's been using Python to solve business problems for over 10 years. Steven is a technomad who lives in various places on the east coast of the U.S. Follow his technology blog to stay updated on the latest trends in tech. You may also connect with him on LinkedIn.

Key Takeaways

- Why learn Python? One of the major reasons developers appreciate Python is that it's simple and has the ability to create succinct and expressive programs. For example, data scientists prefer Python because they can build sophisticated analytical tools using simple functions and produce useful results.
- Why functional programming? The functional programming paradigm forms the perfect foundation for developers and architects to build and design modern architectures like serverless. But Python isn't inherently functional: although an object-oriented language at heart, Python can create higher-order functions and other functional features, and Python 3 has made this easier.
- Steven's Python 4 wish list: in future versions of Python, Steven hopes to see wider use of Unicode operator characters like × in addition to * for multiplication, and ÷ in addition to / for division. He also expects the PyPy and RPython projects to be more widely used; future Python versions will benefit from their optimizations and from restructuring the interpreter.
- One of the most helpful features of Python 3 is type hints, which allow one to write clear, implicitly documented code while preventing methods from being invoked with the wrong data types.
- Steven's latest edition of Functional Python Programming explores type hints in depth, along with core language features such as lambdas, generator expressions, functions, and callable objects.

Full Interview

Python is one of the top programming languages. List the top 3 features of Python that make programmers love it.

It seems like programmers love Python primarily because it allows them to create succinct, expressive programs. Before long, they discover the vast library of code, which is another reason for Python's immense popularity. Many people adopt Python because of the low barrier to entry: it really is as simple as download and start working.

You've been working with Python for over a decade. How has your experience been with Python as a primary development language?
Over my 40-year career, I've used a variety of languages, and I've found Python to be extremely productive. A team can build and deploy microservices-based applications at a tremendous pace. Data scientists can build sophisticated analytical tools using simple functions to produce useful results, without the overheads of complex compile and build environments.

Back when Python 3 just came into existence, we saw some resistance to the notion of Python being apt for functional programming. How would you say Python has progressed since then?

At its core, Python is an object-oriented language. Consequently, some functional features aren't central. One of the essential functional design tools -- creating higher-order functions -- has always been part of Python. The wider use of generator functions in Python 3 has made functional Python programming much more common. Many Python applications are hybrids, mixing object-oriented and functional features of the language. Now that type hints are available, it becomes practical to use mypy to confirm that the code is very likely to work properly.

For a developer who's picking up the functional programming paradigm for the first time, what do you think are the prerequisites?

Functional programming is closely aligned to the core mathematical ideas of functions and functional composition. As a consequence, a minimal background in programming could be advantageous to help leverage essential function definitions and avoid needless state change. For programmers already heavily invested in procedural programming, it may be helpful to set the idea of stateful objects aside.

In the above context, how does your book, Functional Python Programming, Second Edition, prepare its readers to be industry ready? What are the key takeaways for readers, and how does it help with the learning curve?

The examples in the book are related to exploratory data analysis, an important skill in the broader area of data science. They also focus on the standard library, allowing someone to apply the functional design approach to other libraries and tools. I think a focus on the core language features (e.g., lambdas, generator expressions, functions, callable objects) provides a foundation that allows a programmer to apply the core ideas more widely to different kinds of problems and other software packages.

What new and updated content is available in this edition for developers who've purchased your previous book?

Almost all of the examples have been rewritten to include type hints. This can be an important quality check, helping to ensure the Python code works. When used with doctest examples, it becomes relatively easy to provide reliable, correct code. In a few cases, external packages (e.g., the pymonad library) don't have type hints, and the examples reflect this gap.

Can you throw some light on functional reactive programming and how FRP with Python is boosting the implementation of modern architectures like cloud native and serverless?

The central idea of serverless programming -- a collection of isolated functions -- fits the functional programming paradigm very elegantly. The processing is generally stateless, with stand-alone functions waiting for their inputs. Ideas like "choreography" of web services work with this idea of stateless functions that respond to an input by producing an output. This leads to careful separation of persistence and state change from the other transformational processing.
This helps create software with easy-to-understand behavior and implementation code that's very expressive of the algorithm.

As Python inches towards a 4.0, what do you think should or can be changed in the language for the better?

At some point, I expect the PyPy and RPython projects to create some optimizations leading to a fundamental restructuring of the interpreter. Perhaps these changes could remove the need for the GIL (Global Interpreter Lock) by exposing a minimal kernel of code that requires exclusive access to internal data structures. Of more general interest, I'd hope to see wider use of Unicode operator characters like × in addition to * for multiplication, and ÷ in addition to / for division. Perhaps ∻ could be adopted for truncated division. This could also lead to the use of ℝ instead of float and ℤ instead of int, providing a more mathematical look to type hints. It would be nice to expand the use of the Unicode character set to create more readable programs.

List 3 reasons for developers to choose your book as the book of choice for functional programming in Python.

Programmers who want to create succinct and expressive code often find that functional design helps them achieve this goal, and the Functional Python Programming book provides extensive examples that show multiple ways to do so. In many cases, generator functions and lazy processing can be a large performance improvement: a generator function can use less memory than a large data collection, and this change can be helpful. The book provides a number of examples of lazy processing that avoid creating large in-memory collections. It can be difficult to get started with type hints, and the use cases in this book show type hints in somewhat more complex, real-world situations.

If you enjoyed this interview, head over to check out Steven's latest edition of Functional Python Programming. He is leveraging Python to implement microservices and ETL pipelines. His other titles with Packt Publishing include Python Essentials, Mastering Object-Oriented Python, Functional Python Programming, and Python for Secret Agents.

What is the difference between functional and object oriented programming?
Building functional programs with F#
Seven wrongs don't make the one right: Solving a problem with Functional Javascript
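As a flavor of the lazy, hint-driven style discussed in the interview, here is a short, self-contained sketch combining type hints, a higher-order function, and a lazy generator pipeline; the function names are ours, not drawn from the book's examples.

```python
from itertools import islice
from typing import Callable, Iterable, Iterator

def compose(f: Callable[[float], float],
            g: Callable[[float], float]) -> Callable[[float], float]:
    """Functional composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: f(g(x))

def running_total(values: Iterable[float]) -> Iterator[float]:
    """Lazily yield cumulative sums; the input never has to fit in memory."""
    total = 0.0
    for v in values:
        total += v
        yield total

scale_then_shift = compose(lambda x: x + 1.0, lambda x: x * 2.0)  # x -> 2x + 1

# A generator expression: nothing is computed until values are requested.
squares = (float(x * x) for x in range(1_000_000))

# Pull only the first five results through the whole lazy pipeline.
first_five = [scale_then_shift(t) for t in islice(running_total(squares), 5)]
print(first_five)  # [1.0, 3.0, 11.0, 29.0, 61.0]
```

Because everything above carries type hints, a checker such as mypy can validate the pipeline's types before the program ever runs, which is exactly the quality check the interview describes.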