
How-To Tutorials - Programming

1083 Articles

Mastering the API Life Cycle: A Comprehensive Guide to Design, Implementation, Release, and Maintenance

Bruno Pedro
06 Nov 2024
15 min read
This article is an excerpt from the book, "Building an API Product", by Bruno Pedro. Build cutting-edge API products confidently, excelling in today's competitive market with this comprehensive guide on API fundamentals, inner workings, and steps for successful API product development.

Introduction

The life of an API product consists of a series of stages. Those stages form a cycle that starts with the initial conception of the API product and ends with the retirement of the API. This sequence of stages is called a life cycle. This term started to gain popularity in software and product development in the 1980s. It's used as a common framework to align the different participants during the life of a software application or product. Each stage of the API life cycle has specific goals, deliverables, and activities that must be completed before advancing to the next stage. There are many variations on the concept of API life cycles. I use my own version to simplify learning and focus on what is essential. Over the years, I have distilled the API life cycle into four easy-to-understand stages. They are the design, implementation, release, and maintenance stages. Keep reading to gain an overview of what each of the stages looks like.

Figure 4.1 – The API life cycle

The goal of this chapter is to provide you with a global overview of what an API life cycle is. You will see each one of the stages of the API life cycle as a transition and not simply an isolated step. You will first learn about the design stage and understand how it's foundational to the success of an API product. Then, you'll continue on to the implementation stage, where you'll learn that a big part of an API server can be generated. After that, the chapter explores the release stage, where you'll learn the importance of finding the right distribution model. Finally, you'll understand the importance of versioning and sunsetting your API in the maintenance stage. After reading the chapter, you will understand and be able to recognize the API life cycle's different stages. You will understand how each API life cycle stage connects to the others. You will also know the participants and stakeholders of each stage of the API life cycle. Finally, you will know the most critical aspects of each stage of the API life cycle. In this article, you'll learn about the four stages of the API life cycle: design, implement, release, and maintain.

Design

The first stage of the API life cycle is where you decide what you will build. You can view the design stage as a series of steps where your view of what your API will become gets more refined and validated. At the end of the design stage, you will be able to confidently implement your API, knowing that it's aligned with the needs of your business and your customers. The steps I take in the design stage are as follows: ideation, strategy, definition, validation, and specification. These steps help me advance in holistically designing the API, involving as many different stakeholders as possible so I get a complete alignment. I usually start with a rough idea of what the ideal API would look like. Then I start asking different stakeholders as many questions as possible to understand whether my initial assumptions were correct. Something I always ask is why an API should be built. Even though it looks like a simple question, its answer can reveal the real intentions behind building the API. Also, the answer is different depending on whom you ask the question.
Your job is to synthesize the information you gather and document pieces of evidence that back up the decisions you make about the API design. You will, at this stage, interview as many stakeholders as possible. They can include potential API users, engineers who work with you, and your company’s leadership team. The goal is to find out why you’re building the API and to document it. Once you know why you’re building the API, you’ll learn what the API will look like to fit the needs of potential users. To learn what API users need, identify the personas you want to serve and then put yourself in their shoes. You’ve already seen a few proto-personas in Chapter 2. In this API life cycle stage, you draw from those generic personas and identify your API users. You then contact people representing your API user personas and interview them. During the interviews, you should understand their JTBDs, the challenges they face during their work, and the tools they use. From the information you obtain, you can infer the benefits they would get from the API you’re building and how they would use the API. This last piece of information is critical because it lets you define the architectural style of the API. By knowing what tools your user personas use daily, you can make an informed decision about the architectural style of your API. Architectural styles are how you identify the technology and type of communication that the API will use. For example, REST is one architectural style that lets API consumers interact with remote resources by executing one of the HTTP verbs. Among those verbs, there’s one that’s natively supported by web browsers—HTTP GET. So, if you identify that a user persona wants to use a web browser to consume your API, then you will want to follow the REST architectural style and limit it to HTTP GET. Otherwise, that user persona won’t be able to use your API directly from their tool of choice. Something else you’ll want to define is the capabilities your API will offer users. Defining capabilities is an exercise that combines the information you gathered from interviews. You translate JTBDs, benefits, and behaviors into a set of capabilities that your API will have. Ideally, those capabilities will cover all the needs of the users whom you interviewed. However, you might want to prioritize the capabilities according to their degree of urgency and the cost of implementation. In any case, you want to validate your assumptions before investing in actually implementing the API. Validation of your API design happens first at a high level, and after a positive review, you attempt a low-level validation. High-level validation involves sharing the definition of the architectural style and capabilities that you have created with the API stakeholders. You present your findings to the stakeholders, explain how you came up with the definitions, and then ask for their review. Sometimes the feedback will make you question your assumptions, and you must refine your definitions. Eventually, you will get to a point where the stakeholders are all aligned with what you think the API should be. At that point, you’re ready to attempt a low-level validation. The difference between a high-level and a low-level validation is the amount of detail you share with your stakeholders and how technical the feedback you expect needs to be. 
While in high-level validation, you mostly expect an opinion about the design of the API, in low-level validation, you actually want the stakeholders to test the API before you start building it. You do that by creating what is called an API mock server. It allows anyone to make real API requests to a server as if they were making requests to the real API. The mock server responds with data that is not real but has the same shape that the responses of the real API would have. Stakeholders can then test making requests to the mock server from their tools of choice to see how the API would work. You might need to make changes during this low-level validation process until the stakeholders are comfortable with how your API will work. After that, you're ready to translate the API design into a machine-readable definition document that will be used during the implementation stage of the API life cycle. The type of machine-readable definition depends on the architectural style identified earlier. If, for example, the architectural style is REST, then you'll create an OpenAPI document. Otherwise, you will work with the type of machine-readable definition most appropriate for the architectural style of the API. Once you have a machine-readable API definition, you're ready to advance to the implementation stage of the API life cycle.

Implementation

Having a machine-readable API definition is halfway to getting an entire API server up and running. I won't focus on any particular architectural style, so you can keep all options open at this point. The goal of the machine-readable definition is to make it easy to generate server code and configuration and give your API consumers a simple way to interact with your API. Some API server solutions require almost no coding as long as you have a machine-readable definition. One type of coding you'll need to do—or ask an engineer to do—is the code responsible for the business logic behind each API capability. While the API itself can be almost entirely generated, the logic behind each capability must be programmed and linked to the API. Usually, you'll start with a first version of your API server that can run locally and will be used to iteratively implement all the business logic behind each of the capabilities. Later, you'll make your API server publicly available to your API consumers. When I say publicly available, I mean that your API consumers should be able to securely make requests. One of the elements of security that you should think about is authentication. Many APIs are fully open to the public without requiring any type of authentication. However, when building an API product, you want to identify who your users are. Monetization is only possible if you know who is making requests to your API. Other security factors to consider have already been covered in Chapter 3. They include things such as logging, monitoring, and rate limiting. In any case, you should always test your API thoroughly during the implementation stage to make sure that everything is working according to plan. One type of test that is particularly useful at this stage is contract testing. This type of test aims to verify whether the API responses include the expected information in the expected format. The word contract is used to describe the API definition as something that both you—the API producers—and your consumers agree to.
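For reference, a machine-readable contract of the kind discussed here, an OpenAPI document for a REST-style API, might look like the following minimal sketch. The endpoint and fields are hypothetical placeholders, not taken from the book:

# Minimal, hypothetical OpenAPI 3.0 document describing one capability.
openapi: 3.0.3
info:
  title: Orders API
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Retrieve a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  status:
                    type: string
                  total:
                    type: number

Tools can generate documentation, mock servers, and server stubs from a document like this, which is what makes the definition so central to the stages that follow.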
By performing a contract test, you'll verify whether the implementation of the API has been done according to what has been designed and defined in the machine-readable document. For example, you can verify whether a particular capability is responding with the type of data that you defined. Before deploying your API to production, though, you want to be more thorough with your testing. Other types of tests that are well suited to be performed at this stage are functional and performance testing. Functional tests, in particular, can help you identify areas of the API that are not behaving as intended. Testing different elements of your API helps you increase its quality. Nevertheless, there's another activity that focuses on API quality and relies on tests to obtain insights. Quality assurance, or QA, is one type of activity where you test your API capabilities using different inputs and check whether the responses are the expected ones. QA can be performed manually or automatically by following a programmable script. Performing API QA has the advantage of improving the quality of your API, its overall user experience, and even the security of the product. Since a QA process can identify defects early on during the implementation stage of an API product, it can reduce the cost of fixing those defects compared to finding them when consumers are already using the API. While contract and functional tests provide information on how an API works, QA offers a broader perspective on how consumers experience the API. A QA process can be a part of the release process of your API and can determine whether the proposed changes have production quality.

Release

In software development, you can say that a release happens whenever you make your software available to users. Different release environments target different kinds of users. You can have a development environment that is mostly used to share your software with other developers and to make testing easy. There can also be a staging environment where the software is available to a broader audience, and QA testing can happen. Finally, there is a production environment where the software is made available generally to your customers. Releasing software—and API products—can be done manually or automatically. While manual releases work well for small projects, things can get more complicated if you have a large code base and a growing team working on the project. In those situations, you want to automate the release as much as possible with something called a build process. During implementation, you focus on developing your API and ensuring you have all tests in place. If those tests are all fully automated, you can make them run every time you try to release your API. Each build process can automatically run a series of steps, including packaging the software, making it available on a mock server, and running tests. If any of the build steps fail, you can consider that the whole build process failed, and the API isn't released. If the build process succeeds, you have a packaged API ready to be deployed into your environment of choice. Deploying the API means it will become available to any users with access to the environment where you're doing the release. You can either manage the deployment process yourself, including the servers where your API will run, or use one of the many available API gateway products. Either way, you'll want to have a layer of control between your users and your API.
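As a concrete illustration of the contract tests mentioned above, the kind of automated check a build step might run before a release, here is a minimal sketch in Python. The URL, endpoint, and expected fields are hypothetical placeholders, not the book's example:

# Minimal contract-style test: check that a capability responds with the
# agreed shape. Assumes the API is reachable at a staging URL (hypothetical).
import requests

BASE_URL = "https://staging.example.com/api"  # placeholder environment

def test_get_order_matches_contract():
    response = requests.get(f"{BASE_URL}/orders/1234", timeout=5)

    # The contract says this capability returns HTTP 200 with a JSON body.
    assert response.status_code == 200
    body = response.json()

    # Verify the response carries the agreed fields with the agreed types.
    assert isinstance(body["id"], str)
    assert isinstance(body["status"], str)
    assert isinstance(body["total"], (int, float))

A test like this can run as part of the build process described above (for example, with pytest), so a broken contract blocks the release instead of reaching consumers.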
If controlling how users interact with your API is important, knowing how your API is behaving is also fundamental. If you know how your API behaves, you can understand whether its behavior is affecting your users' experience. By anticipating how users can be negatively affected, you can proactively take measures and improve the quality of your API. Using an API monitor lets you periodically receive information about the behavior and quality of your API. You can understand whether any part of your API is not working as expected by using a solution such as a Postman Monitor. Different solutions let you gather information about API availability, response times, and error rates. If you want to go deeper and understand how the API server is performing, you can also use an Application Performance Monitor (APM). Services such as New Relic give you information about the performance and error rate of the server and the code that is running your API. Another area that you want to pay attention to during the release stage of the API life cycle is documentation. While you can have an API reference automatically built from your machine-readable definition, you'll want to pay attention to other aspects of documentation. As you've seen in Chapter 2, good API documentation is fundamental to obtaining a good user experience. In Chapter 3, you learned how documentation can enhance support and help users get answers to their questions when interacting with your API. Documentation also involves tutorials covering the JTBDs of the API user personas and clearly showing how consumers can interact with each API feature. To promote the whole API and the features you're releasing, you can make an announcement to your customers and the community. Announcing a release is a good idea because it raises the general public's awareness and helps users understand what has changed since the last release. Depending on the size of your company, your available marketing budget, and the importance of the release, you choose the media where you make the announcement. You could simply share the news on your blog, or go all the way and promote the new version of your API with a marketing campaign. Your goal is always to reach the existing users of your API and to make the news available to other potential users. Sharing news about your release is a way to increase the reach of your API. Another way is to distribute your API reference in existing API marketplaces that already have their own audience. Online marketplaces let you list your API so potential users can find it and start using it. There are vertical marketplaces that focus on specific sectors, such as healthcare or education. Other marketplaces are more generic and let you list any API. The elements you make available are usually your API reference, documentation, and pointers on signing up and starting to use the API. You can pick as many marketplaces as you like. Keep in mind that some of the existing solutions charge you for listing your API, so measure each marketplace as a distribution channel. You can measure how many users sign up and use your API across the marketplaces where your API is listed. Over time, you'll understand which marketplaces aren't worth keeping, and you can remove your API from those. This measurement is part of API analytics, one of the activities of the maintenance stage of the API life cycle. Keep reading to learn more about it.

Maintenance

You're now in the last stage of the API life cycle.
This is the stage where you make sure that your API is continuously running without disturbances. Of all the activities at this stage, the one where you'll spend the most time will be analyzing how users interact with your API. Analytics is where you understand who your users are, what they're doing, whether they're being successful, and if not, how you can help them succeed. The information you gather will help you identify features that you should keep, the ones that you should improve, and the ones that you should shut down. But analytics is not limited to usage. You can also obtain performance, security, and even business metrics. For example, with analytics, you can identify the customers who interact with the top features of your API and understand how much revenue is being generated. That information can tell you whether the investment in those top features is paying off. You can also understand what errors are the most common and which customers are having the most difficulties. Being able to do that allows you to proactively fix problems before users get in touch with your support team. Something to keep in mind is that there will be times when users will have difficulties working with your API. The issues can be related to your API server being slow or not working at all. There can be problems related to connectivity between some users and your API. Alternatively, individual users can have issues that only affect them. All these situations usually lead to customers contacting your support team. Having a support system in place is important because it increases the satisfaction of your users and their trust in your product. Without support, users will feel lost when they have difficulties. Worse, they'll share their problems publicly without you having a chance to help. One situation where support is particularly needed is when you need to release a new version of your API. Versioning happens whenever you introduce new features, fix existing ones, or deprecate some part of your API. Having a version helps your users know what they should expect when interacting with your API. Versioning also enables you to communicate and identify those changes in different categories. You can have minor bug fixes, new features, or breaking changes. All those can affect how customers use your API, and communicating them is essential to maintaining a good experience. Another aspect of versioning is the ability to keep several versions running. As the API producer, running more than one version can be helpful but can increase your costs. The advantage of having at least two versions is that you can roll back to the previous version if the current one is having issues. This is often considered a good practice. Knowing when to end the life of your entire API or some of its features is not a simple task, especially when there are customers using your API regularly. First of all, it's essential that you have a communication plan so your customers know in advance when your API will stop working. Things to mention in the communication plan include a timeline of the shutdown and any alternative options, if available, even from a competitor of yours. A second aspect to account for is ensuring the API sunset is done according to existing laws and regulations. Other elements include handling the retention of data processed or generated by usage of the API and continuing to monitor accesses to the API even after you shut it down.
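To make the sunset communication concrete, one lightweight way to signal an upcoming shutdown inside the API itself is the Sunset HTTP header defined in RFC 8594, usually paired with a link to a migration guide. The date and URL below are illustrative only:

HTTP/1.1 200 OK
Content-Type: application/json
Sunset: Tue, 30 Jun 2026 23:59:59 GMT
Link: <https://developer.example.com/migration-guide>; rel="sunset"

Clients and gateways can watch for this header and surface the deadline to users well before the final shutdown date in your communication plan.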
Conclusion

At this point, you know how to identify the different stages of the API life cycle and how they're all interconnected. You also understand which stakeholders participate at each stage of the API life cycle. You can describe the most important elements of each stage of the API life cycle and know why they must be considered to build a successful API product. You first learned about my simplified version of the API life cycle and its four stages. You then went into each of them, starting with the design stage. You learned how designing an API can affect its success. You understood the connection between user personas, their attributes, and the architectural type of the API that you're building. After that, you got to know what high and low-level design validations are and how they can help you reach a product-market fit. You then learned that having a machine-readable definition enables you to document your API but is also a shortcut to implementing its server and infrastructure. Afterward, you learned about contract testing and QA and how they connect to the implementation and release stages. You acquired knowledge about the different release environments and learned how they're used. You learned about distribution and API marketplaces and how to measure API usage and performance. Finally, you learned how to version and eventually shut down your API.

Author Bio

Bruno Pedro is a computer science professional with over 25 years of experience in the industry. Throughout his career, he has worked on a variety of projects, including Internet traffic analysis, API backends and integrations, and Web applications. He has also managed teams of developers and founded several companies, including tarpipe, an iPaaS, in 2008, and the API Changelog in 2015. In addition to his work experience, Bruno has also made contributions to the API industry through his written work, including two published books on API-related topics and numerous technical magazine and web articles. He has also been a speaker at numerous API industry conferences and events from 2013 to 2018.


Mastering Code Generation: Exploring the LLVM Backend

Kai Nacke, Amy Kwan
24 Oct 2024
10 min read
This article is an excerpt from the book, Learn LLVM 17 - Second Edition, by Kai Nacke, Amy Kwan. Learn how to build your own compiler, from reading the source to emitting optimized machine code. This book guides you through the JIT compilation framework, extending LLVM in a variety of ways, and using the right tools for troubleshooting.

Introduction

Generating optimized machine code is a critical task in the compilation process, and the LLVM backend plays a pivotal role in this transformation. The backend translates the LLVM Intermediate Representation (IR), derived from the Abstract Syntax Tree (AST), into machine code that can be executed on target architectures. Understanding how to generate this IR effectively is essential for leveraging LLVM's capabilities. This article delves into the intricacies of generating LLVM IR using a simple expression language example. We'll explore the necessary steps, from declaring library functions to implementing a code generation visitor, ensuring a comprehensive understanding of the LLVM backend's functionality.

Generating code with the LLVM backend

The task of the backend is to create optimized machine code from the LLVM IR of a module. The IR is the interface to the backend and can be created using a C++ interface or in textual form. Again, the IR is generated from the AST.

Textual representation of LLVM IR

Before trying to generate the LLVM IR, it should be clear what we want to generate. For our example expression language, the high-level plan is as follows:
1. Ask the user for the value of each variable.
2. Calculate the value of the expression.
3. Print the result.
To ask the user to provide a value for a variable and to print the result, two library functions are used: calc_read() and calc_write(). For the with a: 3*a expression, the generated IR is as follows:
1. The library functions must be declared, like in C. The syntax also resembles C. The type before the function name is the return type. The type names surrounded by parentheses are the argument types. The declaration can appear anywhere in the file: declare i32 @calc_read(ptr) declare void @calc_write(i32)
2. The calc_read() function takes the variable name as a parameter. The following construct defines a constant, holding a and the null byte used as a string terminator in C: @a.str = private constant [2 x i8] c"a\00"
3. Next comes the main() function. The parameter names are omitted because they are not used. Just as in C, the body of the function is enclosed in braces: define i32 @main(i32, ptr) {
4. Each basic block must have a label. Because this is the first basic block of the function, we name it entry: entry:
5. The calc_read() function is called to read the value for the a variable. The nested getelementptr instruction performs an index calculation to compute the pointer to the first element of the string constant. The function result is assigned to the unnamed %2 variable. %2 = call i32 @calc_read(ptr @a.str)
6. Next, the variable is multiplied by 3: %3 = mul nsw i32 3, %2
7. The result is printed on the console via a call to the calc_write() function: call void @calc_write(i32 %3)
8. Last, the main() function returns 0 to indicate a successful execution: ret i32 0 }
Each value in the LLVM IR is typed, with i32 denoting the 32-bit integer type and ptr denoting a pointer. Note: Previous versions of LLVM used typed pointers. For example, a pointer to a byte was expressed as i8* in LLVM. Since LLVM 16, opaque pointers are the default.
An opaque pointer is just a pointer to memory, without carrying any type information about it. The notation in LLVM IR is ptr. Since it is now clear what the IR looks like, let's generate it from the AST.

Generating the IR from the AST

The interface, provided in the CodeGen.h header file, is very small: #ifndef CODEGEN_H #define CODEGEN_H #include "AST.h" class CodeGen { public: void compile(AST *Tree); }; #endif
Because the AST contains the information, the basic idea is to use a visitor to walk the AST. The CodeGen.cpp file is implemented as follows:
1. The required includes are at the top of the file: #include "CodeGen.h" #include "llvm/ADT/StringMap.h" #include "llvm/IR/IRBuilder.h" #include "llvm/IR/LLVMContext.h" #include "llvm/Support/raw_ostream.h"
2. The namespace of the LLVM libraries is used for name lookups: using namespace llvm;
3. First, some private members are declared in the visitor. Each compilation unit is represented in LLVM by the Module class and the visitor has a pointer to the module called M. For easy IR generation, the Builder (of type IRBuilder<>) is used. LLVM has a class hierarchy to represent types in IR. You can look up the instances for basic types such as i32 from the LLVM context. These basic types are used very often. To avoid repeated lookups, we cache the needed type instances: VoidTy, Int32Ty, PtrTy, and Int32Zero. The V member is the current calculated value, which is updated through the tree traversal. And last, nameMap maps a variable name to the value returned from the calc_read() function: namespace { class ToIRVisitor : public ASTVisitor { Module *M; IRBuilder<> Builder; Type *VoidTy; Type *Int32Ty; PointerType *PtrTy; Constant *Int32Zero; Value *V; StringMap<Value *> nameMap;
4. The constructor initializes all members: public: ToIRVisitor(Module *M) : M(M), Builder(M->getContext()) { VoidTy = Type::getVoidTy(M->getContext()); Int32Ty = Type::getInt32Ty(M->getContext()); PtrTy = PointerType::getUnqual(M->getContext()); Int32Zero = ConstantInt::get(Int32Ty, 0, true); }
5. For each function, a FunctionType instance must be created. In C++ terminology, this is a function prototype. A function itself is defined with a Function instance. The run() method defines the main() function in the LLVM IR first: void run(AST *Tree) { FunctionType *MainFty = FunctionType::get( Int32Ty, {Int32Ty, PtrTy}, false); Function *MainFn = Function::Create( MainFty, GlobalValue::ExternalLinkage, "main", M);
6. Then we create the BB basic block with the entry label, and attach it to the IR builder: BasicBlock *BB = BasicBlock::Create(M->getContext(), "entry", MainFn); Builder.SetInsertPoint(BB);
7. With this preparation done, the tree traversal can begin: Tree->accept(*this);
8. After the tree traversal, the computed value is printed via a call to the calc_write() function. Again, a function prototype (an instance of FunctionType) has to be created. The only parameter is the current value, V: FunctionType *CalcWriteFnTy = FunctionType::get(VoidTy, {Int32Ty}, false); Function *CalcWriteFn = Function::Create( CalcWriteFnTy, GlobalValue::ExternalLinkage, "calc_write", M); Builder.CreateCall(CalcWriteFnTy, CalcWriteFn, {V});
9. The generation finishes by returning 0 from the main() function: Builder.CreateRet(Int32Zero); }
10. A WithDecl node holds the names of the declared variables. First, we create a function prototype for the calc_read() function: virtual void visit(WithDecl &Node) override { FunctionType *ReadFty = FunctionType::get(Int32Ty, {PtrTy}, false); Function *ReadFn = Function::Create( ReadFty, GlobalValue::ExternalLinkage, "calc_read", M);
11. The method loops through the variable names: for (auto I = Node.begin(), E = Node.end(); I != E; ++I) {
12. For each variable, a string with the variable name is created: StringRef Var = *I; Constant *StrText = ConstantDataArray::getString( M->getContext(), Var); GlobalVariable *Str = new GlobalVariable( *M, StrText->getType(), /*isConstant=*/true, GlobalValue::PrivateLinkage, StrText, Twine(Var).concat(".str"));
13. Then the IR code to call the calc_read() function is created. The string created in the previous step is passed as a parameter: CallInst *Call = Builder.CreateCall(ReadFty, ReadFn, {Str});
14. The returned value is stored in the nameMap map for later use: nameMap[Var] = Call; }
15. The tree traversal continues with the expression: Node.getExpr()->accept(*this); };
16. A Factor node is either a variable name or a number. For a variable name, the value is looked up in the nameMap map. For a number, the value is converted to an integer and turned into a constant value: virtual void visit(Factor &Node) override { if (Node.getKind() == Factor::Ident) { V = nameMap[Node.getVal()]; } else { int intval; Node.getVal().getAsInteger(10, intval); V = ConstantInt::get(Int32Ty, intval, true); } };
17. And last, for a BinaryOp node, the right calculation operation must be used: virtual void visit(BinaryOp &Node) override { Node.getLeft()->accept(*this); Value *Left = V; Node.getRight()->accept(*this); Value *Right = V; switch (Node.getOperator()) { case BinaryOp::Plus: V = Builder.CreateNSWAdd(Left, Right); break; case BinaryOp::Minus: V = Builder.CreateNSWSub(Left, Right); break; case BinaryOp::Mul: V = Builder.CreateNSWMul(Left, Right); break; case BinaryOp::Div: V = Builder.CreateSDiv(Left, Right); break; } }; }; }
18. With this, the visitor class is complete. The compile() method creates the global context and the module, runs the tree traversal, and dumps the generated IR to the console: void CodeGen::compile(AST *Tree) { LLVMContext Ctx; Module *M = new Module("calc.expr", Ctx); ToIRVisitor ToIR(M); ToIR.run(Tree); M->print(outs(), nullptr); }
We now have implemented the frontend of the compiler, from reading the source up to generating the IR. Of course, all these components must work together on user input, which is the task of the compiler driver. We also need to implement the functions needed at runtime. Both are topics of the next section covered in the book.

Conclusion

In conclusion, the process of generating LLVM IR from an AST involves multiple steps, each crucial for producing efficient machine code. This article highlighted the structure and components necessary for this task, including function declarations, basic block management, and tree traversal using a visitor pattern. By carefully managing these elements, developers can harness the power of LLVM to create optimized and reliable machine code. The integration of all these components, alongside user input and runtime functions, completes the frontend implementation of the compiler.
This sets the stage for the next phase, focusing on the compiler driver and runtime functions, ensuring seamless execution and integration of the compiled code.

Author Bio

Kai Nacke is a professional IT architect currently residing in Toronto, Canada. He holds a diploma in computer science from the Technical University of Dortmund, Germany, and his diploma thesis on universal hash functions was recognized as the best of the semester. With over 20 years of experience in the IT industry, Kai has extensive expertise in the development and architecture of business and enterprise applications. In his current role, he evolves an LLVM/clang-based compiler. For several years, Kai served as the maintainer of LDC, the LLVM-based D compiler. He is the author of D Web Development and Learn LLVM 12, both published by Packt. In the past, he was a speaker in the LLVM developer room at the Free and Open Source Software Developers' European Meeting (FOSDEM).

Amy Kwan is a compiler developer currently residing in Toronto, Canada. Originally from the Canadian prairies, Amy holds a Bachelor of Science in Computer Science from the University of Saskatchewan. In her current role, she leverages LLVM technology as a backend compiler developer. Previously, Amy has been a speaker at the LLVM Developer Conference in 2022 alongside Kai Nacke.


Learning Essential Linux Commands for Navigating the Shell Effectively 

Expert Network
16 Aug 2021
9 min read
Once we learn how to deploy an Ubuntu server, how to manage users, and how to manage software packages, we should take a moment to learn some important concepts and commands that will allow us to build more of the foundational knowledge that will serve us well while understanding the advanced concepts and treading the path of expertise. These foundational concepts include core Linux commands for navigating the shell. This article is an excerpt from the book, Mastering Ubuntu Server, Third Edition by Jeremy "Jay" La Croix – a hands-on book that will teach you how to deploy, maintain and troubleshoot Ubuntu Server.

Learning essential Linux commands

Building a solid competency on the command line is essential and effectively gives any system administrator or engineer superpowers. Our new abilities won't allow us to leap tall buildings in a single bound, but will definitely enable us to execute terminal commands as if we're ninjas. While we won't master the art of using the command line in this section (that can only come with years and experience), we will definitely become more confident. First, let's talk about moving from one place to another within the Linux filesystem. Specifically, by "Linux filesystem", I'm referring to the default structure of the various folders (also referred to as "directories") contained within your Ubuntu installation. The Linux filesystem contains many important directories, each with their own designated purpose, which we'll talk about in more detail in the book. Before we can explore that further, we'll need to learn how to navigate from one directory to another. The first command we'll cover in this section relative to navigating the filesystem will clarify the directory you're currently working from. For that, we have the pwd command.

The pwd command

pwd stands for print working directory, and shows you where you currently are in the filesystem. If you run it, you may see output such as this:

Figure 4.1: Viewing the current working directory

In this example, when I ran pwd, the output informed me that my current working directory is /home/jay. This is known as your home directory and, by default, every user has one. This is where all the files for your user account will reside by default. Sure, you can create files anywhere you'd like, even outside your home directory if you have permission to do so or you use sudo. But just because you can doesn't mean you should. As you'll learn in this article, the Linux filesystem has a designated place for just about everything. But your home directory, located at /home/<username>, is yours. You own it, you control it—it's your home on the server. In the early 2000s, Linux installations with a graphical user interface even depicted your home directory with an icon of a house. Typically, files that you create in your home directory will have a permission string similar to this:
-rw-rw-r-- 1 jay  jay      0 Jul  5 14:10 testfile.txt
You can see by default, files you create in your home directory are owned by your user, your group, and are readable by all three categories (user, group, and other).

The cd command

To change our current directory and navigate to another, we can use the cd command along with a path we'd like to move to:
cd /etc
Now, I haven't gone over the file and directory layout yet, so I just randomly picked the etc directory. The forward slash at the beginning designates the beginning of the filesystem. More on that later.
Now, we're in the /etc directory, and our command prompt has even changed as well:

Figure 4.2: Command prompt and pwd command after changing a directory

As you could probably guess, the cd command stands for change directory, and it's how you move your working directory from one to another while navigating around. You can use the following command, for example, to return back to the home directory:
cd /home/<user>
In fact, there are several ways to return home, a few of which are demonstrated in the following screenshot:

Figure 4.3: Other ways of navigating to the home directory

The first command, cd -, doesn't actually have anything to do with your home directory specifically. It's a neat trick to return you to whatever directory you were in most previously. For me, the cd - command took me to the previous directory I was just in, which just so happened to be /home/jay. The second command, cd /home/jay, took me directly to my home directory since I called out the entire path. The last command, cd ~, also took me to my home directory. This is because ~ is shorthand for the full path to your home directory, so you don't really ever have to type out the entire path to /home/<user>. You can just refer to that path simply as ~.

The ls command

Another essential command is ls. The ls command lists the contents of the current working directory. We probably don't have any contents in our home directory yet. But if we navigate to /etc by running cd /etc, as we did earlier, and then execute ls, we'll see that the /etc directory has a number of files in it. Go ahead and try it yourself and see:
cd /etc
ls
We didn't actually have to change our working directory to /etc just to list the contents. We could've just executed the following command:
ls /etc
Even better, we can run:
ls -l /etc
This gives us the contents in a long list, which I think is much easier to understand. It will show each directory or file entry on its own line, along with the permission string. But you're probably already familiar with ls as well as ls -l, so I won't go into too much more detail here. The -l portion of the ls command in that example is known as an argument. I'm not referring to an argument such as the ever-ensuing debate in the Linux community over which command-line text editor is the best between vim and emacs (it's clearly vim). Instead, I'm referring to the concept of an argument in shell commands that allow you to override the defaults, or feed options to the command in some way, such as in this example, where we format the output of ls to be in a long list.

The rm command

The rm command is another one that we touched on earlier, when we were discussing manually removing the home directory of a user that was removed from the system. So, at this point, you're probably well aware of that command and what it does (it removes files and directories). It's a potentially dangerous command, as you could use it to accidentally remove something that you shouldn't have. We used the following command to remove the home directory of user dscully:
rm -r /home/dscully
As you can see, we're using the -r argument to alter the behavior of the rm command, which, by default, doesn't remove directories but only files. The -r argument instructs rm to remove everything recursively, even if it's a directory. The -r argument will also remove subdirectories of the path as well, so you'll definitely want to be careful with this command.
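Since the original screenshots (Figures 4.1 to 4.3) don't reproduce here, a terminal session using the navigation commands covered above might look roughly like this; the prompt, paths, and output are illustrative only:

# Illustrative session only; your prompt, paths, and output will differ.
jay@ubuntu:~$ pwd
/home/jay
jay@ubuntu:~$ cd /etc
jay@ubuntu:/etc$ pwd
/etc
jay@ubuntu:/etc$ cd -          # jump back to the previous directory
/home/jay
jay@ubuntu:~$ ls -l /etc | head -n 3
total 832
drwxr-xr-x 3 root root 4096 Jul  5 14:02 acpi
-rw-r--r-- 1 root root 3028 Jul  5 14:02 adduser.conf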
As I've mentioned earlier in the book, if you use sudo with rm, you can hypothetically delete your entire Ubuntu installation! Another option offered by rm is the -f argument which is short for force, and it tells rm not to prompt before removing things. This argument won't be needed as often, and use cases for it are outside the scope of this article. But keep in mind that it exists, should you need it.

The touch command

Another foundational command that's good to know is touch, which actually serves two purposes. First, assuming you have permission to do so in your current working directory, the touch command will create an empty file if it doesn't already exist. Second, the touch command will update the modification time of a file or directory if it does already exist:

Figure 4.4: Experimenting with the touch command

To illustrate this, in the related screenshot, I ran several commands. First, I ran the following command to create an empty file:
touch testfile.txt
That file didn't exist before, so when I ran ls -l afterward, it showed the newly created file with a size of 0 bytes. Next, I ran the touch testfile.txt command again a minute later, and you can see in the screenshot that the modification time went from 15:12 to 15:13. When it comes to viewing the contents of a file, we'll get to that later on in the book, Mastering Ubuntu Server, Third Edition. And there are definitely more commands that we'll need to learn to build the basis of our foundation. But for now, let's take a break from the foundational concepts to understand the Linux filesystem layout better.

Summary

There are more Linux commands than you will ever be able to memorize. Most of us just memorize our favorite commands and variations of commands. You'll develop your own menu of these commands as you learn and expand your knowledge. In this article, we covered many of the foundational commands that are, for the most part, essential. Commands such as pwd, cd, ls, rm, and touch were explored this time around.

About the author

Jeremy "Jay" La Croix is a technologist and open-source enthusiast, specializing in Linux. Jay is currently the director of Cloud Services at Adaptavist. He has a net field experience of 20 years across different firms as a Solutions Architect and holds a master's degree in Information Systems Technology Management from Capella University. In addition, Jay also has an active Linux-focused YouTube channel with over 186K followers and 15.9M views, available at LearnLinux.tv, where he posts instructional tutorial videos and other Linux-related content.


Gain Practical Expertise with the Latest Edition of Software Architecture with C# 9 and .NET 5 

Expert Network
08 Jul 2021
3 min read
Software architecture is one of the most discussed topics in the software industry today, and its importance will certainly grow more in the future. But the speed at which new features are added to these software solutions keeps increasing, and new architectural opportunities keep emerging. To strengthen your command on this, Packt brings to you the Second Edition of Software Architecture with C# 9 and .NET 5 by Gabriel Baptista and Francesco Abbruzzese – a fully revised and expanded guide, featuring the latest features of .NET 5 and C# 9.

This book covers the most common design patterns and frameworks involved in modern cloud-based and distributed software architectures. It discusses when and how to use each pattern, by providing you with practical real-world scenarios. This book also presents techniques and processes such as DevOps, microservices, Kubernetes, continuous integration, and cloud computing, so that you can have a best-in-class software solution developed and delivered for your customers. This book will help you to understand the product that your customer wants from you. It will guide you to deliver and solve the biggest problems you can face during development. It also covers the do's and don'ts that you need to follow when you manage your application in a cloud-based environment. You will learn about different architectural approaches, such as layered architectures, service-oriented architecture, microservices, Single Page Applications, and cloud architecture, and understand how to apply them to specific business requirements.

Finally, you will deploy code in remote environments or on the cloud using Azure. All the concepts in this book will be explained with the help of real-world practical use cases where design principles make the difference when creating safe and robust applications. By the end of the book, you will be able to develop and deliver highly scalable and secure enterprise-ready applications that meet the end customers' business needs. It is worth mentioning that Software Architecture with C# 9 and .NET 5, Second Edition will not only cover the best practices that a software architect should follow for developing C# and .NET Core solutions, but it will also discuss all the environments that we need to master in order to develop a software product according to the latest trends.

This second edition is improved in code and adapted to the new opportunities offered by C# 9 and .NET 5. We added new frameworks and technologies such as gRPC and Blazor, and described Kubernetes in more detail in a dedicated chapter. To get the most out of this book, treat it as guidance that you may want to revisit many times for different circumstances. Do not forget to have Visual Studio Community 2019 or higher installed and be sure that you understand C# .NET principles.


Understanding the Foundation of Protocol-oriented Design

Expert Network
30 Jun 2021
7 min read
When Apple announced Swift 2 at the World Wide Developers Conference (WWDC) in 2015, they also declared that Swift was the world's first protocol-oriented programming (POP) language. From its name, we might assume that POP is all about protocol; however, that would be a wrong assumption. POP is about so much more than just protocol; it is actually a new way of not only writing applications but also thinking about programming. This article is an excerpt from the book Mastering Swift, 6th Edition by Jon Hoffman. In this article, we will discuss a protocol-oriented design and how we can use protocols and protocol extensions to replace superclasses. We will look at how to define animal types for a video game in a protocol-oriented way.

Requirements

When we develop applications, we usually have a set of requirements that we need to develop against. With that in mind, let's define the requirements for the animal types that we will be creating in this article: We will have three categories of animals: land, sea, and air. Animals may be members of multiple categories. For example, an alligator can be a member of both the land and sea categories. Animals may attack and/or move when they are on a tile that matches the categories they are in. Animals will start off with a certain number of hit points, and if those hit points reach 0 or less, then they will be considered dead.

POP Design

We will start off by looking at how we would design the animal types needed and the relationships between them. Figure 1 shows our protocol-oriented design:

Figure 1: Protocol-oriented design

In this design, we use three techniques: protocol inheritance, protocol composition, and protocol extensions.

Protocol inheritance

Protocol inheritance is where one protocol can inherit the requirements from one or more additional protocols. We can also inherit requirements from multiple protocols, whereas a class in Swift can have only one superclass. Protocol inheritance is extremely powerful because we can define several smaller protocols and mix/match them to create larger protocols. You will want to be careful not to create protocols that are too granular because they will become hard to maintain and manage.

Protocol composition

Protocol composition allows types to conform to more than one protocol. With protocol-oriented design, we are encouraged to create multiple smaller protocols with very specific requirements. Let's look at how protocol composition works. Protocol inheritance and composition are really powerful features but can also cause problems if used wrongly. Protocol composition and inheritance may not seem that powerful on their own; however, when we combine them with protocol extensions, we have a very powerful programming paradigm. Let's look at how powerful this paradigm is.

Protocol-oriented design — putting it all together

We will begin by writing the Animal superclass as a protocol:
protocol Animal { var hitPoints: Int { get set } }
In the Animal protocol, the only item that we are defining is the hitPoints property. If we were putting in all the requirements for an animal in a video game, this protocol would contain all the requirements that would be common to every animal. We only need to add the hitPoints property to this protocol. Next, we need to add an Animal protocol extension, which will contain the functionality that is common for all types that conform to the protocol.
Our Animal protocol extension would contain the following code: extension Animal { mutating func takeHit(amount: Int) { hitPoints -= amount } func hitPointsRemaining() -> Int { return hitPoints } func isAlive() -> Bool { return hitPoints > 0 ? true : false } } The Animal protocol extension contains the same takeHit(), hitPointsRemaining(), and isAlive() methods. Any type that conforms to the Animal protocol will automatically inherit these three methods. Now let's define our LandAnimal, SeaAnimal, and AirAnimal protocols. These protocols will define the requirements for the land, sea, and air animals respectively: protocol LandAnimal: Animal { var landAttack: Bool { get } var landMovement: Bool { get } func doLandAttack() func doLandMovement() } protocol SeaAnimal: Animal { var seaAttack: Bool { get } var seaMovement: Bool { get } func doSeaAttack() func doSeaMovement() } protocol AirAnimal: Animal { var airAttack: Bool { get } var airMovement: Bool { get } func doAirAttack() func doAirMovement() } These three protocols only contain the functionality needed for their particular type of animal. Each of these protocols only contains four lines of code. This makes our protocol design much easier to read and manage. The protocol design is also much safer because the functionalities for the various animal types are isolated in their own protocols rather than being embedded in a giant superclass. We are also able to avoid the use of flags to define the animal category and, instead, define the category of the animal by the protocols it conforms to. In a full design, we would probably need to add some protocol extensions for each of the animal types, but we do not need them for our example here. Now, let's look at how we would create our Lion and Alligator types using protocol-oriented design: struct Lion: LandAnimal { var hitPoints = 20 let landAttack = true let landMovement = true func doLandAttack() { print("Lion Attack") } func doLandMovement() { print("Lion Move") } } struct Alligator: LandAnimal, SeaAnimal { var hitPoints = 35 let landAttack = true let landMovement = true let seaAttack = true let seaMovement = true func doLandAttack() { print("Alligator Land Attack") } func doLandMovement() { print("Alligator Land Move") } func doSeaAttack() { print("Alligator Sea Attack") } func doSeaMovement() { print("Alligator Sea Move") } } Notice that we specify that the Lion type conforms to the LandAnimal protocol, while the Alligator type conforms to both the LandAnimal and SeaAnimal protocols. As we saw previously, having a single type that conforms to multiple protocols is called protocol composition and is what allows us to use smaller protocols, rather than one giant monolithic superclass. Both the Lion and Alligator types originate from the Animal protocol; therefore, they will inherit the functionality added with the Animal protocol extension. If our animal type protocols also had extensions, then they would also inherit the function added by those extensions. With protocol inheritance, composition, and extensions, our concrete types contain only the functionality needed by the particular animal types that they conform to. Since the Lion and Alligator types originate from the Animal protocol, we can use polymorphism. Let's look at how this works: var animals = [Animal]() animals.append(Alligator()) animals.append(Alligator()) animals.append(Lion()) for (index, animal) in animals.enumerated() { if let _ = animal as? AirAnimal { print("Animal at \(index) is Air") } if let _ = animal as?
LandAnimal { print("Animal at \(index) is Land") } if let _ = animal as? SeaAnimal { print("Animal at \(index) is Sea") } }
In this example, we create an array that will contain Animal types named animals. We then create two instances of the Alligator type and one instance of the Lion type that are added to the animals array. Finally, we use a for-in loop to loop through the array and print out the animal type based on the protocol that the instance conforms to. Upgrade your knowledge and become an expert in the latest version of the Swift programming language with Mastering Swift 5.3, 6th Edition by Jon Hoffman.

About the author

Jon Hoffman has over 25 years of experience in the field of information technology. He has worked in the areas of system administration, network administration, network security, application development, and architecture. Currently, Jon works as an Enterprise Software Manager for Syn-Tech Systems.


Exploring the Strategy Behavioral Design Pattern in Node.js

Expert Network
02 Jun 2021
10 min read
A design pattern is a reusable solution to a recurring problem. The term is really broad in its definition and can span multiple domains of an application. However, the term is often associated with a well-known set of object-oriented patterns that were popularized in the 90s by the book, Design Patterns: Elements of Reusable Object-Oriented Software, Pearson Education, by the almost legendary Gang of Four (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. This article is an excerpt from the book Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino, a comprehensive guide for learning proven patterns, techniques, and tricks to take full advantage of the Node.js platform. In this article, we'll look at the behavior of components in software design. We'll learn how to combine objects and how to define the way they communicate so that the behavior of the resulting structure becomes extensible, modular, reusable, and adaptable. After introducing all the behavioral design patterns, we will dive deep into the details of the Strategy pattern. Now, it's time to roll up your sleeves and get your hands dirty with some behavioral design patterns.

Types of Behavioral Design Patterns

The Strategy pattern allows us to extract the common parts of a family of closely related components into a component called the context and allows us to define strategy objects that the context can use to implement specific behaviors.

The State pattern is a variation of the Strategy pattern where the strategies are used to model the behavior of a component when under different states.

The Template pattern, instead, can be considered the "static" version of the Strategy pattern, where the different specific behaviors are implemented as subclasses of the template class, which models the common parts of the algorithm.

The Iterator pattern provides us with a common interface to iterate over a collection. It has now become a core pattern in Node.js. JavaScript offers native support for the pattern (with the iterator and iterable protocols). Iterators can be used as an alternative to complex async iteration patterns and even to Node.js streams.

The Middleware pattern allows us to define a modular chain of processing steps. This is a very distinctive pattern born from within the Node.js ecosystem. It can be used to preprocess and postprocess data and requests.

The Command pattern materializes the information required to execute a routine, allowing such information to be easily transferred, stored, and processed.

The Strategy Pattern

The Strategy pattern enables an object, called the context, to support variations in its logic by extracting the variable parts into separate, interchangeable objects called strategies. The context implements the common logic of a family of algorithms, while a strategy implements the mutable parts, allowing the context to adapt its behavior depending on different factors, such as an input value, a system configuration, or user preferences. Strategies are usually part of a family of solutions and all of them implement the same interface expected by the context. The following figure shows the situation we just described:

[Figure 1: General structure of the Strategy pattern]

Figure 1 shows you how the context object can plug different strategies into its structure as if they were replaceable parts of a piece of machinery. Imagine a car; its tires can be considered its strategy for adapting to different road conditions.
We can fit winter tires to go on snowy roads thanks to their studs, while we can decide to fit high-performance tires for traveling mainly on motorways for a long trip. On the one hand, we don't want to change the entire car for this to be possible, and on the other, we don't want a car with eight wheels so that it can go on every possible road. The Strategy pattern is particularly useful in all those situations where supporting variations in the behavior of a component requires complex conditional logic (lots of if...else or switch statements) or mixing different components of the same family. Imagine an object called Order that represents an online order on an e-commerce website. The object has a method called pay() that, as it says, finalizes the order and transfers the funds from the user to the online store. To support different payment systems, we have a couple of options:

Use an if...else statement in the pay() method to complete the operation based on the chosen payment option

Delegate the logic of the payment to a strategy object that implements the logic for the specific payment gateway selected by the user

In the first solution, our Order object cannot support other payment methods unless its code is modified. Also, this can become quite complex when the number of payment options grows. Instead, using the Strategy pattern enables the Order object to support a virtually unlimited number of payment methods and keeps its scope limited to only managing the details of the user, the purchased items, and the relative price, while delegating the job of completing the payment to another object. Let's now demonstrate this pattern with a simple, realistic example.

Multi-format configuration objects

Let's consider an object called Config that holds a set of configuration parameters used by an application, such as the database URL, the listening port of the server, and so on. The Config object should be able to provide a simple interface to access these parameters, but also a way to import and export the configuration using persistent storage, such as a file. We want to be able to support different formats to store the configuration, for example, JSON, INI, or YAML. By applying what we learned about the Strategy pattern, we can immediately identify the variable part of the Config object, which is the functionality that allows us to serialize and deserialize the configuration. This is going to be our strategy.

Creating a new module

Let's create a new module called config.js, and let's define the generic part of our configuration manager:

import { promises as fs } from 'fs'
import objectPath from 'object-path'

export class Config {
  constructor (formatStrategy) {                           // (1)
    this.data = {}
    this.formatStrategy = formatStrategy
  }

  get (configPath) {                                       // (2)
    return objectPath.get(this.data, configPath)
  }

  set (configPath, value) {                                // (2)
    return objectPath.set(this.data, configPath, value)
  }

  async load (filePath) {                                  // (3)
    console.log(`Deserializing from ${filePath}`)
    this.data = this.formatStrategy.deserialize(
      await fs.readFile(filePath, 'utf-8')
    )
  }

  async save (filePath) {                                  // (3)
    console.log(`Serializing to ${filePath}`)
    await fs.writeFile(filePath,
      this.formatStrategy.serialize(this.data))
  }
}

This is what's happening in the preceding code: In the constructor, we create an instance variable called data to hold the configuration data.
Then we also store formatStrategy, which represents the component that we will use to parse and serialize the data. We provide two methods, set() and get(), to access the configuration properties using a dotted path notation (for example, property.subProperty) by leveraging a library called object-path (nodejsdp.link/object-path). The load() and save() methods are where we delegate, respectively, the deserialization and serialization of the data to our strategy. This is where the logic of the Config class is altered based on the formatStrategy passed as an input in the constructor. As we can see, this very simple and neat design allows the Config object to seamlessly support different file formats when loading and saving its data. The best part is that the logic to support those various formats is not hardcoded anywhere, so the Config class can adapt without any modification to virtually any file format, given the right strategy.

Creating format strategies

To demonstrate this characteristic, let's now create a couple of format strategies in a file called strategies.js. Let's start with a strategy for parsing and serializing data using the INI file format, which is a widely used configuration format (more info about it here: nodejsdp.link/ini-format). For the task, we will use an npm package called ini (nodejsdp.link/ini):

import ini from 'ini'

export const iniStrategy = {
  deserialize: data => ini.parse(data),
  serialize: data => ini.stringify(data)
}

Nothing really complicated! Our strategy simply implements the agreed interface, so that it can be used by the Config object. Similarly, the next strategy that we are going to create allows us to support the JSON file format, widely used in JavaScript and in the web development ecosystem in general:

export const jsonStrategy = {
  deserialize: data => JSON.parse(data),
  serialize: data => JSON.stringify(data, null, '  ')
}

Now, to show you how everything comes together, let's create a file named index.js, and let's try to load and save a sample configuration using different formats:

import { Config } from './config.js'
import { jsonStrategy, iniStrategy } from './strategies.js'

async function main () {
  const iniConfig = new Config(iniStrategy)
  await iniConfig.load('samples/conf.ini')
  iniConfig.set('book.nodejs', 'design patterns')
  await iniConfig.save('samples/conf_mod.ini')

  const jsonConfig = new Config(jsonStrategy)
  await jsonConfig.load('samples/conf.json')
  jsonConfig.set('book.nodejs', 'design patterns')
  await jsonConfig.save('samples/conf_mod.json')
}

main()

Our test module reveals the core properties of the Strategy pattern. We defined only one Config class, which implements the common parts of our configuration manager; then, by using different strategies for serializing and deserializing data, we created different Config class instances supporting different file formats. The example we've just seen shows us only one of the possible alternatives that we had for selecting a strategy. Other valid approaches might have been the following:

Creating two different strategy families: One for the deserialization and the other for the serialization. This would have allowed reading from a format and saving to another.

Dynamically selecting the strategy: Depending on the extension of the file provided, the Config object could have maintained a map extension → strategy and used it to select the right algorithm for the given extension, as in the sketch below.
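To make that second option concrete, here is a minimal sketch of what such a selection could look like (the createConfig() helper and the strategiesByExtension map are our own illustration, not part of the book's example):

import path from 'path'
import { Config } from './config.js'
import { jsonStrategy, iniStrategy } from './strategies.js'

// Illustrative map from file extension to serialization strategy
const strategiesByExtension = {
  '.json': jsonStrategy,
  '.ini': iniStrategy
}

// Illustrative factory that picks the strategy based on the file extension
function createConfig (filePath) {
  const strategy = strategiesByExtension[path.extname(filePath)]
  if (!strategy) {
    throw new Error(`Unsupported configuration format: ${filePath}`)
  }
  return new Config(strategy)
}

const config = createConfig('samples/conf.json')

With a factory like this, callers no longer need to know which strategy backs a given file; the extension alone drives the choice, and adding a new format only means registering one more entry in the map.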
As we can see, we have several options for selecting the strategy to use, and the right one only depends on your requirements and the trade-off in terms of features and simplicity you want to obtain. Furthermore, the implementation of the pattern itself can vary a lot as well. For example, in its simplest form, the context and the strategy can both be simple functions:

function context(strategy) {...}

Even though this may seem insignificant, it should not be underestimated in a programming language such as JavaScript, where functions are first-class citizens and used as much as fully-fledged objects. Among all these variations, though, what does not change is the idea behind the pattern; as always, the implementation can change slightly, but the core concepts that drive the pattern are always the same.

Summary

In this article, we dived deep into the details of the Strategy pattern, one of the behavioral design patterns in Node.js. Learn more in the book, Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino.

About the Authors

Mario Casciaro is a software engineer and entrepreneur. Mario worked at IBM for a number of years, first in Rome, then in the Dublin Software Lab. He currently splits his time between Var7 Technologies (his own software company) and his role as lead engineer at D4H Technologies, where he creates software for emergency response teams. Luciano Mammino wrote his first line of code at the age of 12 on his father's old i386. Since then he has never stopped coding. He is currently working at FabFitFun as principal software engineer, where he builds microservices to serve millions of users every day.

Rookout and AppDynamics team up to help enterprise engineering teams debug at speed with Deep Code Insights

Richard Gall
20 Feb 2020
3 min read
It's not acknowledged enough that the real headache when it comes to software faults and performance problems isn't so much the problems themselves, but instead the process of actually identifying those problems. Sure, problems might slow you down, but wading through your application code to actually understand what's happened can sometimes grind engineering teams to a halt. For enterprise engineering teams, this can be particularly fatal. Agility is hard enough when you're dealing with complex applications and the burden of legacy software; but when things go wrong, any notion of velocity can be summarily thrown in the trashcan. However, a new partnership between debugging platform Rookout and APM company AppDynamics, announced at AppDynamics' Transform 2020 event, might just change that. The two organizations have teamed up, with Rookout's impressive debugging capabilities now available to AppDynamics customers in the form of a new product called Deep Code Insights.

[Image: Live debugging of an application in production in Deep Code Insights]

What is Deep Code Insights?

Deep Code Insights is a new product for AppDynamics customers that combines the live-code debugging capabilities offered by Rookout with AppDynamics' APM platform. The advantage for developers could be substantial. Jerrie Pineda, Enterprise Software Architect at Maverik, says that "Rookout helps me get the debugging data I need in seconds instead of waiting for several hours." This means, he explains, "[Maverik's] mean time to resolution (MTTR) for most issues is slashed up to 80%."

What does Deep Code Insights mean for AppDynamics?

For AppDynamics, Deep Code Insights allows the organization to go one step further in its mission to "make it easier for businesses to understand their own software." At least that's how Kevin Wagner, AppDynamics' VP of corporate development and strategy, puts it. "Together [with Rookout], we are narrowing the gaps between indicating a code-related problem impacting performance, pinpointing the direct issue within the line of code, and deploying a solution quickly for a seamless customer experience," he says.

What does Deep Code Insights mean for Rookout?

For Rookout, meanwhile, the partnership with AppDynamics is a great way for the company to reach out to a wider audience of users working at large enterprise organizations. The company received $8,000,000 in Series A funding back in August. This has provided a solid platform on which it is clearly looking to build and grow. Rookout's Co-Founder and CEO Or Weis describes the partnership as "obvious." "We want to bring the next-gen developer workflow to enterprise customers and help them increase product velocity," he says.

Learn more about Rookout: www.rookout.com

Learn more about AppDynamics: www.appdynamics.com


Eric Evans at Domain-Driven Design Europe 2019 explains the different bounded context types and their relation with microservices

Bhagyashree R
17 Dec 2019
9 min read
The fourth edition of the Domain-Driven Design Europe 2019 conference was held early this year, from Jan 31 to Feb 1, in Amsterdam. Eric Evans, who is known for his book Domain-Driven Design: Tackling Complexity in the Heart of Software, kick-started the conference with a great talk titled "Language in Context". In his keynote, Evans explained some key domain-driven design concepts, including subdomains, context maps, and bounded context. He introduced some new concepts as well, including bubble context, quaint context, patch on patch context, and more. He further talked about the relationship between the bounded context and microservices. Want to learn domain-driven design concepts in a practical way? Check out our book, Hands-On Domain-Driven Design with .NET Core by Alexey Zimarev. This book will guide you in involving business stakeholders when choosing the software you are planning to build for them. By figuring out the temporal nature of behavior-driven domain models, you will be able to build leaner, more agile, and modular systems.

What is a bounded context?

Domain-driven design is a software development approach that focuses on the business domain or the subject area. To solve problems related to that domain, we create domain models, which are abstractions describing selected aspects of a domain. The terminology and concepts related to these models only make sense within a context. In domain-driven design, this is called bounded context. Bounded context is one of the most important concepts in domain-driven design. Evans explained that a bounded context is basically a boundary where we eliminate any kind of ambiguity. It is a part of the software where particular terms, definitions, and rules apply in a consistent way. Another important property of the bounded context is that a developer and other people in the team should be able to easily see that "boundary." They should know whether they are inside or outside of the boundary. Within this bounded context, we have a canonical context in which we explore different domain models, refine our language and develop ubiquitous language, and try to focus on the core domain. Evans says that though this is a very "tidy" way of creating software, this is not what we see in reality. "Nothing is that tidy! Certainly, none of the large software systems that I have ever been involved with," he says. He further added that though the concept of bounded context has grabbed the interest of many within the community, it is often "misinterpreted." Evans has noticed that teams often confuse bounded contexts and subdomains. The reason behind this confusion is that in an "ideal" scenario they should coincide. Also, large corporations are known for reorganizations leading to changes in processes and responsibilities. This could result in two teams having to work in the same bounded contexts, with an increased risk of ending up with a "big ball of mud."

The different ways of describing bounded contexts

In their paper, Big Ball of Mud, Brian Foote and Joseph Yoder describe the big ball of mud as "a haphazardly structured, sprawling, sloppy, duct-tape and baling wire, spaghetti code jungle." Some of the properties that Evans uses to describe it are incomprehensible interdependencies, inconsistent definitions, incomplete coverage, and being risky to change. Needless to say, you would want to avoid the big ball of mud at all costs. However, if you find yourself in such a situation, Evans says that rebuilding the system from the ground up is not an ideal solution.
Instead, he suggests going for something called bubble context, in which you create a new model that works well next to the already existing models. While the business is run by the big ball of mud, you can do an elegant design within that bubble. Another context that Evans explained was the mature productive context. It is the part of the software that is producing value but probably is built on concepts in the core domain that are outdated. He explained this particular context with an example of a garden. A "tidy young garden" that has been recently planted looks great, but you do not get much value from it. It is only a few months later, when the plants come to fruition, that you get the harvest. Along similar lines, developers should plant seeds with the goal of creating order, but also embrace the chaotic abundance that comes with a mature system. Evans coined another term, quaint context, for a context that one would consider "legacy". He describes it as an old context that still does useful work but is implemented using old-fashioned technology or is not aligned with the current domain vision. Another name he suggests is patch on patch context, which also does something useful as it is, but its numerous interdependencies make change "risky and expensive." Apart from these, there are many other types of context that we do not explicitly label. When you are establishing a boundary, it is good practice to analyze different subdomains and check which ones are generic and which ones are specific to the business. Here he introduced the generic subdomain context. "Generic here means something that everybody does or a great range of businesses and so forth do. There's nothing special about our business and we want to approach this in a conventional way. And to do that the best way I believe is to have a context, a boundary in which we address that problem," he explains. Another generic context Evans mentioned was generic off the shelf (OTS), which can make setting the boundary easier as you are getting something off the shelf.

Bounded context types in the microservice architecture

Evans sees microservices as the biggest opportunity and risk the software engineering community has had in a long time. Looking at the hype around microservices, it is tempting to jump on the bandwagon, but Evans suggests that it is important to see the capabilities microservices provide us to meet the needs of the business. A common misconception people have is that microservices are bounded contexts, which Evans calls an oversimplification. He further shared four kinds of context that involve microservices:

Service internal

The first one is service internal, which describes how a service actually works. Evans believes that this is the type of context that people think of when they say a microservice is a bounded context. In this context, a service is isolated from other services and handled by an autonomous team. Though this definitely fits the definition of a bounded context, it is not the only aspect of microservices, Evans notes. If we only use this type, we would end up with a bunch of services that don't know how to interact with each other.

API of Service

The API of service context describes how a service talks to other services. In this context as well, an API is built by an autonomous team, and anyone consuming the API is required to conform to it. This implies that all the development decisions are pretty much dictated by the data flow direction; however, Evans thinks there are other alternatives.
Highly influential groups may create an API that other teams must conform to, irrespective of the direction data is flowing.

Cluster of codesigned services

The cluster of codesigned services context refers to a cluster of services designed in close collaboration. Here, the bounded context consists of a cluster of services designed to work with each other to accomplish some tasks. Evans remarks that the internals of the individual services could be very different from the models used in the API.

Interchange context

The final type is interchange context. According to Evans, the interaction between services must also be modeled. The model will describe messages and definitions to use when services interact with other services. He further notes that there are no services in this context, as it is all about messages, schemas, and protocols.

How legacy systems can participate in microservices architecture

Coming back to legacy systems and how they can participate in a microservices environment, Evans introduced a new concept called Exposed Legacy Asset. He suggests creating an interface that looks like a microservice and interacts with other microservices, but internally interacts with a legacy system. This will help us avoid corrupting the newly built microservices and also keeps us from having to change the legacy system. In the end, looking back at 15 years of his book, Domain-Driven Design, he said that we now may need a new definition of domain-driven design. A challenge that he sees is how tight this definition should be. He believes that a definition should share a common vision and language, but also be flexible enough to encourage innovation and improvement. He doesn't want domain-driven design to become a club of happy members. He instead hopes for an intellectually honest community of practitioners who are "open to the possibility of being wrong about things." If you tried to take the domain-driven design route and you failed at some point, it is important to question and reexamine. Finally, he summarized by defining domain-driven design as a set of guiding principles and heuristics. The key principles are focusing on the core domain, exploring models in a creative collaboration of domain experts and software experts, and speaking a ubiquitous language within a bounded context.

"Let's practice DDD together, shake it up and renew," he concludes.

If you want to put these and other domain-driven design principles into practice, grab a copy of our book, Hands-On Domain-Driven Design with .NET Core by Alexey Zimarev. This book will help you discover and resolve domain complexity together with business stakeholders and avoid common pitfalls when creating the domain model. You will further study the concept of bounded context and aggregate, and much more.

Gabriel Baptista on how to build high-performance software architecture systems with C# and .Net Core
You can now use WebAssembly from .NET with Wasmtime!
Exploring .Net Core 3.0 components with Mark J. Price, a Microsoft specialist


PowerShell Basics for IT Professionals

Savia Lobo
16 Dec 2019
6 min read
PowerShell is Microsoft's automation platform for IT pros. Of late, there have been a lot of questions around the complexity of this latest automation tool by Microsoft. At Microsoft Ignite 2018, Jason Himmelstein, Director of Technical Strategy and Strategic Partnerships, Office Apps & Services MVP, explained the basics of PowerShell and how to truly optimize your SharePoint implementation using this powerful IT pro toolset. While in this post we look at the big picture, you can check out the complete video here: 'Introduction to PowerShell for the anxious IT pro'. Want to do more with PowerShell? After learning the basics, you can learn how to use PowerShell to automate complex Windows server tasks. You can also improve PowerShell's usability, and control and manage Windows-based environments, by working through the exciting recipes given in Windows Server 2019 Automation with PowerShell Cookbook - Third Edition, written by Thomas Lee. Himmelstein starts off by saying that PowerShell isn't a packaged executable, nor is it a developer-centric tool that requires one to understand code; it is easy for an IT pro to understand.

What is PowerShell?

Windows PowerShell is Microsoft's task automation framework, consisting of a command-line shell and an associated scripting language built on the .NET Framework. It provides full access to COM and WMI, enabling administrators to perform administrative tasks on both local and remote Windows systems. In simple words, PowerShell is an object-based, not a text-based, command-line interface for Microsoft technologies. This means results in PowerShell can be acted upon and not just read from. One can cause huge damage to an environment using PowerShell, as there is no back button in PowerShell. However, to check what must have gone wrong, you can check the logs, but you cannot undo actions.

Why PowerShell matters

Regardless of the platform a person uses, such as Office 365, Azure, etc., PowerShell can be easily implemented due to its cross-platform capability. Himmelstein also highlights that one can get started with Azure PowerShell by trying it out in an Azure Cloud Shell environment, an interactive, authenticated, browser-accessible shell for managing Azure resources. Azure Cloud Shell comes equipped with commonly used CLI tools, including Linux shell interpreters, PowerShell modules, Azure tools, text editors, source control, build tools, container tools, database tools, and more. Cloud Shell also includes language support for several popular programming languages such as Node.js, .NET, and Python. Cloud Shell also securely authenticates automatically for instant access to your resources through the Azure CLI or Azure PowerShell cmdlets. Users can use PowerShell in Cloud Shell. One can also develop applications using PowerShell or use PowerShell via Source Control Management (SCM).

Basics of PowerShell

PowerShell Hardware

There are two ways one can use PowerShell; one is via the PowerShell Console, which is similar to a command line. The other is PowerShell ISE (Integrated Scripting Environment). One thing Himmelstein encourages is, "we run PowerShell in the Console and we write PowerShell in the ISE." The reason is that there are certain functionalities that do not work in the ISE when one hits the 'Run' command. In such cases, the user will have to take that PowerShell out, copy it, save the file, and run it in a command window.

cmdlets

Cmdlets are the main building blocks of PowerShell. These are mini commands that perform one action.
These have the ability to pipe the output of one cmdlet into further cmdlets. They can also perform equality tests with expressions such as -eq, -lt, and -match; one can diff easily within PowerShell.

Modules

There are four types of modules in PowerShell:

Script: A script module is a file (.psm1) that contains any valid Windows PowerShell code.
Binary: A binary module is a .NET Framework assembly (.dll) that contains compiled code.
Manifest: A module manifest is a Windows PowerShell data file (.psd1) that describes the contents of a module and determines how a module is processed.
Dynamic: A dynamic module does not persist to disk. It is created using New-Module, is intended to be short-lived, and cannot be accessed by Get-Module. Himmelstein prefers not to use the dynamic module as it persists for just one session.

Objects and Members

Objects are instances of classes and have properties and methods. Members are the properties and methods of an object. Properties define what an object is, and methods define what you can do with the object. Himmelstein puts together all these terms in a simple way:

Objects = stuff
Cmdlets = things you can do with the stuff
Modules = list of things you can do with the stuff
Properties = details about the stuff
Methods = instructions for things you can do with the stuff

PipeLine

Using pipelines, one can chain objects together for processing. The output of a pipelined object becomes the object itself.

Functional Explanation

Get-Command: Gets all the cmdlets installed on your computer.
Get-Help: Displays additional information about a cmdlet.
Get-Member: Lists the properties and methods of a command or object.
Get-Verb: Gets approved Windows PowerShell verbs.
Start-Transcript: Logs everything you do in that PowerShell window to a file.
Get-History: If you didn't start a transcript, you can still review your history before closing your Shell or ISE window.

Tips for PowerShell beginners

Use variables: You can use any variables except the ones that are reserved by the system; you will be prompted if you try to use a reserved variable.
Call one thing at a time.
Comment your scripts, as this may save you a lot of time.
Create scripts using an ISE/IDE; you can also use Visual Studio Code and then execute in the Shell.
Dispose of your objects.
Close the command window by typing Exit.
Test before using in production.
Write reusable scripts.

What PowerShell beginners should avoid

Rewriting your variables.
Hardcoding values such as passwords into your scripts.
Taking code from the internet or a vendor and just running it in your environment (you should read every piece of code before you run it in your environment).
Assuming the code is not harmful; it is. There is no back button in PowerShell and you cannot undo things.
Running your code in an IDE/ISE and expecting everything to work.

PowerShell Syntax and Bracketology

Syntax

'#' is for comments
'+' is for add
'=', '-eq' are for equal
'!', '-ne', '-not' are for 'not equal'

Brackets

'()' Curved brackets, also known as parentheses, are used for required options, compulsory arguments, or control structures.
'{}' Curly brackets are used for block expressions within a command block and also to open a code block.
'[]' Square brackets are used to denote optional elements or parameters and are also used for match functions.
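To tie a few of these pieces together, here is a small illustrative snippet (our own example, not from Himmelstein's session) that pipes cmdlets together and then inspects the objects flowing through the pipeline:

# List the five largest files in the current folder
Get-ChildItem -File |
    Sort-Object -Property Length -Descending |
    Select-Object -First 5 -Property Name, Length

# Inspect the properties of the objects being piped
Get-ChildItem -File | Get-Member -MemberType Property

Because the pipeline passes objects rather than text, Sort-Object can sort on the Length property directly, and Get-Member reveals exactly which properties and methods each object exposes.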
Now that you know the basics of PowerShell, you can start performing key admin tasks on Windows Server 2019. To further learn how to employ best practices for writing PowerShell scripts, configure Windows Server 2019, and leverage PowerShell to automate complex Windows server tasks, check out our book, Windows Server 2019 Automation with PowerShell Cookbook - Third Edition, written by Thomas Lee.

Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]
Scripting with Windows PowerShell Desired State Configuration [Video]
Automate tasks using Azure PowerShell and Azure CLI [Tutorial]


Understanding Result Type in Swift 5 with Daniel Steinberg

Sugandha Lahoti
16 Dec 2019
4 min read
One of the first things many programmers add to their Swift projects is a Result type. From Swift 5 onwards, Swift includes an official Result type. In his talk at iOS Conf SG 2019, Daniel Steinberg explained why developers would need a Result type, how and when to use it, and what map and flatMap bring for Result. Swift 5, released in March this year, hosts a number of key features such as concurrency, generics, and memory management. If you want to learn and master Swift 5, you may like to go through Mastering Swift 5, a book by Packt Publishing. Inside this book, you'll find the key features of Swift 5 easily explained with complete sets of examples.

Handle errors in Swift 5 easily with Result type

Result type gives a simple, clear way of handling errors in complex code such as asynchronous APIs. Daniel describes the Result type as a hybrid of optionals and errors. He says, "We've used it like optionals but we've got the power of errors we know what went wrong and we can pull that error out at any time that we need it. The idea was we have one return type whether we succeeded or failed. We get a record of our first error and we are able to keep going if there are no errors." In Swift 5, Swift's Result type is implemented as an enum that has two cases: success and failure. Both are implemented using generics so they can have an associated value of your choosing, but failure must be something that conforms to Swift's Error type. With this addition to the Standard Library, the Error protocol now conforms to itself, which makes working with errors easier.

[Image taken from Daniel's presentation]

Result type has four other methods, namely map(), flatMap(), mapError(), and flatMapError(). These methods enable us to do many other kinds of transformations using inline closures and functions. The map() method looks inside the Result and transforms the success value into a different kind of value using a specified closure. However, if it finds failure instead, it just uses that directly and ignores the transformation. Basically, it enables the automatic transformation of a value (error) through a closure, but only in case of success (failure); otherwise, the Result is left unmodified. flatMap() returns a new result, mapping any success value using the given transformation and unwrapping the produced result. Daniel says, "If I need recursion I'm often reaching for flat map." Daniel adds, "Things that can't fail use map() and things that can fail use flatMap()." mapError(_:) returns a new result, mapping any failure value using the given transformation. flatMapError(_:) returns a new result, mapping any failure value using the given transformation and unwrapping the produced result. flatMap() (flatMapError()) is useful when you want to transform your value (error) using a closure that itself returns a Result, to handle the case when the transformation fails. Using a Result type can be a great way to reduce ambiguity when dealing with values and results of asynchronous operations. By adding convenience APIs using extensions, we can also reduce boilerplate and make it easier to perform common operations when working with results, all while retaining full type safety. You can watch Daniel Steinberg's full video on YouTube, where he explains Result type with detailed code examples and points out common mistakes.
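To make the map()/flatMap() distinction concrete, here is a small sketch (the parseAge and requireAdult functions and the AgeError type are our own illustration, not code from the talk):

enum AgeError: Error {
    case notANumber(String)
    case underage(Int)
}

func parseAge(_ text: String) -> Result<Int, AgeError> {
    guard let value = Int(text) else {
        return .failure(.notANumber(text))
    }
    return .success(value)
}

func requireAdult(_ age: Int) -> Result<Int, AgeError> {
    return age >= 18 ? .success(age) : .failure(.underage(age))
}

// map() transforms a success value; a failure passes through untouched
let nextYear = parseAge("41").map { $0 + 1 }        // .success(42)

// flatMap() chains another operation that can itself fail
let adult = parseAge("17").flatMap(requireAdult)    // .failure(.underage(17))

switch adult {
case .success(let age):
    print("Valid age: \(age)")
case .failure(let error):
    print("Could not validate age: \(error)")
}

Here map() only reshapes a successful value, while flatMap() lets the chained step report its own failure, which is exactly the distinction Daniel draws between operations that cannot fail and operations that can.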
If you want to learn more about all the new features of the Swift 5 programming language, then check out our book, Mastering Swift 5 by Jon Hoffman.

Swift 5 for Xcode 10.2 is here!
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta.

OpenJDK Project Valhalla’s head shares how they plan to enhance the Java language and JVM with value types, and more

Bhagyashree R
10 Dec 2019
4 min read
Announced in 2014, Project Valhalla is an experimental OpenJDK project to bring major new language features to Java 10 and beyond. It primarily focuses on enabling developers to create and utilize value types, or non-reference values. Last week, the project’s head Brian Goetz shared the goal, motivation, current status, and other details about the project in a set of documents called “State of Valhalla”. Goetz shared that in the span of five years, the team has come up with five distinct prototypes of the project. Sharing the current state of the project, he wrote, “We believe we are now at the point where we have a clear and coherent path to enhance the Java language and virtual machine with value types, have them interoperate cleanly with existing generics, and have a compatible path for migrating our existing value-based classes to inline classes and our existing generic classes to specialized generics.” The motivation behind Project Valhalla One of the main motivations behind Project Valhalla was adapting the Java language and runtime to modern hardware. It’s almost been 25 years since Java was introduced and a lot has changed since then. At that time, the cost of a memory fetch and an arithmetic operation was roughly the same, but this is not the case now. Today, the memory fetch operations have become 200 to 1,000 times more expensive as compared to arithmetic operations. Java is often considered to be a pointer-heavy language as most Java data structures in an application are objects or reference types. This is why Project Valhalla aims to introduce value types to get rid of the type overhead both in memory as well as in computation. Goetz wrote, “We aim to give developers the control to match data layouts with the performance model of today’s hardware, providing Java developers with an easier path to flat (cache-efficient) and dense (memory-efficient) data layouts without compromising abstraction or type safety.” The language model for incorporating inline types Goetz further moved on to talk about how the team is accommodating inline classes in the language type system. He wrote, “The motto for inline classes is: codes like a class, works like an int; the latter part of this motto means that inline types should align with the behaviors of primitive types outlined so far.” This means that inline classes will enable developers to write types that behave more like Java's inbuilt primitive types. Inline classes are similar to current classes in the sense that they can have properties, methods, constructors, and so on. However, the difference that Project Valhalla brings is that instances of inline classes or inline objects do not have an identity, the property that distinguishes them from other objects. This is why operations that are identity-sensitive like synchronization are not possible with inline objects. There are a bunch of other differences between inline and identity classes. Goetz wrote, “Object identity serves, among other things, to enable mutability and layout polymorphism; by giving up identity, inline classes must give up these things. Accordingly, inline classes are implicitly final, cannot extend any other class besides Object...and their fields are implicitly final.” In Project Valhalla, the types are divided into inline and reference types where inline types include primitives and reference types are those that are not inline types such as declared identity classes, declared interfaces, array types, etc. 
He further listed a few migration scenarios, including value-based classes, primitives, and specialized generics. Check out Goetz's post to learn more about Project Valhalla in detail.

OpenJDK Project Valhalla is ready for developers working in building data structures or compiler runtime libraries
OpenJDK Project Valhalla's LW2 early access builds are now available for you to test
OpenJDK Project Valhalla is now in Phase III


Microsoft technology evangelist Matthew Weston on how Microsoft PowerApps is democratizing app development [Interview]

Bhagyashree R
04 Dec 2019
10 min read
In recent years, we have seen a wave of app-building tools coming in that enable users to be creators. Another such powerful tool is Microsoft PowerApps, a full-featured low-code/no-code platform. This platform aims to empower "all developers" to quickly create business web and mobile apps that fit best with their use cases. Microsoft defines "all developers" as citizen developers, IT developers, and pro developers. To learn what exactly PowerApps is and what its use cases are, we sat with Matthew Weston, the author of Learn Microsoft PowerApps. Weston was kind enough to share a few tips for creating user-friendly and attractive UI/UX. He also shared his opinion on the latest updates in the Power Platform, including AI Builder, Portals, and more.

Further Learning: Weston's book, Learn Microsoft PowerApps, will guide you in creating powerful and productive apps that will help you transform old and inefficient processes and workflows in your organization. In this book, you'll explore a variety of built-in templates and understand the key differences between types of PowerApps, such as canvas and model-driven apps. You'll learn how to generate and integrate apps directly with SharePoint, gain an understanding of PowerApps key components such as connectors and formulas, and much more.

Microsoft PowerApps: What is it and why is it important

With the dawn of InfoPath, a tool for creating user-designed form templates without coding, came Microsoft PowerApps. Introduced in 2016, Microsoft PowerApps aims to democratize app development by allowing users to build cross-device custom business apps without writing code, and it provides an extensible platform to professional developers. Weston explains, "For me, PowerApps is a solution to a business problem. It is designed to be picked up by business users and developers alike to produce apps that can add a lot of business value without incurring large development costs associated with completely custom developed solutions." When we asked how his journey started with PowerApps, he said, "I personally ended up developing PowerApps as, for me, it was an alternative to using InfoPath when I was developing for SharePoint. From there I started to explore more and more of its capabilities and began to identify more and more in terms of use cases for the application. It meant that I didn't have to worry about controls, authentication, or integration with other areas of Office 365." Microsoft PowerApps is a building block of the Power Platform, a collection of products for business intelligence, app development, and app connectivity. Along with PowerApps, this platform includes Power BI and Power Automate (earlier known as Flow). These sit on top of the Common Data Service (CDS), which stores all of your structured business data. On top of the CDS, you have over 280 data connectors for connecting to popular services and on-premises data sources such as SharePoint, SQL Server, Office 365, and Salesforce, among others. All these tools are designed to work together seamlessly. You can analyze your data with Power BI, then act on the data through web and mobile experiences with PowerApps, or automate tasks in the background using Power Automate. Explaining its importance, Weston said, "Like all things within Office 365, using a single application allows you to find a good solution. Combining multiple applications allows you to find a great solution, and PowerApps fits into that thought process.
Within the Power Platform, you can really use PowerApps and Power Automate together in a way that provides a high-quality user interface with a powerful automation platform doing the heavy processing.” “Businesses should invest in PowerApps in order to maximize on the subscription costs which are already being paid as a result of holding Office 365 licenses. It can be used to build forms, and apps which will automatically integrate with the rest of Office 365 along with having the mobile app to promote flexible working,” he adds. What skills do you need to build with PowerApps PowerApps is said to be at the forefront of the “Low Code, More Power” revolution. This essentially means that anyone can build a business-centric application, it doesn’t matter whether you are in finance, HR, or IT. You can use your existing knowledge of PowerPoint or Excel concepts and formulas to build business apps. Weston shared his perspective, “I believe that PowerApps empowers every user to be able to create an app, even if it is only quite basic. Most users possess the ability to write formulas within Microsoft Excel, and those same skills can be transferred across to PowerApps to write formulas and create logic.” He adds, “Whilst the above is true, I do find that you need to have a logical mind in order to really create rich apps using the Power Platform, and this often comes more naturally to developers. Saying that, I have met people who are NOT developers, and have actually taken to the platform in a much better way.” Since PowerApps is a low-code platform, you might be wondering what value it holds for developers. We asked Weston what he thinks about developers investing time in learning PowerApps when there are some great cross-platform development frameworks like React Native or Ionic. Weston said, “For developers, and I am from a development background, PowerApps provides a quick way to be able to create apps for your users without having to write code. You can use formulae and controls which have already been created for you reducing the time that you need to invest in development, and instead you can spend the time creating solutions to business problems.” Along with enabling developers to rapidly publish apps at scale, PowerApps does a lot of heavy lifting around app architecture and also gives you real-time feedback as you make changes to your app. This year, Microsoft also released the PowerApps component framework which enables developers to code custom controls and use them in PowerApps. You also have tooling to use alongside the PowerApps component framework for a better end to end development experience: the new PowerApps CLI and Visual Studio plugins. On the new PowerApps capabilities: AI Builder, Portals, and more At this year’s Microsoft Ignite, the Power Platform team announced almost 400 improvements and new features to the Power Platform. The new AI Builder allows you to build and train your own AI model or choose from pre-built models. Another feature is PowerApps Portals using which organizations can create portals and share them with both internal and external users. We also have UI flows, currently a preview feature that provides Robotic Process Automation (RPA) capabilities to Power Automate. Weston said, “I love to use the Power Platform for integration tasks as much as creating front end interfaces, especially around Power Automate. 
I have created automations which have taken two systems, which won’t natively talk to each other, and process data back and forth without writing a single piece of code.” Weston is already excited to try UI flows, “I’m really looking forward to getting my hands on UI flows and seeing what I can create with those, especially for interacting with solutions which don’t have the usual web service interfaces available to them.” However, he does have some concerns regarding RPA. “My only reservation about this is that Robotic Process Automation, which this is effectively doing, is usually a deep specialism, so I’m keen to understand how this fits into the whole citizen developer concept,” he shared. Some tips for building secure, user-friendly, and optimized PowerApps When it comes to application development, security and responsiveness are always on the top in the priority list. Weston suggests, “Security is always my number one concern, as that drives my choice for the data source as well as how I’m going to structure the data once my selection has been made. It is always harder to consider security at the end, when you try to shoehorn a security model onto an app and data structure which isn’t necessarily going to accept it.” “The key thing is to never take your data for granted, secure everything at the data layer first and foremost, and then carry those considerations through into the front end. Office 365 employs the concept of security trimming, whereby if you don’t have access to something then you don’t see it. This will automatically apply to the data, but the same should also be done to the screens. If a user shouldn’t be seeing the admin screen, don’t even show them a link to get there in the first place,” he adds. For responsiveness, he suggests, “...if an app is slow to respond then the immediate perception about the app is negative. There are various things that can be done if working directly with the data source, such as pulling data into a collection so that data interactions take place locally and then sync in the background.” After security, another important aspect of building an app is user-friendly and attractive UI and UX. Sharing a few tips for enhancing your app's UI and UX functionality, Weston said, “When creating your user interface, try to find the balance between images and textual information. Quite often I see PowerApps created which are just line after line of text, and that immediately switches me off.” He adds, “It could be the best app in the world that does everything I’d ever want, but if it doesn’t grip me visually then I probably won’t be using it. Likewise, I’ve seen apps go the other way and have far too many images and not enough text. It’s a fine balance, and one that’s quite hard.” Sharing a few ways for optimizing your app performance, Weston said, “Some basic things which I normally do is to load data on the OnVisible event for my first screen rather than loading everything on the App OnStart. This means that the app can load and can then be pulling in data once something is on the screen. This is generally the way that most websites work now, they just get something on the screen to keep the user happy and then pull data in.” “Also, give consideration to the number of calls back and forth to the data source as this will have an impact on the app performance. Only read and write when I really need to, and if I can, I pull as much of the data into collections as possible so that I have that temporary cache to work with,” he added. 
About the author Matthew Weston is an Office 365 and SharePoint consultant based in the Midlands in the United Kingdom. Matthew is a passionate evangelist of Microsoft technology, blogging and presenting about Office 365 and SharePoint at every opportunity. He has been creating and developing with PowerApps since it became generally available at the end of 2016. Usually, Matthew can be seen presenting within the community about PowerApps and Flow at SharePoint Saturdays, PowerApps and Flow User Groups, and Office 365 and SharePoint User Groups. Matthew is a Microsoft Certified Trainer (MCT) and a Microsoft Certified Solutions Expert (MCSE) in Productivity. Check out Weston’s latest book, Learn Microsoft PowerApps on PacktPub. This book is a step-by-step guide that will help you create lightweight business mobile apps with PowerApps.  Along with exploring the different built-in templates and types of PowerApps, you will generate and integrate apps directly with SharePoint. The book will further help you understand how PowerApps can use several Microsoft Power Automate and Azure functionalities to improve your applications. Follow Matthew Weston on Twitter: @MattWeston365. Denys Vuika on building secure and performant Electron apps, and more Founder & CEO of Odoo, Fabien Pinckaers discusses the new Odoo 13 framework Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]


There's more to learning programming than just writing code

Richard Gall
15 Nov 2019
8 min read
Everyone should learn to code, right? If everyone learned programming, not only would people have better jobs, but the economy would be growing, and ultimately we'd all have far superior lives to the ones we lead now. Except, clearly, that's just not true. Yes, perhaps that position is a bit of a caricature, but it's one that isn't that uncommon. Lawmakers talk about the importance of making programming and coding part of the curriculum, and are keen to make loud and enthusiastic noises about investing in STEM subjects. We need more engineers to power the digital economy, the thinking goes. While introducing children to code certainly isn't a bad thing, this way of viewing the world is pretty damaging, not least to those already in engineering roles and the organizations that depend on them. This is because it reduces the activity of writing code to something simple. It turns programming, a complex and ultimately deeply human activity, into something more machine-like. It almost suggests it's just a question of writing letters and numbers into a code editor and then just watching the whole thing run. Programming might involve working with machines, but in truth it's anything but machine-like. For business leaders, failing to understand what programming actually involves can lead to a really poor engineering culture. A reductive view of the work that software engineers do means increased pressure, more burnout, and lower-quality software being delivered. In turn, that has a negative impact on the bottom line. It might not be immediately apparent, but poor software means code rewrites, poor user experiences, and high turnover of personnel. That costs money, because organizations will be spending valuable time and energy trying to fix the mistakes of the past.

We need to keep an open mind about what it means to "learn programming"

However, with a more open-minded perspective on what it actually means to be a programmer, and what 'learning programming' actually means, you can build a much more productive engineering culture. This involves not only respecting the learning process, but also recognising that learning isn't just about taking a course or doing a live coding exercise. It does, in fact, involve a much more diverse range of activities. Let's look at what some of them are.

Evaluating software

One of the most important parts of a software developer's work is evaluating software. This can happen in various ways. Most obviously, technology leaders (CTOs, Principal Architects, Development Leads) have to evaluate different tools and platforms before they implement a project. Questions here will revolve primarily around cost, but it certainly won't be the leadership team's sole concern. Other issues like integration, product capabilities, even the learning curve and level of complexity will need to be considered (will we need to hire specialist engineers or can our existing team pick it up quickly?). Perhaps that all sounds obvious, but too often we forget that this is work that needs to be done. To make these sorts of assessments, which are often business critical, individuals will need a high degree of knowledge. Without it they can't be confident that they're making the right decision for the business. In this sense, then, learning about technologies is just as important as the process of learning how to use technologies. Some might say it's even more important.
It's not only senior developers and tech leaders that evaluate software

Evaluating software is by no means a task limited to those in senior positions. Developers and engineers who spend the majority of their time shipping code will still need to learn about technologies too. They might not be responsible for architecting a new software system or purchasing PaaS products, but they will have to make personal decisions about what tools they use to solve specific problems. This might sometimes be about the tools they use to boost their productivity and better manage their development workflow, but it isn't limited to that. In broad terms, it's about having an open mind about the range of approaches that can be taken to new challenges.

This means that all technology professionals need to learn about technologies - how they work, how they compare to one another, and even what the trade-offs between them are. This shouldn't be treated as an optional extra, but instead as a fundamental part of the learning process.

Read next: Developers are today's technology decision makers

Programming techniques and design principles

When talking about learning, it's easy to fall into a trap where we privilege practice over theory. Theory, certain lines of thinking go, is self-indulgent, unnecessary, and time-consuming. What's really important is that people can simply start getting their hands dirty and learn by doing. While it's true that the practical dimension of learning is vital - in technology or any other field - we overlook theory at our peril.

In reality, theory and practice should go together. Practice should be a way of illuminating the theory, and theory should be a way of explaining why something works the way it does, or why you should do something in a certain way. Think of it this way: if everyone only learned through practice, we'd all be incapable of applying our skills and knowledge to new problems and challenges. We'd be fixed in our mindset, more like machines than creative human beings. For developers and software engineers this is particularly true. By understanding the principles behind how something works, it becomes much easier to apply solutions to new contexts or even reconfigure them in ways that are appropriate and effective.

Improving software with design-led principles

Programming techniques and philosophies, like functional or object-oriented programming, can help developers and engineers to write code in a specific way, helping them to unlock greater performance and efficiency (both personally and from a technical perspective). Similarly, design patterns provide a way of thinking about your code in a predetermined way in relation to various commonly occurring problems. It's true that this still requires developers to get close to code, but it is actually a level of abstraction above the practice of writing code that allows developers to think critically about what they do. So, while a good way to learn these sorts of principles is to see them at work in practice, it's still essential for developers to have a robust conceptual understanding of them.

Understanding users and business needs

Software doesn't exist in a vacuum. On one side there's the business, on the other there's a user. It sounds obvious, but it's essential that technology professionals are sensitive to these two contextual elements. Business needs and user needs are what ultimately make their work meaningful.
In practice, this doesn't mean people working in technology all need to go and take an MBA. But they do need to have a clear conceptual understanding of how software development and software systems should align with the needs of both internal stakeholders (i.e. the business) and users. This isn't always easy to learn, and there's no manual for how it should be done. However, it cuts across the two points we mentioned above: the software we decide to use, and the way we decide to use it, will always be informed by the needs of both the business and users.

What this means in practice, then, is that learning about software needs to be informed by the wider context of what that software is for, and what a business is trying to achieve. Some technology professionals enter the industry possessing this kind of awareness and sensitivity. Many others, however, do not, and for these people it's essential that they have the space to understand how the various facets of the work they do are connected to real-life consequences. Writing code doesn't help you to do that. Taking a step back and understanding the context in which that code is being written can and will.

Read next: 6 reasons why employers should pay for their developers' training and learning resources

Conclusion: Great programming requires a combination of theoretical knowledge and practical talent

The opposition between theory and practice is false. It doesn't help anyone. A culture of 'getting stuff done' and shipping code regardless is not only bad for individual developers, it can also be damaging at an organizational level. Without careful consideration of what you're trying to achieve, how software can help you to do it, and what it requires to execute it effectively, organizations become prone to errors and mistakes. This leads to wasted time and, more importantly, wasted money. While Facebook's mantra of 'move fast and break things' might sound like the defining phrase of the modern tech industry, good developers need both space and resources to think, plan, and conceptualize. This doesn't mean we all need to go slow. Instead, it means we need to empower engineers to do the right thing, not the quick thing.

Give your team access to a diverse range of resources to learn everything they need to build better software. Start a Packt for Teams subscription today.

GitHub Universe 2019: GitHub for mobile, GitHub Archive Program and more announced amid protests against GitHub’s ICE contract

Vincy Davis
14 Nov 2019
4 min read
Yesterday, GitHub commenced its popular product conference, GitHub Universe 2019, in San Francisco. The two-day annual conference celebrates the contributions of GitHub's 40+ million developers to the open source community. Day 1 of the conference had many interesting announcements, like GitHub for mobile, the GitHub Archive Program, and more. Let's look at some of the major announcements at the GitHub Universe 2019 conference.

GitHub for mobile iOS (beta)

GitHub for mobile is a beta app that aims to give users the flexibility to work and interact with their team anywhere they want. This will enable users to share feedback on a design discussion or review code in a non-complex development environment. The native app will adapt to any screen size and will also work in dark mode based on the device preference. Currently the app is available only on iOS; the GitHub team has said that an Android version will follow soon.

https://twitter.com/italolelis/status/1194929030518255616
https://twitter.com/YashSharma___/status/1194899905552105472

GitHub Archive Program

"Our world is powered by open source software. It's a hidden cornerstone of our civilization and the shared heritage of all humanity. The mission of the GitHub Archive Program is to preserve it for generations to come," states the official GitHub blog. GitHub has partnered with the Stanford Libraries, the Long Now Foundation, the Internet Archive, the Software Heritage Foundation, Piql, Microsoft Research, and the Bodleian Library to preserve all the available open source code in the world. It will safeguard all the data by storing multiple copies across various data formats and locations. This includes a "very-long-term archive" called the GitHub Arctic Code Vault, which is designed to last at least 1,000 years.

https://twitter.com/vithalreddy/status/1194846571835183104
https://twitter.com/sonicbw/status/1194680722856042499

Read More: GitHub Satellite 2019 focuses on community, security, and enterprise

Automating workflows from code to cloud

General availability of GitHub Actions

Last year, at the GitHub Universe conference, GitHub Actions was announced in beta. This year, GitHub has made it generally available to all users. In the past year, GitHub Actions has received contributions from the developers of AWS, Google, and others. Actions has now developed into a new standard for building and sharing automation for software development, including a CI/CD solution and native package management. GitHub has also announced the free use of self-hosted runners and artifact caching.

https://twitter.com/qmixi/status/1194379789483704320
https://twitter.com/inversemetric/status/1194668430290345984

General availability of GitHub Packages

In May this year, GitHub announced the beta version of the GitHub Package Registry as its new package management service. Later, in September, after gathering community feedback, GitHub announced that the service has proxy support for the primary npm registry. Since its launch, GitHub Packages has received over 30,000 unique packages that served the needs of over 10,000 organizations. Now, at GitHub Universe 2019, the GitHub team has announced the general availability of GitHub Packages and also added support for using the GitHub Actions token.

https://twitter.com/Chris_L_Ayers/status/1194693253532020736

These were some of the major announcements from day 1 of the GitHub Universe 2019 conference; head over to GitHub's blog for more details of the event.
Tech workers protest against GitHub's ICE contract

Major product announcements aside, one thing that garnered a lot of attention at the GitHub Universe conference was the protest conducted by GitHub workers along with the Tech Workers Coalition to oppose GitHub's $200,000 contract with Immigration and Customs Enforcement (ICE). Many high-profile speakers have dropped out of the GitHub Universe 2019 conference, and at least five GitHub employees have resigned from GitHub due to its support for ICE.

https://twitter.com/lily_dart/status/1194216293668401152

Read More: Largest 'women in tech' conference, Grace Hopper Celebration, renounces Palantir as a sponsor due to concerns over its work with the ICE

Yesterday at the event, the protesting tech workers brought a giant cage to the venue to symbolize how ICE uses cages to detain migrant children.

https://twitter.com/githubbers/status/1194662876587233280

Tech workers around the world have extended their support to the protest against GitHub.

https://twitter.com/ConMijente/status/1194665524191318016
https://twitter.com/CoralineAda/status/1194695061717450752
https://twitter.com/maybekatz/status/1194683980877975552

GitHub along with Weights & Biases introduced CodeSearchNet challenge evaluation and CodeSearchNet Corpus
GitHub acquires Semmle to secure open-source supply chain; attains CVE Numbering Authority status
GitHub Package Registry gets proxy support for the npm registry
GitHub updates to Rails 6.0 with an incremental approach
GitHub now supports two-factor authentication with security keys using the WebAuthn API

Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more

Vincy Davis
08 Nov 2019
4 min read
Less than two months after announcing Rust 1.38, the Rust team announced the release of Rust 1.39 yesterday. The new release brings the stable version of the async-await syntax, which allows users not only to define async functions and blocks, but also to .await them. The other improvements in Rust 1.39 include shared references to by-move bindings in match guards and attributes on function parameters.

The stable version of the async-await syntax

The stable async function syntax can be used (by writing async fn instead of fn) to define a function that returns a Future when called. A Future is a suspended computation which is driven to conclusion "by .awaiting it." Along with async fn, the async { ... } and async move { ... } blocks can also be used to define async literals.

According to Nicholas D. Matsakis, a member of the release team, the first stable support of async-await marks the start of a "Minimum Viable Product (MVP)", as the Rust team will now try to improve the syntax by polishing and extending it for future operations. "With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async /.await, which we'll tell you more about in the future," states the official Rust blog.

Some of the major developments in the async ecosystem

- The tokio runtime will be releasing a number of scheduler improvements with support for the async-await syntax this month.
- The async-std runtime library will be publishing its first stable release in a few days.
- async-await support has already started to become available in higher-level web frameworks and other applications, like the futures_intrusive crate.

Other improvements in Rust 1.39

Better ergonomics for match guards

In earlier versions, Rust would disallow taking shared references to by-move bindings in the if guards of match expressions. Starting from Rust 1.39, the compiler allows binding in the following two ways:

- by-reference: either immutably or mutably, which can be achieved through ref my_var or ref mut my_var respectively.
- by-value: either by-copy, if the bound variable's type implements Copy, or otherwise by-move.

The Rust team hopes that this feature will give developers a smoother and more consistent experience with match expressions.

Attributes on function parameters

Unlike previous versions, Rust 1.39 enables three types of attributes on parameters of functions, closures, and function pointers:

- Conditional compilation: cfg and cfg_attr
- Controlling lints: allow, warn, deny, and forbid
- Helper attributes used by procedural macro attributes
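To make these changes a little more concrete, here is a minimal, hand-written sketch of what they can look like in code. It is not taken from the release notes; the function names (fetch_length, demo, log_event, describe) are invented for illustration, and actually driving the future to completion would require an executor such as tokio or async-std, which is omitted here.

```rust
// Sketch of three Rust 1.39 features: async-await, bind-by-move in match
// guards, and attributes on function parameters. Compiles on 1.39+.

// `async fn` returns a future instead of running its body immediately.
async fn fetch_length(input: &str) -> usize {
    input.len()
}

// `.await` may only be used inside another async context.
async fn demo() -> usize {
    fetch_length("hello").await
}

// Attributes on function parameters: lint attributes such as `allow`
// are one of the three permitted categories.
fn log_event(#[allow(unused_variables)] verbose: bool, message: &str) {
    println!("{}", message);
}

// Match guards can now use by-move bindings: `text` is moved out of the
// Option, yet the guard may still borrow it immutably.
fn describe(value: Option<String>) -> String {
    match value {
        Some(text) if text.len() > 5 => format!("long: {}", text),
        Some(text) => format!("short: {}", text),
        None => "nothing".to_string(),
    }
}

fn main() {
    // Calling demo() only creates a future; nothing runs until it is
    // polled by an executor (not shown here).
    let _pending = demo();
    log_event(true, "event logged");
    println!("{}", describe(Some("example".to_string())));
}
```

Note that the guard text.len() > 5 only takes a shared reference to the moved binding, which is exactly what the relaxed rule permits; a guard that tried to mutate or move the binding would still be rejected.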
Many users are happy with the Rust 1.39 features and are especially excited about the stable version of async-await syntax. A user on Hacker News comments, "Async/await lets you write non-blocking, single-threaded but highly interweaved firmware/apps in allocation-free, single-threaded environments (bare-metal programming without an OS). The abstractions around stack snapshots allow seamless coroutines and I believe will make rust pretty much the easiest low-level platform to develop for."

Another comment read, "This is big! Turns out that syntactic support for asynchronous programming in Rust isn't just syntactic: it enables the compiler to reason about the lifetimes in asynchronous code in a way that wasn't possible to implement in libraries. The end result of having async/await syntax is that async code reads just like normal Rust, which definitely wasn't the case before. This is a huge improvement in usability."

A few users have already upgraded to Rust 1.39 and shared their feedback on Twitter.

https://twitter.com/snoyberg/status/1192496806317481985

Check out the official announcement for more details. You can also read the blog on async-await for more information.

AWS will be sponsoring the Rust Project
A Cargo vulnerability in Rust 1.25 and prior makes it ignore the package key and download a wrong dependency
Fastly announces the next-gen edge computing services available in private beta
Neo4j introduces Aura, a new cloud service to supply a flexible, reliable and developer-friendly graph database
Yubico reveals Biometric YubiKey at Microsoft Ignite