ASP.NET 8 Best Practices

CI/CD – Building Quality Software Automatically

In my career, someone once said to me, “CI/CD is dead, long live CI/CD.” Of course, this phrase doesn’t mean it’s completely dead. It simply means CI/CD is now becoming the standard for software development, a common practice developers should adopt and learn during a software development life cycle. It is now considered part of your development process as opposed to being a shiny, new process.

In this chapter, we’ll review what Continuous Integration/Continuous Deployment (CI/CD) means and how to prepare your code for a pipeline. Once we’ve covered the necessary changes to include in your code, we’ll discuss what a common pipeline looks like for building software. After we understand the pipeline process, we’ll look at two ways to recover from an unsuccessful deployment and how to deploy databases. We’ll also cover the three different types of build services available to you (on-premises, off-premises, and hybrid) and review a list of the top CI/CD providers on the internet. Finally, we’ll walk you through the process of creating a build for a sample application, along with other types of projects.

In this chapter, we will cover the following topics:

  • What is CI/CD?
  • Preparing your Code
  • Understanding the Pipeline
  • The Two “Falling” Approaches
  • Deploying Databases
  • The Three Types of Build Providers
  • CI/CD Providers
  • Walkthrough of Azure Pipelines

After you’ve completed this chapter, you’ll be able to identify flaws in software when you’re preparing code for software deployment, understand what a common pipeline includes in producing quality software, identify two ways of recovering from an unsuccessful deployment, know how to deploy databases through a pipeline, understand the different types of CI/CD providers, and know some key players in the CI/CD provider space.

Finally, we’ll walk through a common pipeline in Azure Pipelines to encompass everything we’ve learned in this chapter.

Technical requirements

For this chapter, the only technical requirements are access to a laptop and an account with one of the cloud providers mentioned in the CI/CD Providers section (preferably Microsoft’s Azure Pipelines – don’t worry, it’s free).

Once you have reviewed how pipelines are created, you’ll be able to apply the same concepts to other cloud providers and their pipeline strategies.

What is CI/CD?

In this section, we’ll learn about what continuous integration and continuous deployment mean to developers.

Continuous Integration (CI) is the process of merging all developers’ code into a mainline to trigger an automatic build process so that you can quickly identify issues with a code base using unit tests and code analysis.

When a developer checks their code into a branch, it’s reviewed by peer developers. Once accepted, it’s merged into a mainline and automatically starts a build process. This build process will be covered shortly.

Continuous Deployment (CD) is the process of consistently creating software to deploy it at any time.

Once everything has been built through the automated process, the build prepares the compiled code and creates artifacts. These artifacts are used for consistent deployments across various environments, such as development, staging, and production.

The benefits of implementing a CI/CD pipeline far outweigh the cost of going without one:

  • Automated Testing: When a commit is triggered, your tests are automatically executed along with your build. Think of this as someone always checking your code on commit.
  • Faster Feedback Loops: As a developer, it’s always great to receive immediate feedback to find out whether something works or not. Compare that to finding out from an email that the build broke, where you’re on your own to track down what happened.
  • Consistent Builds: Once you have a project being built on a build server, you can create builds on-demand – and consistently – with tests.
  • Collaboration Between Teams: We’re all in this together. CI/CD includes developers, system administrators, project managers/SCRUM masters, and QA testers, to name a few, all working to accomplish the goal of creating great software.

In this section, we reviewed the definition of what continuous integration and continuous deployment mean when developing software in an automated fashion and the benefits of implementing a CI/CD pipeline.

In the next section, we’ll learn about certain code practices to avoid when automating software builds.

Preparing your Code

In this section, we’ll cover certain aspects of your code and how they could impact the deployment of your software. Such issues include code that doesn’t compile (broken builds), relative path names in file-based operations, and tests that aren’t true unit tests. These are a few of the common errors I’ve experienced over the years; in this section, I’ll also provide solutions on how to fix them.

Before we review a CI pipeline, there are a few caveats we should address beforehand. Even though we covered a lot in the previous chapter regarding version control, your code needs to be in a certain state to achieve “one-button” builds.

In the following sections, you’ll learn how to prepare your code so that it’s “CI/CD-ready” and examine the problems you could experience when deploying your software and how to avoid them.

Building Flawlessly

If a new person is hired and starts immediately, you want them to hit the ground running and begin developing software without delay. This means being able to point them to a repository where they can pull the code and immediately run it with minimal setup.

I say “minimal setup” because there may be permissions involved to gain access to certain resources in the company so that they can be run locally.

Nevertheless, the code should be in a runnable state: it should at least present a simple screen of some kind and notify the user of any permissions issue so that they know how to resolve the problem.

In the previous chapter, we mentioned how the code should compile at all times. This means the following:

  • The code should always compile after a clone or checkout
  • Unit tests should be included with the build, not in separate projects
  • Your commit messages to version control should be meaningful (they may be used for Release Notes)

These standards allow your pipeline to fall into the pit of success. They help you create builds faster and more easily when your code is in a clean state.

Avoiding Relative Path Names with File-based Operations

One of the troublesome issues I’ve seen over the years when it comes to web applications is how files are accessed.

I’ve also seen file-based operations performed through a web page go wrong when files were moved using relative paths. It involved deleting directories, and it didn’t end well.

For example, let’s say you had a relative path to an image, as follows:

../images/myimage.jpg

Now, let’s say you’re sitting on a web page, such as https://localhost/kitchen/chairs.

If you went back one directory, you’d be in the kitchen with a missing image, not at the root of the website. According to your relative path, you’re looking for an image directory at https://localhost/kitchen/images/myimage.jpg.

To make matters worse, if you’re using custom routing, this may not even be the normal path, and who knows where it’s looking for the image.

The best approach when preparing your code is to use a single slash (/) at the beginning of your URL, since it’s considered “absolute”:

/images/myimage.jpg

This makes it easier to navigate to the root when you’re locating files on a website, regardless of what environment you’re in. It doesn’t matter if you are on https://www.myfakewebsite.com/ or http://localhost/, the root is the root, and you’ll always find your files when using a single slash at the beginning of your sources.
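
The same principle applies to server-side file operations: anchor paths to a known root rather than navigating with “..”. Here is a minimal sketch of that idea (the ImageService class and its “images” folder are hypothetical, not from this chapter), using ASP.NET Core’s IWebHostEnvironment to resolve files from the web root:

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class ImageService
{
    private readonly IWebHostEnvironment _env;

    public ImageService(IWebHostEnvironment env) => _env = env;

    // Resolve the physical path from the web root (wwwroot) instead of
    // relying on whatever the current working directory happens to be.
    public string GetImagePath(string fileName) =>
        Path.Combine(_env.WebRootPath, "images", fileName);
}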

Confirming that your Unit Tests are Unit Tests

Tests in your code are created to provide checks and balances so that your code works as expected. Each test needs to be examined carefully to confirm it isn’t doing anything out of the ordinary.

Unit tests are considered tests against code in memory, whereas integration tests are tests that require ANY external resources:

  • Do your tests access any files? Integration test.
  • Do you connect to a database to test something? Integration test.
  • Are you testing business logic? Unit test.

As you’re beginning to surmise, when you build your application on another machine, cloud services do not have access to your database server and also may not have the additional files you need for each test to pass.

If you are accessing external resources, it may be a better approach to refactor your tests into something a little more memory-driven. I’ll explain why in Chapter 7, when we’ll cover unit testing.
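
As a quick litmus test, consider the following hedged sketch (the OrderCalculator class and the use of xUnit are assumptions for illustration, not from this chapter). The test below qualifies as a unit test because everything runs in memory; swap the calculator for a SqlConnection or a file read and it becomes an integration test:

using Xunit;

public class OrderCalculator
{
    private readonly decimal _taxRate;

    public OrderCalculator(decimal taxRate) => _taxRate = taxRate;

    public decimal CalculateTotal(decimal subtotal) => subtotal * (1 + _taxRate);
}

public class OrderCalculatorTests
{
    [Fact]
    public void CalculateTotal_AppliesTaxRate()
    {
        // Pure in-memory business logic: no database, no files, no network.
        var calculator = new OrderCalculator(taxRate: 0.07m);

        Assert.Equal(21.40m, calculator.CalculateTotal(subtotal: 20.00m));
    }
}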

Creating Environment Settings

Whether you are in the middle of a project or are clicking Create New Project… for the first time, you need a way to create environment settings for your web application.

In ASP.NET Core applications, we are given appsettings.json and appsettings.Development.json configuration files out of the box. The appsettings.json file is meant to be a base configuration file; depending on the environment, the matching appsettings file is applied on top of it, overriding only the properties it defines.

One common example of this is connection strings and application paths. Depending on the environment, each file will have its own settings.

The environments need to be defined upfront as well. There will always be a development and release environment. There may be an option to create another environment called QA on another machine somewhere, so an appsettings.qa.json file would be required with its own environment-specific settings.

Confirm that these settings have been saved for each relevant environment since they are important in a CI/CD pipeline. These environment settings should always be checked into version control with your solution/project to assist the pipeline in deploying the right settings to the right environment.
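
To see how this layering works, here’s a minimal sketch of what WebApplication.CreateBuilder already wires up for you behind the scenes; the explicit AddJsonFile calls are shown purely for illustration and aren’t needed in a real Program.cs:

var builder = WebApplication.CreateBuilder(args);

// CreateBuilder registers these sources in this order by default:
// the base file first, then the environment-specific file, which
// overrides only the properties it defines.
builder.Configuration
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddJsonFile($"appsettings.{builder.Environment.EnvironmentName}.json",
        optional: true, reloadOnChange: true);

var app = builder.Build();
app.Run();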

In this section, we covered ways to prepare your code for a CI/CD pipeline: making sure we can build immediately after cloning or pulling the repository down locally, avoiding relative file paths, and using environment-specific application settings, making it easy to build and deploy our application.

With your code checked in, we can now move forward and describe all of the stages of a common pipeline.

Understanding the Pipeline

In this section, we’ll cover the steps of what a common pipeline includes for building software when using a CI/CD service. When you reach the end of this section, you’ll understand every step of the process in a common pipeline so that you can produce quality software.

A CI pipeline is a collection of steps required to code, build, test, and deploy software. Each step is not owned by a particular person but by a team working together and focusing on the goal to produce exceptional software. The good news is that if you followed the previous chapter’s recommendations, you’re already ahead of the game.

Each company’s pipeline can vary from product to product, but there will always be a common set of steps for a CI process. It depends on how detailed your pipeline becomes based on your needs. The stages in the pipelines can be influenced by each stakeholder involved in the process. Of course, pulling code and building and testing are required for the developers, but a QA team requires the finished product (artifact) to be sent to another server for test purposes.

Figure 2.1 shows one common pipeline:

Figure 2.1 – One example of a build pipeline

As shown in Figure 2.1, the process is sequential when creating a software deployment. Here’s a summary of the steps:

  1. Pull code from a single repository.
  2. Build the application.
  3. Run unit tests/code analysis against the code that was built in step 2.
  4. Create the artifacts.
  5. Create a container (optional).
  6. Deploy the artifact(s) to a server (development/QA/staging/production).

Now that we’ve defined a common pipeline, let’s dig deeper into each step to learn what each process includes when you’re building your software.

In the following subsections, we’ll examine each process in detail based on the steps defined here.

Pulling Code

Before we build the application, we need to identify the project we’re building in our pipeline. The pipeline service requires a repository location. Once you’ve provided the repository URL, the service can prepare the repository for compilation on their server.

In the previous section, we mentioned why your code needs to compile flawlessly after cloning. The code is cloned and built on a completely different machine from yours. If the application only works on your computer and no one else’s, as the saying goes, “We’ll have to ship your computer to all of our users.” While this is a humorous saying in the industry, it’s generally frowned upon when writing and deploying software in the real world.

Each of the DevOps services has its benefits. For example, Azure Pipelines can examine your repository and make assumptions based on the structure of your project.

After analyzing the project, it uses a file format called YAML (pronounced Ya-mel) to define how the project should be built. While YAML is now considered a standard in the industry, we won’t deep-dive into everything YAML encompasses. YAML functionality could be a book on its own.

Azure takes your project’s assumptions and creates a YAML template describing how it should build your application.

It knows how to compile the application, identify whether a container is included in the project, and also retrieve NuGet packages before performing the build.

One last thing to mention is that most DevOps services allow one repository per project. The benefits of this approach include the following:

  • Simplicity: It’s simpler to manage and build one application as opposed to orchestrating hundreds of applications in a project.
  • Collaboration: Instead of multiple teams focusing on one large project, it’s easier to have one or two smaller teams working on a single, more manageable project.
  • Faster builds: CI/CD pipelines are meant to provide fast feedback and even faster improvement. The smaller the project, the faster a build, test, and deployment will occur.

With that said, we are now ready to build the application.

Building the application

As mentioned previously, YAML files define how the service proceeds with building your application.

It’s always a good practice to confirm the YAML file contains everything you need before building. If you have a simple project, the boilerplate included in the wizard may be all you need, but the wizard allows you to make updates in case additional files or other application checks are required.

It may take a couple of attempts to massage the YAML file, but once you get the file in a stable state, it’s great to see everything work as expected.

Make sure you have retrieved all your code before building the application. If this step fails, the process kicks out of the pipeline.

If you checked in bad code and the build fails, the proper authorities (developers or administrators) will be notified based on the alert level and you’ll be given the dunce hat or the stuffed monkey for breaking the build until someone else breaks it.

Next, we’ll focus on running unit tests and other tests against the application.

Running Unit Tests/Code Analysis

Once the build is done, we can move forward with the unit tests and/or code analysis.

Unit tests should run against the compiled application. This includes unit tests and integration tests, but as we mentioned previously, be wary of integration tests. The pipeline services may not have access to certain resources, causing your tests to fail.

Unit tests, by nature, should be extremely fast. Why? Because you don’t want to wait 30 minutes for unit tests to run (which is painful). If you have unit tests taking that long, identify the longest-running ones and refactor them.

Once the code has been compiled and loaded, the entire unit test suite should finish within 10-30 seconds as a general guideline, since unit tests are memory-based.

While unit and integration tests are common in most testing scenarios, there are additional checks you can add to your pipeline, which include identifying security issues and code metrics to generate reports at the end of your build.

Next, our build creates artifacts to be used for deployments.

Creating Artifacts

Once the build succeeds and all of the tests pass, the next step is to create an artifact of our build and store it in a central location.

As a general rule, it’s best to only create your binaries once. Once they’ve been built, they’re available at a moment’s notice. These artifacts can deploy a version to a server on a whim without going through the entire build process again.

The artifacts should be tamper-proof and never be modified by anyone. If there is an issue with the artifact, the pipeline should start from the beginning and create a new artifact.

Let’s move on to containers.

Creating a Container

Once you have created the self-contained artifact, an optional step is to build a container around it or install the artifact in the container. While most enterprises use various platforms and environments, such as Linux or Windows, “containerizing” an application with a tool such as Docker allows it to run on any platform while isolating the application.

With containers considered a standard in the industry, it makes sense to create a container so that it can easily be deployed to any platform, such as Azure, Amazon Web Services (AWS), or Google Cloud Platform. Again, this is an optional step, but it’s becoming an inevitable one in the industry.

When creating a new project with Visual Studio, you automatically get a container wrapper through a generated Dockerfile. This Dockerfile defines how the container will allow access to your application.

Once you’ve added the Dockerfile to your project, Azure identifies this as a container project and creates the container with the included project.

Lastly, we’ll examine deploying the software.

Deploying the software

Once everything has been generated, all we need to do is deploy the software.

Remember the environment settings in your appsettings.json file? This is where they come in handy for deployments.

Based on your environment, you can assign a task to merge the appropriate environment JSON file into the appsettings.json file on deployment.

Once you have your environment settings in order, you can define the destinations of your deployments any way you like.

Deployments can range from FTP-ing or WebDeploy-ing the artifact to pushing the container to a server somewhere. All of these options are available out of the box.

However, you must deploy the same way to every environment. The only thing that changes is the appsettings file.

After a successful (or unsuccessful) deployment, a report or notification should be sent to everyone involved in the deployment’s outcome.

In this section, we learned what a common pipeline includes and how each step relies on a successful previous step. If one step fails throughout the pipeline, the process immediately stops. This “conveyor belt” approach to software development provides repeatable steps, quality-driven software, and deployable software.

The Two “Falling” Approaches

In this section, we’ll learn about two ways to recover from a failed software deployment. After finishing this section, you’ll know how to use these two approaches to make a justified decision on recovering from a bad deployment.

In a standard pipeline, companies sometimes experience software glitches when deploying to a web server. Users may see an error message when they perform an action on the website.

What do you do when the software doesn’t work as expected? How does this work in the DevOps pipeline?

Every time you build software, there’s always a chance something could go wrong. You always need a backup plan before the software is deployed.

Let’s cover the two types of recovery methods we can use when software deployments don’t succeed.

Falling Backward (or fallback)

If various bugs were introduced into the product and the previous version doesn’t appear to have these errors, it makes sense to revert the software or fall back to the previous version.

In a pipeline, the final step of the process creates artifacts, which are self-contained, deployable versions of your product.

Here is an example of falling backward:

  1. Your software deployment was a success last week and was marked as version 1.1 (v1.1).
  2. Over 2 weeks, development created two new features for the software and wanted to release them as soon as possible.
  3. A new build was created and released called version 1.3 (v1.3).
  4. While users were using the latest version (v1.3), they experienced issues with one of the new features, causing the website to show errors.
  5. Since the previous version (v1.1) doesn’t have this issue and the impact is not severe, developers can redeploy v1.1 to the server so that users can continue to be productive again.

This type of release is called falling backward.

If you have to replace a current version (v1.3) with a previous version (v1.1) (except for databases, which I’ll cover in a bit), you can easily identify and deploy the last-known artifact.

Falling Forward

If the fallback approach isn’t a viable recovery strategy, the alternative is to fall forward.

When falling forward, the product team accepts the deployment with errors (warts and all) and continues to move forward with newer releases, placing a high priority on those errors and acknowledging that they will be fixed in the next or a future release.

Here is a similar example of falling forward:

  1. Again, a software deployment was successful last week and was marked as version 1.5 (v1.5).
  2. Over another 2 weeks, development created another new large feature for the software.
  3. A new build was created and released called version 1.6 (v1.6).
  4. While users were using the latest version (v1.6), they experienced issues with one of the new features, causing the website to show errors.
  5. After analysis, the developers realized this was a “quick fix,” created the proper unit tests to show it was fixed, pushed a new release through the pipeline, and immediately deployed the fixed code in a new release (v1.7).

This type of release is called falling forward.

The product team may have to examine each error and make a decision as to which recovery method is the best approach for the product’s reputation.

For example, if product features such as business logic or user interface updates are the issue, the best recovery method may be to fall forward, since the impact on the system is minimal and a user’s workflow is not interrupted.

However, if code and database updates are involved, the better approach would be to fall back – that is, restore the database and use a previous version of the artifact.

If it’s a critical feature and reverting is not an option, a “hotfix” approach (as mentioned in the previous chapter) may be required to patch the software.

Again, it depends on the impact each issue has left on the system as to which recovery strategy is the best approach.

In this section, we learned about two ways to recover from unsuccessful software deployments: falling backward and falling forward. While neither option is a mandatory choice, each approach should be weighed heavily based on the error type, the recovery time of the fix, and the software’s deployment schedule.

Deploying Databases

Deploying application code is one thing, but deploying databases can be a daunting task if not done properly. There are two pain points when deploying databases: structure and records.

With a database’s structure, you have the issue of adding, updating, and removing columns/fields from tables, along with updating the corresponding stored procedures, views, and other table-related functions to reflect the table updates.

With records, the process isn’t as tricky as changing a table’s structure. Records aren’t updated as frequently, but when they are, that’s when you either want to seed a database with default records or update those seed records with new values.

The following sections will cover some common practices when deploying databases in a CI/CD pipeline.

Backing up Before Deploying

Since company data is essential to a business, it’s mandatory to back it up before making any modifications or updates to the database.

One recommendation is to make the entire database deployment a two-step process: back up the database, then apply the database updates.

The DevOps team can include a pre-deployment script to automatically back up the database before applying the database updates. If the backup was successful, you can continue deploying your changes to the database. If not, you can immediately stop the deployment and determine the cause of failure.

As discussed in the previous section, this is necessary for a “fallback” approach instead of a “fall forward” strategy.

Creating a Strategy for Table Structures

One strategy for updating a table is to take a non-destructive approach:

  • Adding a column: When adding columns, place a default value on the column for when a record is created. This will prevent the application from erroring out and notifying the user that a field didn’t have a value or is required.
  • Updating/renaming a column: Updating a column is a little different because you may be changing a data type or value in the database. If you’re changing the column name and/or type to something else, add a new column with the new column type, make sure you default the value, and proceed to use it in your application code. Once the code is solid and is performing as expected, remove the old column from the table and then from your code.
  • Removing a column: There are several different ways to handle this process. If the field was created with a default value, make the appropriate changes in your application code to stop using the column. When records are added to the table, the default value won’t create an error. Once the application code has been updated, rename the column in the table instead of deleting it. If your code is still using it, you’ll be able to identify the code issue and fix it. Once your code is running without error, it’ll be safe to remove the column from your table.

While making the appropriate changes to table structures, don’t forget about updating the additional database code to reflect the table changes, including stored procedures, views, and functions.
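
To make the non-destructive approach concrete, here’s a hedged sketch of adding a column with a default value, expressed as an Entity Framework Core migration (covered later in this chapter); the Orders table and Status column are hypothetical:

using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddStatusToOrders : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Non-destructive: existing rows receive the default value,
        // so records created by older application code won't error out.
        migrationBuilder.AddColumn<string>(
            name: "Status",
            table: "Orders",
            nullable: false,
            defaultValue: "Pending");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(name: "Status", table: "Orders");
    }
}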

Creating a Database Project

If your Visual Studio solution connects to a database, there’s another project type you should add to your solution: the Database Project. When you add this project to your solution, it takes a snapshot of your database and adds it to your project as code.

Why include this in your solution? There are three reasons:

  1. It provides a database schema as T-SQL when you create a database from scratch.
  2. It allows you to version your database, in keeping with the Infrastructure as Code (IaC) paradigm.
  3. When you’re building your solution in Visual Studio, it automatically generates a DAC file from your Database Project for deployment with the option to attach a custom script. With the DAC included in your solution, the pipeline can deploy and update the database with the DAC file first. Once the database deployment (and backup) is finished, the pipeline can deploy the artifact.

As you can see, it’s pretty handy to include with your solution.

Using Entity Framework Core’s Migrations

Entity Framework has come a long way since its early days. Migrations are another way to include database changes through C# as opposed to T-SQL.

Upon creating a migration, Entity Framework Core takes a snapshot of the database and DbContext and creates the delta between the database schema and DbContext using C#.

With the initial migration, C# code for the entire schema is generated in an Up() method.

Any subsequent migrations will contain an Up() method and a Down() method for upgrading and downgrading the database, respectively. This allows developers to save their database delta changes, along with their code changes.

Entity Framework Core’s migrations are an alternative to using DACs and custom scripts. These migrations can perform database changes based on the C# code.

If you require seed records, then you can use Entity Framework Core’s .HasData() method for easily creating seed records for tables.
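
As a brief sketch of seeding (the Category entity is hypothetical), seed records are declared in your DbContext and included in the next migration you create:

using Microsoft.EntityFrameworkCore;

public class Category
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
}

public class AppDbContext : DbContext
{
    public DbSet<Category> Categories => Set<Category>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // HasData() bakes these rows into the migration, keeping
        // seed records under version control alongside your schema.
        modelBuilder.Entity<Category>().HasData(
            new Category { Id = 1, Name = "Kitchen" },
            new Category { Id = 2, Name = "Living Room" });
    }
}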

In this section, we learned how to prepare our database deployment by always creating a backup, looked at a common strategy for adding, updating, and deleting table fields, and learned how to deploy databases in a CI/CD pipeline using either a DAC or Entity Framework Core’s migrations.

The Three Types of Build Providers

Now that we’ve learned how a standard pipeline works, in this section, we’ll look at the different types of pipeline providers.

The three types of providers are on-premises, off-premises, and hybrid.

On-premises (meaning on-site) relates to the software you own, which you can use to build your product at your company’s location. An advantage of on-premises build services is that once you purchase the software, you own it; there isn’t a subscription fee. So, if there’s a problem with the build server, you can easily look at the software locally to identify and fix the problem.

Off-premises (or cloud) providers are the more common services used nowadays. Since everyone wants everything yesterday, they’re quicker to set up and usually offer an immediate way to create a software pipeline.

As you can guess, hybrid services are a mix of on-premises and off-premises services. Some companies like to keep control of certain aspects of software development and send the artifacts to a remote server for deployment purposes.

While hybrid services are an option, it makes more sense to use off-premises services for automated software builds.

In this section, we learned about three types of providers: on-premises, off-premises, and hybrid services. While these services are used in various companies, the majority of companies lean toward off-premises (or cloud) services to automate their software builds.

CI/CD Providers

In this section, we’ll review a current list of providers on the internet to help you automate your builds. While there are other providers available, these are considered the industry standard among developers.

Since we are targeting ASP.NET Core, rest assured, each of these providers supports ASP.NET Core in its build processes and deployments.

Microsoft Azure Pipelines

Since Microsoft created ASP.NET Core, it only makes sense to mention its off-premises cloud offerings. It does offer on-premises and hybrid support as well. Azure Pipelines provides the most automated support for ASP.NET Core applications and deployment mechanisms to date.

While Azure is considered one of the biggest cloud providers in the world, I consider Azure Pipelines a small component under the Azure moniker.

Important note

You can learn more about Azure Pipelines here: https://azure.microsoft.com/en-us/products/devops/pipelines/.

GitHub Actions

After Microsoft purchased GitHub in June of 2018, GitHub released its own automation pipeline, GitHub Actions, in October of the same year.

Since GitHub is a provider of all things source code-related, GitHub Actions was considered an inevitable step toward making code deployable.

After signing up to Actions, you’ll notice the screens are very “Azure-ish” and provide a very similar interface when you’re building software pipelines.

Important note

You can learn more about GitHub Actions here: https://github.com/features/actions.

Amazon CodePipeline

With Amazon commanding a large lead in the e-commerce landscape with its Amazon Web Services (AWS) offering, it also provides automated pipelines for developers.

Its pipelines are broken down into categories:

  • CodeCommit: For identifying source code repositories
  • CodeArtifact: A centralized location for build artifacts
  • CodeBuild: A dedicated service for building your product based on updates in your repository, which are defined in CodeCommit
  • CodeDeploy: For managing environments for deploying software
  • CodePipeline: The glue that holds it all together

You can pick and choose the services you need based on your requirements. Amazon CodePipeline is similar to most cloud services, where you can use one service or all of them.

Important note

You can learn more about Amazon CodePipeline here: https://aws.amazon.com/codepipeline/.

Google CI

The final cloud provider is none other than Google CI. Google CI also provides the tools required to perform automated builds and deployments.

Google CI provides similar tools, such as Artifact Registry, source repositories, Cloud Build, and even private container registries.

As mentioned previously, once you understand how one cloud provider works, you’ll start to see similar offerings in other cloud providers.

Important note

You can learn more about Google CI here: https://cloud.google.com/solutions/continuous-integration.

In this section, we examined four CI/CD cloud providers: Microsoft’s Azure Pipelines, GitHub Actions, Amazon’s CodePipeline, and Google’s CI. Any one of these providers is a suitable candidate for creating an ASP.NET Core pipeline.

Walkthrough of Azure Pipelines

With everything we’ve discussed so far, this section will take us through a standard pipeline with a web application every developer should be familiar with: the ASP.NET Core web application.

If you have a web application of your own, you’ll be able to follow along and make the modifications to your web application as well.

In this section, we’ll demonstrate what a pipeline consists of by considering a sample application and walking through all of the components that will make it a successful build.

Preparing the Application

Before we move forward, we need to confirm whether the application in our version control is ready for a pipeline:

  • Does the application clone and compile without errors?
  • Do all the unit tests that accompany the application pass?
  • Do you have the correct environment settings in your application? (For example, appsettings.json, appsettings.qa.json, and so on.)
  • Will you deploy this application to a Docker container? If so, confirm you have a Dockerfile in the root of your application.

Again, the Dockerfile is optional, but most companies include one since they have numerous environments running on different operating systems. We’ll include the Dockerfile in our web application to complete the walkthrough.

Once everything has been confirmed in our checklist, we can move forward and create our pipeline.

Introducing Azure Pipelines

Azure Pipelines is a free service for developers to use to automate, test, and deploy their software to any platform.

Since Azure is user-specific, you’ll have to log in to your Azure Pipelines account or create a new one at https://azure.microsoft.com/en-us/products/devops/pipelines/. Don’t worry – it’s free to sign up and create pipelines:

  1. To continue with this walkthrough, click on the Start free with GitHub button, as shown in Figure 2.2:
Figure 2.2 – The Azure Pipelines web page

Once you’ve logged in to Azure Pipelines, you are ready to create a project.

  2. Click New Project in the top right-hand corner. Enter details for Project Name and Description and determine whether it’s Private or Public.
  3. Upon clicking Create, we need to define which repository to use in our pipeline.

Identifying the Repository

We haven’t designated a repository for Azure Pipelines to use yet. So, we need to import an existing repository:

  1. If you click on any option under Files, you’ll notice a message saying <YourProjectNameHere> is empty. Add some code! Sounds like solid advice.
  2. Click on the Import button under the Import a repository section, as shown in Figure 2.3:
Figure 2.3 – Importing a repository

  3. Clicking on the Import button will result in a side panel popping out, asking where your source code is located. Currently, the only options are Git and Team Foundation Version Control (TFVC).
  4. Since the code for DefaultWebApp is in Git, I copied the clone URL and pasted it into the text box, and then clicked the Import button at the bottom of the side panel, as shown in Figure 2.4:
Figure 2.4 – Identifying the repository Azure Pipelines will use

Azure Pipelines will proceed to import the repository. The next screen will be the standard Explorer view everyone is used to seeing, with a tree view on the left of your repository and a detailed list of files from the current directory on the right-hand side.

With that, we have finished importing the repository into Azure Pipelines.

Creating the Build

Now that we’ve imported our repository, Azure Pipelines makes this process extremely easy for us by adding a button called Set up build, as shown in Figure 2.5:

Figure 2.5 – Imported repository with a “Set up build” button as the next step

As vast as Azure Pipelines’ features can be, there are several preset templates to use for your builds. Each template pertains to a particular project type in the .NET ecosystem, along with some not-so-common project types as well:

  1. For our purposes, we’ll select the ASP.NET Core (.NET Framework) option.
  2. After the Configure step in our wizard (shown at the top of the page), we will come to the Review step, where we can examine the YAML file.
  3. With that said, you aren’t excluded from adding tasks at any time. There is a Show Assistant option to help you add new tasks to your existing YAML file.

For the DefaultWebApp example, we don’t need to update our YAML file because we don’t have any changes to make; we want something very simple to create our build. The default YAML file looks like this:

# ASP.NET Core (.NET Framework)
# Build and test ASP.NET Core projects targeting the full .NET Framework.
# Add steps that publish symbols, save build artifacts, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core
trigger:
- master
pool:
  vmImage: 'windows-latest'
variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'
steps:
- task: NuGetToolInstaller@1
- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'
- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactStagingDirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'
- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

This new file that Azure Pipelines created is called azure-pipelines.yml. So, where does this new azure-pipelines.yml file reside when it’s created? It’s committed to the root of your repository. Once we’ve confirmed everything looks good in the YAML file, we can click the Save and run button.

Once you’ve done this, a side panel will appear, asking you for a commit message and optional description, as well as to specify options on whether to commit directly to the master branch or create a new branch for this commit. Once you’ve clicked the Save and run button at the bottom of the side panel, it will commit your new YAML file to your repository and execute the pipeline immediately.

Creating the Artifacts

Once the build is running, you’ll see something similar to Figure 2.6:

Figure 2.6 – Queueing up our DefaultWebApp build process

As shown at the bottom of the preceding screenshot, my job’s status is Queued. Once it’s out of the queue and executing, you can watch the build’s progress by clicking on Job next to the blue clock at the bottom.

In terms of DefaultWebApp, this is what the build process looks like, as seen in Figure 2.7:

Figure 2.7 – The build progress of DefaultWebApp

Congratulations! You have created a successful pipeline and artifact.

For the sake of not writing an entire book on Azure Pipelines, next, we will move on to creating releases.

Creating a Release

With a completed and successful build, we can now focus on releasing our software. Follow these steps:

  1. If you click on Releases, you’ll see we need to create a new release pipeline. Click the New Pipeline button.
  2. Immediately, you’ll see a side panel appear with a list of templates you can choose from. Select Empty job at the top of the side panel, as shown in Figure 2.8:
Figure 2.8 – Selecting an empty job template

Releases use a concept called Stages, where your software can pass through several stages before it reaches the final one. These stages can be synonymous with environments, such as development, QA, staging, and production. Once one stage has been approved (development), the release moves to the next stage (QA), and so on until the final one, which is usually production. However, these stages can get extremely complicated.

  3. After you click the Apply button, you will see another side panel where you can define your stage. Since we are simply deploying the website, we’ll call this the Push to Site stage.
  4. After entering your Stage name (that just doesn’t sound right), click the X button to close the side panel and examine the pipeline.

As shown in Figure 2.9, we need to add an artifact:

Figure 2.9 – The Push to Site stage is defined, but there’s no artifact

  5. When you click Add an Artifact, another side panel will slide open and ask you to add the artifact. Since we created an artifact in the previous subsection, we can populate all of our inputs with the DefaultWebApp project and source, as shown in Figure 2.10:
Figure 2.10 – Adding the DefaultWebApp artifact to our release pipeline

  6. Click Add to add your artifact to the pipeline.

Deploying the Build

Once we have defined our stages, we can attach certain deployment conditions, both before and after, to each stage. The ability to define post-deployment approvals, gates, and auto-redeploy triggers is possible but disabled by default for each stage.

In any stage, you can add, edit, or remove any task you want by clicking on the “x job, x tasks” link under each stage’s name, as shown in Figure 2.11:

Figure 2.11 – Stages allow you to add any number of tasks

Each stage has an agent job, which can perform any number of tasks. The list of tasks to choose from is mind-numbing. If you can think of it, there is a task for it.

For example, we can deploy a website using Azure, IIS Web Deploy, or even a simple file copy from one directory to another. Want to FTP the files over to a server? Click on the Utility tab and find FTP Upload.

Each task you add has parameters per topic and can easily be modified to suit a developer’s requirements.

In this section, we covered how to create a pipeline by preparing the application to meet certain requirements. We introduced Azure Pipelines by logging in and adding our sample project, identified the repository we’ll be using in our pipeline, and created the build. Once we’d done this, we found our artifacts, created a release, and deployed the build.

Summary

In this chapter, we identified ways to prepare our code for a CI/CD pipeline so that we can build flawlessly, avoid relative path names with file-based operations, confirm our unit tests are unit tests, and create environment settings for our application. Once our code was ready, we examined what’s included in a common CI/CD pipeline, including a way to pull the code, build it, run unit tests with optional code analysis, create artifacts, wrap our code in a container, and deploy an artifact.

We also covered two ways to recover from a failed deployment using a fall-back or fall-forward approach. Then, we discussed common ways to prepare for deploying a database, which includes backing up your data, creating a strategy for modifying tables, adding a database project to your Visual Studio solution, and using Entity Framework Core’s migrations so that you can use C# to modify your tables.

We also reviewed the three types of CI/CD providers: on-premises, off-premises, and hybrid providers, with each one suited to different company needs, and then examined four cloud providers who offer full pipeline services: Microsoft’s Azure Pipelines, GitHub Actions, Amazon’s CodePipeline, and Google’s CI.

Finally, we learned how to create a sample pipeline by preparing the application so that it meets certain requirements, logging in to Azure Pipelines and defining our sample project, identifying the repository we’ll be using in our pipeline, and creating the build. Once the build was complete, it generated our artifacts, and we learned how to create a release and deploy the build.

In the next chapter, we’ll learn about some of the best approaches for using middleware in ASP.NET Core.

Left arrow icon Right arrow icon
Download code icon Download Code

Key benefits

  • Get to grips with standard guidelines for every phase of the SDLC, encompassing pre-coding, coding, and post-coding stages
  • Build high-quality software by employing industry best practices throughout the development process
  • Apply proven techniques to improve your coding, debugging, and deployment processes for websites
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

As .NET 8 emerges as a long-term support (LTS) release designed to assist developers in migrating legacy applications to ASP.NET, this best practices book becomes your go-to guide for exploring the intricacies of ASP.NET and advancing your skills as a software engineer, full-stack developer, or web architect. This book will lead you through project structure and layout, setting up robust source control, and employing pipelines for automated project building. You’ll focus on ASP.NET components and gain insights into their commonalities. As you advance, you’ll cover middleware best practices, learning how to handle frontend tasks involving JavaScript, CSS, and image files. You’ll examine the best approach for working with Blazor applications and familiarize yourself with controllers and Razor Pages. Additionally, you’ll discover how to leverage Entity Framework Core and exception handling in your application. In the later chapters, you’ll master components that enhance project organization, extensibility, security, and performance. By the end of this book, you’ll have acquired a comprehensive understanding of industry-proven concepts and best practices to build real-world ASP.NET 8.0 websites confidently.

Who is this book for?

This book is for developers who have working knowledge of ASP.NET and want to advance in their careers by learning best practices followed in developer communities or corporate environments. Beginners can use this book as a springboard for integrating best practices into their learning journey, and as a reference to gain clarity on advanced ASP.NET topics at a later time.

What you will learn

  • Explore the common IDE tools used in the industry
  • Identify the best approach for organizing source control, projects, and middleware
  • Uncover and address top web security threats, implementing effective strategies to protect your code
  • Optimize Entity Framework for faster query performance using best practices
  • Automate software through continuous integration/continuous deployment
  • Gain a solid understanding of the .NET Core coding fundamentals for building websites
  • Harness HtmlHelpers, TagHelpers, ViewComponents, and Blazor for component-based development
Estimated delivery fee Deliver to Malta

Premium delivery 7 - 10 business days

€32.95
(Includes tracking information)

Product Details

Country selected
Publication date, Length, Edition, Language, ISBN-13
Publication date : Dec 29, 2023
Length: 256 pages
Edition : 1st
Language : English
ISBN-13 : 9781837632121
Languages :
Tools :

What do you get with Print?

Product feature icon Instant access to your digital eBook copy whilst your Print order is Shipped
Product feature icon Paperback book shipped to your preferred address
Product feature icon Download this book in EPUB and PDF formats
Product feature icon Access this title in our online reader with advanced features
Product feature icon DRM FREE - Read whenever, wherever and however you want
Product feature icon AI Assistant (beta) to help accelerate your learning
OR
Modal Close icon
Payment Processing...
tick Completed

Shipping Address

Billing Address

Shipping Methods
Estimated delivery fee Deliver to Malta

Premium delivery 7 - 10 business days

€32.95
(Includes tracking information)

Product Details

Publication date : Dec 29, 2023
Length: 256 pages
Edition : 1st
Language : English
ISBN-13 : 9781837632121
Languages :
Tools :

Packt Subscriptions

See our plans and pricing
Modal Close icon
€18.99 billed monthly
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Simple pricing, no contract
€189.99 billed annually
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Choose a DRM-free eBook or Video every month to keep
Feature tick icon PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
Feature tick icon Exclusive print discounts
€264.99 billed in 18 months
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Choose a DRM-free eBook or Video every month to keep
Feature tick icon PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
Feature tick icon Exclusive print discounts

Frequently bought together


Stars icon
Total 112.97
ASP.NET 8 Best Practices
€29.99
Apps and Services with .NET 8
€37.99
C# 12 and .NET 8 – Modern Cross-Platform Development Fundamentals
€44.99
Total 112.97 Stars icon

Table of Contents

13 Chapters
Chapter 1: Taking Control with Source Control Chevron down icon Chevron up icon
Chapter 2: CI/CD – Building Quality Software Automatically Chevron down icon Chevron up icon
Chapter 3: Best Approaches for Middleware Chevron down icon Chevron up icon
Chapter 4: Applying Security from the Start Chevron down icon Chevron up icon
Chapter 5: Optimizing Data Access with Entity Framework Core Chevron down icon Chevron up icon
Chapter 6: Best Practices with Web User Interfaces Chevron down icon Chevron up icon
Chapter 7: Testing Your Code Chevron down icon Chevron up icon
Chapter 8: Catching Exceptions with Exception Handling Chevron down icon Chevron up icon
Chapter 9: Creating Better Web APIs Chevron down icon Chevron up icon
Chapter 10: Push Your Application with Performance Chevron down icon Chevron up icon
Chapter 11: Appendix Chevron down icon Chevron up icon
Index Chevron down icon Chevron up icon
Other Books You May Enjoy Chevron down icon Chevron up icon

Customer reviews

Top Reviews
Rating distribution
Full star icon Full star icon Full star icon Full star icon Half star icon 4.8
(15 Ratings)
5 star 80%
4 star 20%
3 star 0%
2 star 0%
1 star 0%
Filter icon Filter
Top Reviews

Filter reviews by




Mike D. Jan 08, 2024
Full star icon Full star icon Full star icon Full star icon Full star icon 5
ASP.NET 8 Best Practices contains a variety of best practices related to .NET development with some general software development best practices. If someone is just starting in software development, there are more things to learn about source control than just what is mentioned in this book, but it isn’t set out to be a book on source control either. As a developer who has been in the industry for a while, there were still some things that I could get from this book. I’ve mostly worked on the .NET Framework, but not as much with the newer versions of .NET. Some of the concepts in this book only exist in the newer versions of .NET, so this book is also a good reference to learn those new features.
Amazon Verified review Amazon
Alex saavedra Jan 06, 2024
Full star icon Full star icon Full star icon Full star icon Full star icon 5
This book is an indispensable resource for any .NET developer eager to enhance their skills or delve into the language. Excellently crafted by Jonathan R. Danylko, it simplifies complex concepts into a straightforward approach complemented by practical examples, making it ideal for both beginners and seasoned professionals.What sets this book apart is its emphasis on best practices in ASP.NET development. It's not just about code samples; the author delves deep into the rationale behind each best practice, offering critical insights for creating robust .NET web applications. As you progress through the book, you gain a comprehensive understanding of ASP.NET, making it an invaluable guide for developers committed to mastering their skills. It is a great resource to refer to when you want to refresh your knowledge or want to improve your current code.
Amazon Verified review Amazon
Brett Mar 12, 2024
Full star icon Full star icon Full star icon Full star icon Full star icon 5
This a great resource for most people working in Net 8, applicable to more than just ASP. Definitely recommend to those starting, or soon starting, their first development jobs. The book has a lot of general best practices that newcomers could really benefit from, and the inclusion of testing makes it all the better.
Amazon Verified review Amazon
Cory Harkins Jan 08, 2024
Rating: 5/5
ASP.NET 8 Best Practices is the go-to handbook for modern ASP.NET engineering and proficient suggestions in the space. From start to finish, there isn't a boring page in this book. It's taken me a while to read it, but that's on me, not the book. Nearly every aspect of a project is covered here, from setup, middleware, and security to speed and efficiency improvements, and much, much more.
Amazon Verified review
jk Dec 29, 2023
Rating: 5/5
ASP.NET 8 Best Practices is an excellent reference for developers seeking to improve their ASP.NET applications. In each chapter, the author starts by defining terms and explaining concepts, then goes on to provide detailed code examples, techniques, and procedures. This approach makes the book very accessible, while also providing concrete, advanced examples for more senior practitioners. I appreciated the availability of code samples, which are easily accessible through the book's GitHub repo. I also really enjoyed how the author added sprinkles of humor throughout the book. The book's checklist of best practices in each chapter is a particularly helpful feature, providing guidance to both novice and experienced developers in designing their code. In addition, the author recommends other information sources for deep dives into specific topics. In summary, this book is an excellent tool for developers seeking to review ASP.NET applications holistically. Its clear explanations, detailed code examples, and helpful best practice checklists make it a valuable reference for developers of all levels. I plan to use this book during my code reviews going forward!
Amazon Verified review

FAQs

What is the delivery time and cost of a print book?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable.

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days.
Add one extra business day for deliveries to Northern Ireland and the Scottish Highlands and islands.

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P.O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for interstate metro areas.
Delivery time is up to 15 business days for remote areas of WA, NT, and QLD.

Premium: Delivery to addresses in Australia only.
Trackable delivery to most P.O. Boxes and private residences in Australia within 4-5 days of dispatch, depending on the distance to the destination.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM UK time on a business day start printing the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM UK time on a business day, or at any time over the weekend, begin printing the second business day after. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders: a tax imposed on imported goods. These duties are collected by authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for my print book order?

Orders shipped to countries listed under the EU27 will not incur customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to countries outside the EU27, a customs duty or localized taxes may apply and will be charged by the recipient country. These must be paid by the customer and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions such as weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% (in this case, $9.50) to the courier service to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (in this case, €3.96) to the courier service to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, then once you receive it, you can contact us at customercare@packt.com using the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., Packt Publishing agrees to replace your printed book because it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an item (eBook, video, or print book) incorrectly or accidentally, please contact our Customer Relations Team at customercare@packt.com within one hour of placing the order, and we will replace the item or refund the item cost.
  2. If your eBook or video file is faulty, or a fault occurs while it is being made available to you (i.e., during download), contact our Customer Relations Team at customercare@packt.com within 14 days of purchase, and they will resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund for one book from a multi-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt with appropriate evidence of the damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, videos, and subscriptions they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following methods:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal