Real-World Web Development with .NET 9: Build websites and services using mature and proven ASP.NET Core MVC, Web API, and Umbraco CMS

Mark J. Price

eBook, 1st Edition, December 2024, 578 pages

Introducing Web Development Using Controllers

This book is about mature and proven web development with .NET. This means a set of technologies that have been refined over a decade or more with plenty of documentation, support forums, and third-party investment.

These technologies are:

  • ASP.NET Core: A set of shared components for building websites and services.
  • ASP.NET Core MVC: An implementation of the model-view-controller design pattern for complex yet well-structured website development.
  • ASP.NET Core Web APIs: For building controller-based web services that conform to the HTTP/REST service conventions.
  • ASP.NET Core OData: For building data access web services using an open standard.
  • Umbraco CMS: A third-party, open source, Content Management System (CMS) platform built on ASP.NET Core.

With these technologies, you will learn how to build cross-platform websites and web services using .NET 8 or .NET 9, the two actively supported versions of .NET.

You can choose either because some of the newer features that we will learn about, like the HybridCache class, have backward compatibility with .NET 8. Others, like the new MapStaticAssets method that optimizes files like stylesheets and JavaScript, only work with .NET 9. I will warn you in these cases.
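As a concrete illustration, here is a minimal Program.cs sketch (not one of the book's projects) that uses both features. AddHybridCache requires the Microsoft.Extensions.Caching.Hybrid NuGet package and works on .NET 8 or .NET 9; MapStaticAssets is .NET 9 only, so on .NET 8 you would call UseStaticFiles instead:

```csharp
// Sketch: Program.cs for a default ASP.NET Core MVC project.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllersWithViews();

// HybridCache: works on .NET 8 and .NET 9 (needs the
// Microsoft.Extensions.Caching.Hybrid package).
builder.Services.AddHybridCache();

var app = builder.Build();

// MapStaticAssets: .NET 9 only; serves pre-compressed,
// fingerprinted static files. On .NET 8, use app.UseStaticFiles().
app.MapStaticAssets();

app.MapDefaultControllerRoute()
   .WithStaticAssets();

app.Run();
```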

The benefit of choosing .NET 8 is that it is a Long-Term Support (LTS) release, meaning it is supported for three years. .NET 8 will reach its end of life in November 2026.

The benefit of choosing the latest .NET 9 is significant performance improvements and better support for containerization for cloud hosting compared to earlier versions. .NET 9 will reach its end of life in May 2026.

Throughout this book, I use the term modern .NET to refer to .NET 9 and its predecessors, like .NET 6, that derive from .NET Core. I use the term legacy .NET to refer to .NET Framework, Mono, Xamarin, and .NET Standard. Modern .NET is a unification of those legacy platforms and standards.

Who are you? While writing this book, I have assumed that you are a .NET developer who is employed by a consultancy or a large organization. As such, you primarily work with mature and proven technologies like MVC rather than the newest shiny technologies pushed by Microsoft, like Blazor. I also assume that you have little professional interest in being a web designer or content editor.

I recommend that you work through this and subsequent chapters sequentially because later chapters will reference projects in earlier chapters, and you will build up sufficient knowledge and skills to tackle the more challenging problems in later chapters. For example, the last section in this chapter will walk you through creating a pair of class libraries that define a database entity model that will be used in all subsequent chapters.

In this chapter, we will cover the following topics:

  • Understanding ASP.NET Core
  • Structuring projects and managing packages
  • Making good use of the GitHub repository for this book
  • Building an entity model for use in the rest of the book

Warning! Prerequisites for this book are knowledge of C# and .NET fundamentals, and I assume you have already set up your development environment to use Visual Studio 2022, Visual Studio Code, or JetBrains Rider. Throughout this book, I will use the names Visual Studio, VS Code, and Rider to refer to these three code editors respectively. If you have not set up your development environment, then you can learn how at the following link:

https://github.com/markjprice/web-dev-net9/blob/main/docs/ch01-setup-dev-env.md

Understanding ASP.NET Core

To understand ASP.NET Core, it is useful to first see where it came from.

A brief history of ASP.NET Core

ASP.NET Core is part of an almost 30-year history of Microsoft technologies for building websites and services that work with data. These technologies have evolved over the decades:

  • ActiveX Data Objects (ADO) was released in 1996 and was Microsoft’s attempt to provide a single set of Component Object Model (COM) components for working with data. With the release of .NET Framework in 2002, an equivalent was created named ADO.NET, which is still the fastest way to work with data in .NET today, with its core classes DbConnection, DbCommand, and DbDataReader. ORMs like EF Core use ADO.NET internally.
  • Active Server Pages (ASP) was released in 1996 and was Microsoft’s first attempt at a platform for dynamic server-side execution of website code. ASP files contain a mix of HTML and code that executes on the server written in the VBScript language.
  • ASP.NET Web Forms was released in 2002 with .NET Framework and was designed to enable non-web developers, such as those familiar with Visual Basic, to quickly create websites by dragging and dropping visual components and writing event-driven code in Visual Basic or C#, as shown in Figure 1.1. Web Forms is not available on modern .NET and it should be avoided for new web projects even with .NET Framework due to limitations on cross-platform compatibility and modern development practices.
  • Windows Communication Foundation (WCF) was released in 2006 and enables developers to build SOAP and REST services. SOAP is powerful but complex, so it should be avoided in new projects unless you need advanced features, such as distributed transactions and complex messaging topologies. SOAP is still widely used in existing enterprise solutions, so you may come across it. I would be interested in hearing from you about this, since I am considering adding a chapter in a future edition of this book if there is enough interest.
  • ASP.NET MVC was released in 2009 to cleanly separate the concerns of web developers between the models, which temporarily store the data; the views, which present the data using various formats in the UI; and the controllers, which fetch the model and pass it to a view. This separation enables improved reuse and unit testing, and fits more naturally with web development without hiding the reality with an additional complex layer of event-driven user interface.
  • ASP.NET Web API was released in 2012 and enables developers to create HTTP services (a.k.a. REST services) that are simpler and more scalable than SOAP services.
  • ASP.NET SignalR was released in 2013 and enables real-time communication for websites by abstracting underlying technologies and techniques, such as WebSockets and long polling. This enables website features such as live chat or updates to time-sensitive data such as stock prices across a wide variety of web browsers, even when they do not support an underlying technology such as WebSockets.
  • ASP.NET Core was released in 2016 and combines modern implementations of .NET Framework technologies such as MVC, Web API, and SignalR with alternative technologies such as Razor Pages, gRPC, and Blazor, all running on modern .NET. Therefore, ASP.NET Core can execute cross-platform. ASP.NET Core has many project templates to get you started with its supported technologies. Over the past decade, the ASP.NET Core team has greatly improved performance and reduced memory footprint to make it the best platform for cloud computing. In some ways, Blazor is a return to Web Forms-style user interface development, as shown in Figure 1.1:

Figure 1.1: Evolution of web user interface technologies in .NET

Good Practice: Choose ASP.NET Core to develop websites and web services because it includes web-related technologies that are mature, proven, and cross-platform.

Classic ASP.NET versus modern ASP.NET Core

Until modern .NET, ASP.NET was built on top of a large assembly in .NET Framework named System.Web.dll and it was tightly coupled to Microsoft’s Windows-only web server named Internet Information Services (IIS). Over the years, this assembly has accumulated a lot of features, many of which are not suitable for modern cross-platform development.

ASP.NET Core is a major redesign of ASP.NET. It removes the dependency on the System.Web.dll assembly and IIS and is composed of modular lightweight packages, just like the rest of modern .NET. Using IIS as the web server is still supported by ASP.NET Core, but there is a modern option.

You can develop and run ASP.NET Core applications cross-platform on Windows, macOS, and Linux. Microsoft has even created a cross-platform, super-performant web server named Kestrel.

Kestrel is mostly open source: its source code is developed in the open as part of the ASP.NET Core repository on GitHub. However, it depends on some underlying components and infrastructure that are not fully open source:

  • Some lower-level networking optimizations and APIs in Windows, which Kestrel can take advantage of, are not open source. For example, some of the advanced socket APIs are part of Windows’ closed-source infrastructure.
  • While the .NET runtime is largely open source, there are some proprietary components or dependencies—especially when running on Windows—that are not open source. This would include some optimizations and integrations specific to Microsoft’s cloud infrastructure or networking stack that are baked into Kestrel’s performance characteristics when running on Windows.
  • If you’re using Kestrel hosted in Azure, some integration points, telemetry, and diagnostic services are proprietary. For example, Azure-specific logging, application insights, and security features (though not strictly part of Kestrel itself) are not fully open source.

Also, note that a non-open source alternative to Kestrel is HTTP.sys. This is a Windows-specific HTTP server and it is closed source. Applications can use HTTP.sys for edge cases requiring Windows authentication or other Windows-specific networking features, but this is outside of Kestrel itself.
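As a hedged sketch of what opting in to HTTP.sys looks like (assuming a default ASP.NET Core project running on Windows), the UseHttpSys extension method replaces Kestrel as the server:

```csharp
// Sketch: using HTTP.sys instead of Kestrel (Windows only).
using Microsoft.AspNetCore.Server.HttpSys;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.UseHttpSys(options =>
{
  // Windows authentication is a common reason to choose HTTP.sys.
  options.Authentication.Schemes =
    AuthenticationSchemes.NTLM | AuthenticationSchemes.Negotiate;
  options.Authentication.AllowAnonymous = true;
});

var app = builder.Build();
app.MapGet("/", () => "Hello from HTTP.sys");
app.Run();
```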

Building websites using ASP.NET Core

Websites are made up of multiple web pages loaded statically from the filesystem or generated dynamically by a server-side technology such as ASP.NET Core. A web browser makes GET requests using Uniform Resource Locators (URLs) that identify each page and can manipulate data stored on the server using POST, PUT, and DELETE requests.

With many websites, the web browser is treated as a presentation layer, with almost all the processing performed on the server side. Some JavaScript might be used on the client side to implement form validation warnings and some presentation features, such as carousels.

ASP.NET Core provides multiple technologies for building the user interface for websites:

  • ASP.NET Core Razor Pages is a simple way to dynamically generate HTML for simple websites.
  • ASP.NET Core MVC is an implementation of the Model-View-Controller (MVC) design pattern that is popular for developing complex websites. Microsoft’s first implementation of MVC on .NET was in 2009, so it is more than 15 years old now. Its APIs are stable, it has plentiful documentation and support, and many third parties have built powerful products and platforms on top of it and controller-based Web APIs. MVC is designed to work with the HTTP request/response model instead of hiding it so that you are encouraged to embrace the nature of web development rather than pretending it doesn’t exist, which can store up worse problems in the future.
  • Blazor lets you build user interface components using C# and .NET instead of a JavaScript-based UI framework like Angular, React, and Vue. Early versions of Blazor required a developer to choose a hosting model. The Blazor WebAssembly hosting model runs your code in the browser like a JavaScript-based framework would. The Blazor Server hosting model runs your code on the server and updates the web page dynamically using SignalR. Introduced with .NET 8 is a unified, full-stack hosting model that allows individual components to execute either on the server or client side, or even to adapt dynamically at runtime.
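To make the MVC separation of concerns above concrete, here is a minimal sketch; the Product model and ProductsController are invented for illustration, not taken from the book's projects:

```csharp
// A minimal, hypothetical sketch of MVC's separation of concerns.
using Microsoft.AspNetCore.Mvc;

public class Product // Model: temporarily stores the data.
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class ProductsController : Controller // Controller: fetches the model.
{
    // Responds to GET /Products/Detail/1.
    public IActionResult Detail(int id)
    {
        Product model = new() { Id = id, Name = "Sample product" };
        // Passes the model to the view at Views/Products/Detail.cshtml,
        // which presents the data in the UI.
        return View(model);
    }
}
```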

So which should you choose?

“Blazor is now our recommended approach for building web UI with ASP.NET Core, but neither MVC nor Razor Pages are now obsolete. Both MVC & Razor Pages are mature, fully supported, and widely used frameworks that we plan to support for the foreseeable future. There is also no requirement or guidance to migrate existing MVC or Razor Pages apps to Blazor. For existing, well-established MVC-based projects, continuing to develop with MVC is a perfectly valid and reasonable approach.” – Dan Roth

You can see the original comment post at the following link:

https://github.com/dotnet/aspnetcore/issues/51834#issuecomment-1913282747

Dan Roth is the Principal Product Manager on the ASP.NET team, so he knows the future of ASP.NET Core better than anyone else:

https://devblogs.microsoft.com/dotnet/author/danroth27/

I agree with the quote by Dan Roth. For me, there are two main choices:

  • For real-world websites and web services using mature and proven web development, choose controller-based ASP.NET Core MVC and Web API. For even more productivity, you can layer on top third-party platforms, for example, a .NET CMS like Umbraco. All these technologies are covered in this book.
  • For websites and web services using modern web development, choose Blazor for the web user interface and Minimal APIs for the web service. Choosing these is more of a risk because their APIs are still changing because they are relatively new. These technologies are covered in my other books, C# 13 and .NET 9 – Modern Cross-Platform Development Fundamentals and Apps and Services with .NET 8.

Much of ASP.NET Core is shared across these two choices anyway, so you will only need to learn about those shared components once, as shown in Figure 1.2:

Figure 1.2: Modern or mature controller-based (and shared) ASP.NET Core components

JetBrains did a survey of 26,348 developers from all around the world and asked about web development technologies and ASP.NET Core usage by .NET developers. The results showed that most .NET developers still use mature and proven controller-based technologies like MVC and Web API. The newer technologies like Blazor were far behind. A chart from the report is shown in Figure 1.3:

Figure 1.3: The State of Developer Ecosystem 2023 – ASP.NET Core

It is also interesting to see which JavaScript libraries and cloud host providers are used by .NET developers. For example, 18% use React, 15% use Angular, and 9% use Vue, and all have dropped by a few percent since the previous year. I speculate that this is due to a shift to Blazor instead. For cloud hosting, 24% use Azure, and 12% use AWS. This makes sense for .NET developers since Microsoft puts more effort into supporting .NET developers on its cloud platform.

More Information: You can read more about the JetBrains report, The State of Developer Ecosystem 2023, and see the results of the ASP.NET Core question at https://www.jetbrains.com/lp/devecosystem-2023/csharp/#csharp_asp_core.

In summary, C# and .NET can be used on both the server side and the client side to build websites, as shown in Figure 1.4:

Figure 1.4: The use of C# and .NET to build websites on both the server- and client-side

To summarize what’s new in ASP.NET Core 9 for its mature and proven controller-based technologies, let’s end this section with another quote from Dan Roth:

“We’re optimizing how static web assets are handled for all ASP.NET Core apps so that your files are pre-compressed as part of publishing your app. For API developers we’re providing built-in support for OpenAPI document generation.” – Dan Roth

Comparison of file types used in ASP.NET Core

It is useful to summarize the file types used by these technologies because they are similar but different. If the reader does not understand some subtle but important differences, it can cause much confusion when trying to implement their own projects. Please note the differences in Table 1.1:

Technology                                  Special filename  File extension  Directive
Razor View (MVC)                                              .cshtml
Razor Layout                                                  .cshtml
Razor View Start                            _ViewStart        .cshtml
Razor View Imports                          _ViewImports      .cshtml
Razor Component (Blazor)                                      .razor
Razor Component (Blazor with page routing)                    .razor          @page "<path>"
Razor Component Imports (Blazor)            _Imports          .razor
Razor Page                                                    .cshtml         @page

Table 1.1: Comparison of file types used in ASP.NET Core

Directives like @page are added to the top of a file’s contents.
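For example, a minimal Razor Page (a hypothetical Pages/About.cshtml, not one of the book's files) begins with the @page directive:

```cshtml
@* The @page directive at the top is what makes this file a Razor Page. *@
@page
@{
  ViewData["Title"] = "About";
}
<h1>@ViewData["Title"]</h1>
```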

If a file does not have a special filename, then it can be named anything. For example, you might create a Razor View named Customer.cshtml, or you might create a Razor Layout named _MobileLayout.cshtml.

The naming convention for shared Razor files like layouts and partial views is to prefix with an underscore _. For example, _ViewStart.cshtml, _Layout.cshtml, or _Product.cshtml (this might be a partial view for rendering a product).

A Razor Layout file like _MyCustomLayout.cshtml is identical to a Razor View. What makes the file a layout is being set as the Layout property of another Razor file, as shown in the following code:

@{
  Layout = "_MyCustomLayout"; // File extension is not needed.
}

Warning! Be careful to use the correct file extension and directive at the top of the file or you will get unexpected behavior.

Building websites using a content management system

Most websites have a lot of content, and if developers had to be involved every time some content needed to be changed, that would not scale well. Almost no real-world website built with .NET only uses ASP.NET Core. A professional .NET web developer therefore needs to learn about other platforms built on top of ASP.NET Core.

A Content Management System (CMS) enables developers or CMS administrators to define content structure and templates to provide consistency and good design while making it easy for a non-technical content owner to manage the actual content. They can create new pages or blocks of content, and update existing content, knowing it will look great for visitors with minimal effort.

There are a multitude of CMSs available for all web platforms, like WordPress for PHP or Django for Python. CMSs that support modern .NET include Optimizely Content Cloud, Umbraco, Piranha, and Orchard Core.

The key benefit of using a CMS is that it provides a friendly content management user interface. Content owners log in to the website and manage the content themselves. The content is then rendered and returned to visitors using ASP.NET Core MVC controllers and views, or via web service endpoints, known as a headless CMS, to provide that content to “heads” implemented as mobile or desktop apps, in-store touchpoints, or clients built with JavaScript frameworks or Blazor.

This book covers the world’s most popular .NET CMS, Umbraco, in Chapter 13, Web Content Management Using Umbraco, and Chapter 14, Customizing and Extending Umbraco. The quantifiable evidence—usage statistics from BuiltWith, GitHub activity, download numbers, community engagement, and search trends—all points to Umbraco as the most popular .NET-based CMS worldwide. You can see a list of almost 100,000 websites built using Umbraco at the following link:

https://trends.builtwith.com/websitelist/Umbraco/Historical

Umbraco is open source and hosted on GitHub. It has over 2.7k forks and 4.4k stars on its main repository, found at the following link:

https://github.com/umbraco/Umbraco-CMS

The active developer community and constant updates indicate its popularity among developers. Umbraco has reported more than six million downloads of its CMS, which is a significant metric compared to competitors in the .NET CMS space.

More Information: You can learn more about alternative .NET CMSs in the GitHub repository at https://github.com/markjprice/web-dev-net9/blob/main/docs/book-links.md#net-content-management-systems.

Building web applications using SPA frameworks

Web applications are often built using technologies known as Single-Page Application (SPA) frameworks, such as Blazor, Angular, React, Vue, or a proprietary JavaScript library. They can make requests to a backend web service to get more data when needed and post updated data using common serialization formats such as XML and JSON. The canonical examples are Google web apps like Gmail, Maps, and Docs.

With a web application, the client side uses JavaScript frameworks or Blazor to implement sophisticated user interactions, but most of the important processing and data access still happens on the server side because the web browser has limited access to local system resources.

JavaScript is loosely typed and is not designed for complex projects, so most JavaScript libraries these days use TypeScript, which adds strong typing to JavaScript and is designed with many modern language features for handling complex implementations.

The .NET SDK has project templates for JavaScript and TypeScript-based SPAs, but we will not spend any time learning how to build JavaScript and TypeScript-based SPAs in this book.

If you are interested in building SPAs with an ASP.NET Core backend, Packt publishes other books on those topics.

Building web and other services

In this book, you will learn how to build a controller-based web service using ASP.NET Core Web API, and then how to call that web service from an ASP.NET Core MVC website.
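As a taste of what a controller-based web service looks like, here is a minimal, hypothetical sketch; WeatherController is invented for illustration, and the [controller] token maps the route to api/weather:

```csharp
// A minimal, hypothetical controller-based Web API sketch.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class WeatherController : ControllerBase
{
    // Responds to GET api/weather with a JSON array of strings.
    [HttpGet]
    public IEnumerable<string> Get() => new[] { "sunny", "cloudy", "rainy" };
}
```

An MVC website could then call such an endpoint with HttpClient, which is the pattern this book builds up to.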

There are no formal definitions, but services are sometimes described based on their complexity:

  • Service: All functionality needed by a client app in one monolithic service.
  • Microservice: Multiple services that each focus on a smaller set of functionalities. They are often deployed using containerization, which we will cover in Chapter 8, Configuring and Containerizing ASP.NET Core Projects.
  • Nanoservice: A single function provided as a service. Unlike services and microservices that are hosted 24/7/365, nanoservices are often inactive until called upon to reduce resources and costs.

Cloud providers and deployment tools

These days, websites and web services are often deployed to cloud providers like Microsoft Azure or Amazon Web Services. Hundreds of different tools are used to perform the deployments, like Azure Pipelines or Octopus Deploy.

Cloud providers and deployment tools are out-of-scope for this book because there are too many choices and I don’t want to force anyone to learn about or pay for cloud hosting that they will never use for their own projects.

Instead, this book covers containerization using Docker in Chapter 8, Configuring and Containerizing ASP.NET Core Projects. Once you have containerized an ASP.NET Core project, it is easy to deploy it to any cloud provider using any deployment or production management tool.

Structuring projects and managing packages

How should you structure your projects? In this book, we will build multiple projects using different technologies that work together to provide a single solution.

With large, complex solutions, it can be difficult to navigate through all the code. So, the primary reason to structure your projects is to make it easier to find components. It is good to have an overall name for your solution that reflects the application or solution.

We will build multiple projects for a fictional company named Northwind. We will name the solution MatureWeb and use the name Northwind as a prefix for all the project names.

There are many ways to structure and name projects and solutions, for example, using a folder hierarchy as well as a naming convention. If you work in a team, make sure you know how your team does it.

Structuring projects in a solution

It is good to have a naming convention for your projects in a solution so that any developer can tell what each one does instantly. A common choice is to use the type of project, for example, class library, console app, website, and so on.

Since you might want to run multiple web projects at the same time, and they will be hosted on a local web server, we need to differentiate each project by assigning different port numbers for their endpoints for both HTTP and HTTPS.

Commonly assigned local port numbers are 5000 for HTTP and 5001 for HTTPS. We will use a numbering convention of 5<chapter>0 for HTTP and 5<chapter>1 for HTTPS. For example, for an ASP.NET Core MVC website project that we will create in Chapter 2, we will assign 5020 for HTTP and 5021 for HTTPS.
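The port assignments live in each project's Properties/launchSettings.json. This is a sketch of what the Chapter 2 MVC project's profile might look like; profile names vary by template and code editor:

```json
{
  "profiles": {
    "https": {
      "commandName": "Project",
      "launchBrowser": true,
      "applicationUrl": "https://localhost:5021;http://localhost:5020"
    }
  }
}
```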

We will therefore use the following project names and port numbers, as shown in Table 1.2:

Name                    Ports                  Description
Northwind.Common        N/A                    A class library project for common types like interfaces, enums, classes, records, and structs that are used across multiple projects.
Northwind.EntityModels  N/A                    A class library project for common EF Core entity models. Entity models are often used on both the server and client side, so it is best to separate dependencies on specific database providers.
Northwind.DataContext   N/A                    A class library project for the EF Core database context, with dependencies on specific database providers.
Northwind.UnitTests     N/A                    An xUnit test project for the solution.
Northwind.Mvc           HTTP 5020, HTTPS 5021  An ASP.NET Core project for complex websites that uses a mixture of static HTML files and MVC Razor Views.
Northwind.WebApi        HTTP 5090, HTTPS 5091  An ASP.NET Core project for a Web API, a.k.a. HTTP service. A good choice for integrating with websites because any .NET app, JavaScript library, or Blazor client can interact with the service.

Table 1.2: Example project names for various project types

Structuring folders in a project

In ASP.NET Core projects, organizing the project structure is vital for maintainability and scalability. Two popular approaches are organizing by technological concerns and using feature folders.

Folder structure based on technological concerns

In this approach, folders are structured based on the type of components, such as Controllers, Models, Views, Services, and so on, as shown in the following output:

/Controllers
  ShoppingCartController.cs
  CatalogController.cs
/Models
  Product.cs
  ShoppingCart.cs
/Views
  /ShoppingCart
    Index.cshtml
    Summary.cshtml
  /Catalog
    Index.cshtml
    Details.cshtml
/Services
  ProductService.cs
  ShoppingCartService.cs

There are pros and cons to the technical concerns approach, as shown in the following list:

  • Pro – Familiarity: This structure is common and well-documented, and many sample projects use it, making it easier for developers to understand.
  • Pro – IDE support: SDKs and IDEs assume this structure and may provide better support and navigation for it.
  • Con – Scalability: As the project grows, finding related files can become difficult since they are spread across multiple folders.
  • Con – Cross-cutting concerns: Managing cross-cutting concerns like logging and validation can become cumbersome.

The .NET SDK project templates use this technological concerns approach to folder structure. This means that many organizations use it by default despite it not being the best approach for their needs.

Folder structure based on features

In this approach, folders are organized by features or vertical slices, grouping all related files for a specific feature together, as shown in the following output:

/Features
  /ShoppingCart
    ShoppingCartController.cs
    ShoppingCartService.cs
    ShoppingCart.cs
    Index.cshtml
    Summary.cshtml
  /Catalog
    CatalogController.cs
    ProductService.cs
    Product.cs
    Index.cshtml
    Details.cshtml

There are pros and cons to the feature folders approach, as shown in the following list:

  • Pro – Modularity: Each feature is self-contained, making it easier to manage and understand. Adding new features is straightforward and doesn’t affect the existing structure. Easier to maintain since related files are located together.
  • Pro – Isolation: Helps in isolating different parts of the application, promoting better testability and refactoring.
  • Con – Learning curve: Less familiar to some developers, requiring a learning curve.
  • Con – Code duplication: Potential for code duplication if not managed properly.

Feature folders are a common choice for modular monolith architecture. It makes it easier to later split the feature out into a separate project for deployment.

Feature folders align well with the principles of Vertical Slice Architecture (VSA). VSA focuses on organizing code by features or vertical slices, each slice handling a specific business capability end-to-end. This approach often includes everything from the UI layer down to the data access layer for a given feature in one place, as described in the following key points:

  • Each slice represents an end-to-end implementation of a feature.
  • VSA promotes loose coupling between features, making the application more modular and easier to maintain.
  • Each slice is responsible for a single feature or use case, which fits well with SOLID’s Single Responsibility Principle (SRP).
  • VSA allows for features to be developed, tested, and deployed independently, which is beneficial for microservices or distributed systems.

Folder structure summary

Both organizational techniques have their merits, and the choice depends on the specific needs of your project. Technological concerns organization is straightforward and familiar but can become unwieldy as the project grows. Feature folders, while potentially introducing a learning curve, offer better modularity and scalability, aligning well with the principles of VSA.

Feature folders are particularly advantageous in larger projects or those with distributed teams, as they promote better organization and isolation of features, leading to improved maintainability and flexibility in the long run.

Central Package Management

By default, with the .NET SDK CLI and most code editor-created projects, if you need to reference a NuGet package, you add the reference to the package name and version directly in the project file.

Central Package Management (CPM) is a feature that simplifies the management of NuGet package versions across multiple projects within a solution. This is particularly useful for large solutions with many projects, where managing package versions individually can become cumbersome and error-prone.

The key features and benefits of CPM include:

  • Centralized Control: CPM allows you to define package versions in a single file, typically Directory.Packages.props, which is placed in the root directory of your solution. This file centralizes the version information for all NuGet packages used across the projects in your solution.
  • Consistency: Ensures consistent package versions across multiple projects. By having a single source of truth for package versions, it eliminates discrepancies that can occur when different projects specify different versions of the same package.
  • Simplified Updates: Updating a package version in a large solution becomes straightforward. You update the version in the central file, and all projects referencing that package automatically use the updated version. This significantly reduces the maintenance overhead.
  • Reduced Redundancy: Removes the need to specify package versions in individual project files (.csproj). This makes project files cleaner and easier to manage, as they no longer contain repetitive version information.

Good Practice: It is important to regularly update NuGet packages and their dependencies to address security vulnerabilities.

Let’s set up Central Package Management for a solution that we will use throughout the rest of the chapters in this book:

  1. Create a new folder named web-dev-net9 that we will use for all the code in this book. For example, on Windows, create a folder: C:\web-dev-net9.
  2. In the web-dev-net9 folder, create a new folder named MatureWeb.
  3. In the MatureWeb folder, create a new file named Directory.Packages.props.
  4. In Directory.Packages.props, modify its contents, as shown in the following markup:
    <Project>
      <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
      </PropertyGroup>
      <ItemGroup Label="For EF Core.">
        <PackageVersion
          Include="Microsoft.EntityFrameworkCore.SqlServer"
          Version="9.0.0" />
        <PackageVersion
          Include="Microsoft.EntityFrameworkCore.Sqlite"
          Version="9.0.0" />
        <PackageVersion
          Include="Microsoft.EntityFrameworkCore.Design"
          Version="9.0.0" />
        <PackageVersion
          Include="Microsoft.EntityFrameworkCore.Tools"
          Version="9.0.0" />
      </ItemGroup>
      <ItemGroup Label="For testing.">
        <PackageVersion Include="coverlet.collector"
          Version="6.0.2" />
        <PackageVersion Include="Microsoft.NET.Test.Sdk"
          Version="17.11.1" />
        <PackageVersion Include="xunit" Version="2.9.2" />
        <!--The following package was still a preview on .NET 9 release day.-->
        <PackageVersion
          Include="xunit.runner.visualstudio"
          Version="3.0.0-pre.49" />
        <PackageVersion Include="Microsoft.Playwright" Version="1.49.0" />
        <PackageVersion
          Include="Microsoft.AspNetCore.Mvc.Testing"
          Version="9.0.0" />
      </ItemGroup>
      <ItemGroup Label="For ASP.NET Core websites.">
        <PackageVersion Include=
          "Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore"
          Version="9.0.0" />
        <PackageVersion Include=
          "Microsoft.AspNetCore.Identity.EntityFrameworkCore"
          Version="9.0.0" />
        <PackageVersion
          Include="Microsoft.AspNetCore.Identity.UI"
          Version="9.0.0" />
      </ItemGroup>
      <ItemGroup Label="For deployment.">
        <PackageVersion Include=
    "Microsoft.VisualStudio.Azure.Containers.Tools.Targets"
          Version="1.21.0" />
      </ItemGroup>
      <ItemGroup Label="For caching.">
        <!--The following package was still a preview on .NET 9 release day.-->
        <PackageVersion
          Include="Microsoft.Extensions.Caching.Hybrid"
          Version="9.0.0-preview.9.24556.5" />
      </ItemGroup>
      <ItemGroup Label="For ASP.NET Core web services.">
        <PackageVersion
          Include="Microsoft.AspNetCore.OpenApi"
          Version="9.0.0" />
        <PackageVersion
          Include="NSwag.MSBuild" Version="14.1.0" />
        <PackageVersion Include=
          "Microsoft.AspNetCore.Authentication.JwtBearer"
          Version="9.0.0" />
        <PackageVersion
          Include="Microsoft.AspNetCore.OData"
          Version="9.0.0" />
      </ItemGroup>
      <ItemGroup Label="For FastEndpoints web services.">
        <PackageVersion Include="FastEndpoints"
          Version="5.31.0" />
      </ItemGroup>
      <ItemGroup Label="For Umbraco CMS.">
        <PackageVersion Include="Umbraco.Cms"
          Version="14.3.1" />
        <PackageVersion
          Include="Microsoft.ICU.ICU4C.Runtime"
          Version="72.1.0.3" />
      </ItemGroup>
    </Project>
    

Warning! The <ManagePackageVersionsCentrally> element and its true value must go all on one line. Also, you cannot use floating wildcard version numbers like 9.0-* as you can in an individual project. Wildcards are useful to automatically get the latest patch version, for example, monthly package updates on Patch Tuesday. But with CPM you must manually update the versions.

For any projects that we add underneath the folder containing this file, we can reference the packages without explicitly specifying the version, as shown in the following markup:

<ItemGroup>
  <PackageReference
    Include="Microsoft.EntityFrameworkCore.SqlServer" />
  <PackageReference
    Include="Microsoft.EntityFrameworkCore.Design" />
</ItemGroup>

You should regularly review and update the package versions in the Directory.Packages.props file to ensure that you are using the latest stable releases with important bug fixes and performance improvements. For example, the Microsoft.Extensions.Caching.Hybrid package was still in preview on the day of .NET 9’s release when I finished final drafts. By the time you read this, it is likely to be out of preview, so update its version number.

Good Practice: I recommend that you set a monthly event in your calendar for the second Wednesday of each month. This will occur after the second Tuesday of each month, which is Patch Tuesday when Microsoft releases bug fixes and patches for .NET and related packages.

For example, in December 2024, there are likely to be new versions, so you can go to the NuGet page for each of your packages. You can then update the versions if necessary, for example, as shown in the following markup:

<ItemGroup Label="For EF Core.">
  <PackageVersion
    Include="Microsoft.EntityFrameworkCore.SqlServer"
    Version="9.0.1" />
  ...
</ItemGroup>

Before updating package versions, check for any breaking changes in the release notes of the packages. Test your solution thoroughly after updating to ensure compatibility.

Educate your team and document the purpose and usage of the Directory.Packages.props file to ensure everyone understands how to manage package versions centrally.

You can override an individual package version by using the VersionOverride attribute on a <PackageReference /> element, as shown in the following markup:

<ItemGroup>
  <PackageReference
    Include="Microsoft.EntityFrameworkCore.SqlServer"
    VersionOverride="9.0.0" />
  ...
</ItemGroup>

This can be useful if a newer version introduces a regression bug.

More Information: You can learn more about CPM at the following link:

https://learn.microsoft.com/en-us/nuget/consume-packages/central-package-management

Making good use of the GitHub repository for this book

Git is a commonly used source code management system. GitHub is a company, website, and desktop application that makes it easier to manage Git. Microsoft purchased GitHub in 2018, so it continues to get closer integration with Microsoft tools.

I created a GitHub repository for this book, and I use it for the following:

  • To store the solution code for the book that can be maintained after the print publication date.
  • To provide extra materials that extend the book, like errata fixes, small improvements, lists of useful links, and optional sections about topics that cannot fit in the printed book.
  • To provide a place for readers to get in touch with me if they have issues with the book.

    Good Practice: I strongly recommend that all readers review the errata, improvements, post-publication changes, and common errors pages before attempting any coding task in this book. You can find them at https://github.com/markjprice/web-dev-net9/blob/main/docs/errata/README.md.

Understanding the solution code on GitHub

The solution code in the GitHub repository for this book can be opened with any of the following code editors:

  • Visual Studio or Rider: Open the MatureWeb.sln solution file.
  • VS Code: Open the MatureWeb folder.

All the chapters in this book share a single solution file named MatureWeb.sln.

All the code solutions can be found at the following link:

https://github.com/markjprice/web-dev-net9/tree/main/code

If you are new to .NET development, then the GitHub repository has step-by-step instructions for three code editors (Visual Studio, VS Code, and Rider), along with additional screenshots:

https://github.com/markjprice/web-dev-net9/tree/main/docs/code-editors/

Downloading solution code from the GitHub repository

If you just want to download all the solution files without using Git, click the green Code button and then select Download ZIP, as shown in Figure 1.5:

Figure 1.5: Downloading the repository as a ZIP file

Good Practice: It is best to clone or download the code solutions to a short folder path, like C:\web-dev-net9\ or C:\book\, to avoid build-generated files exceeding the maximum path length. You should also avoid special characters like #. For example, do not use a folder name like C:\C# projects\. That folder name might work for a simple console app project but once you start adding features that automatically generate code, you are likely to have strange issues. Keep your folder names short and simple.

Using Git with VS Code and the command prompt

VS Code has integrations with Git, but it uses your operating system’s Git installation, so you must first install Git 2 or later to get these features.

You can install Git from the following link:

https://git-scm.com/download

If you like to use a GUI, you can download GitHub Desktop from the following link:

https://desktop.github.com

Cloning the book solution code repository

Let’s clone the book solution code repository. In the steps that follow, you will use the VS Code terminal, but you could enter the commands at any command prompt or terminal window:

  1. Create a folder named Repos-vscode in your User or Documents folder, or wherever you want to store your Git repositories.
  2. Open the Repos-vscode folder at the command prompt or terminal, and then enter the following command:
    git clone https://github.com/markjprice/web-dev-net9.git
    

Note that cloning all the solutions for all the chapters will take a minute or so, so please be patient.

Building an entity model for use in the rest of the book

Websites and web services usually need to work with data in a relational database or another data store. There are several technologies that could be used, from lower-level ADO.NET to higher-level EF Core. We will use EF Core since it is flexible and more familiar to .NET developers.

In this section, we will define an EF Core entity data model for a database named Northwind stored in SQL Server. It will be used in most of the projects that we create in subsequent chapters.

Northwind database SQL scripts

The script for SQL Server creates 13 tables as well as related views and stored procedures. The SQL scripts are found at https://github.com/markjprice/web-dev-net9/tree/main/scripts/sql-scripts.

There are multiple SQL scripts to choose from, as described in the following list:

  • Northwind4AzureSqlEdgeDocker.sql script: To use SQL Server on a local computer in Docker. The script creates the Northwind database. It does not drop it if it already exists because the Docker container should be empty anyway as a fresh one will be spun up each time. This is my recommendation. Instructions to install Docker and set up a SQL Edge image and container are in the next section of this book.
  • Northwind4SqlServer.sql script: To use SQL Server on a local Windows or Linux computer. The script checks if the Northwind database already exists and if necessary drops it before creating it. Instructions to install SQL Server Developer Edition (free) on your local Windows computer can be found in the GitHub repository for this book at https://github.com/markjprice/web-dev-net9/blob/main/docs/sql-server/README.md.
  • Northwind4AzureSqlDatabaseCloud.sql script: To use SQL Server with an Azure SQL Database resource created in the Azure cloud. You will need an Azure account; these resources cost money as long as they exist! The script does not drop or create the Northwind database because you should manually create the Northwind database using the Azure portal user interface. The script only creates the database objects, including the table structure and data.

Installing Docker and the Azure SQL Edge container image

Docker provides a consistent environment across development, testing, and production, minimizing the “it works on my machine” issue. Docker containers are more lightweight than traditional virtual machines, making them faster to start up and less resource-intensive.

Docker containers can run on any system with Docker installed, making it easy to move databases between environments or across different machines. You can quickly spin up a SQL database container with a single command, making setup faster and more reproducible. Each database instance runs in its own container, ensuring that it is isolated from other applications and databases on the same machine.

You can install Docker on any operating system and use a container that has Azure SQL Edge, a cross-platform, minimal version of SQL Server that includes only the database engine. For personal, educational, and small business use, Docker Desktop is free to use. It includes the full set of Docker features, including container management and orchestration. The Docker Command-Line Interface (CLI) and Docker engine are open source and free to use, allowing developers to build, run, and manage containers.

Docker also has paid tiers that offer additional features, such as enhanced security, collaboration tools, more granular access control, priority support, and higher rate limits on Docker Hub image pulls.

The Docker image we will use has Azure SQL Edge based on Ubuntu 18.04. It is supported with Docker Engine 1.8 or later. Azure SQL Edge requires a 64-bit processor (either x64 or ARM64), with a minimum of one processor and 1 GB RAM on the host:

  1. Install Docker Desktop from the following link: https://docs.docker.com/engine/install/
  2. Start Docker Desktop, which could take a few minutes on the initial start, as shown in Figure 1.6:

Figure 1.6: Docker Desktop v4.33.1 (August 2024) on Windows

  3. At the command prompt or terminal, pull down the latest container image for Azure SQL Edge, as shown in the following command:
    docker pull mcr.microsoft.com/azure-sql-edge:latest
    
  4. Wait for the image as it is downloading, as shown in the following output:
    latest: Pulling from azure-sql-edge
    a055bf07b5b0: Pull complete
    cb84717c05a1: Pull complete
    35d9c30b7f54: Downloading [========================>                          ]  20.46MB/42.55MB
    46be68282524: Downloading [============>                                      ]  45.94MB/186MB
    5eee3e29ad15: Downloading [======================================>            ]  15.97MB/20.52MB
    15bd653c6216: Waiting
    d8d6247303da: Waiting
    c31fafd6718a: Waiting
    fa1c91dcb9c8: Waiting
    1ccbfe988be8: Waiting
    
  5. Note the results, as shown in the following output:
    latest: Pulling from azure-sql-edge
    2f94e549220a: Pull complete
    830b1adc1e72: Pull complete
    f6caea6b4bd2: Pull complete
    ef3b33eb5a27: Pull complete
    8a42011e5477: Pull complete
    f173534aa1e4: Pull complete
    6c1894e17f11: Pull complete
    a81c43e790ea: Pull complete
    c3982946560a: Pull complete
    25f31208d245: Pull complete
    Digest: sha256:7c203ad8b240ef3bff81ca9794f31936c9b864cc165dd187c23c5bfe06cf0340
    Status: Downloaded newer image for mcr.microsoft.com/azure-sql-edge:latest
    mcr.microsoft.com/azure-sql-edge:latest
    

Running the Azure SQL Edge container image

Now we can run the image:

  1. At the command prompt or terminal, run the container image for Azure SQL Edge with a strong password and name the container azuresqledge, as shown in the following command:
    docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=s3cret-Ninja' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge
    

Good Practice: The password must be at least eight characters long and contain characters from three of the following four sets: uppercase letters, lowercase letters, digits, and symbols. Otherwise, the container cannot set up the SQL Edge engine and will stop working.

On Windows 11, running the container image at the command prompt failed for me. See the next section titled Running a container using the user interface for steps that worked.

  2. If your operating system firewall blocks access, then allow access.
  3. In Docker Desktop, in the Containers section, confirm that the image is running, as shown in Figure 1.7:

Figure 1.7: Azure SQL Edge running in Docker Desktop on Windows

  4. At the command prompt or terminal, ask Docker to list all containers, both running and stopped, as shown in the following command:
    docker ps -a
    
  5. Note the container is “Up” and listening externally on port 1433, which is mapped to its internal port 1433, as shown in the following output:
    CONTAINER ID   IMAGE                              COMMAND                  CREATED         STATUS         PORTS                              NAMES
    183f02e84b2a   mcr.microsoft.com/azure-sql-edge   "/opt/mssql/bin/perm…"   8 minutes ago   Up 8 minutes   1401/tcp, 0.0.0.0:1433->1433/tcp   azuresqledge
    

More Information: You can learn more about the docker ps command at https://docs.docker.com/engine/reference/commandline/ps/.

Running a container using the user interface

If you successfully ran the SQL Edge container, then you can skip this section and continue with the next section, titled Connecting to Azure SQL Edge in a Docker container.

If entering a command at the prompt or terminal fails for you, try following these steps to use the user interface:

  1. In Docker Desktop, navigate to the Images tab.
  2. In the mcr.microsoft.com/azure-sql-edge row, click the Run action.
  3. In the Run a new container dialog box, expand Optional settings, and complete the configuration, as shown in Figure 1.8 and in the following items:
    • Container name: azuresqledge, or leave blank to use a random name.
    • Ports:
      • Enter 1401 to map to :1401/tcp.
      • Enter 1433 to map to :1433/tcp.
    • Volumes: leave empty.
    • Environment variables (click + to add a second one):
      • Enter ACCEPT_EULA with value Y (or 1).
      • Enter MSSQL_SA_PASSWORD with value s3cret-Ninja.
  4. Click Run.

Figure 1.8: Running a container for Azure SQL Edge with the user interface

Connecting to Azure SQL Edge in a Docker container

Use your preferred database tool to connect to Azure SQL Edge in the Docker container. Some common database tools are shown in the following list:

  • Windows only:
    • SQL Server Management Studio (SSMS): The most popular and comprehensive tool for managing SQL Server databases. Free to download from Microsoft.
    • SQL Server Data Tools (SSDT): Integrated into Visual Studio and free to use, SSDT provides database development tools for designing, deploying, and managing SQL Server databases.
  • Cross-platform for Windows, macOS, Linux:
    • VS Code’s MS SQL extension: Query execution, IntelliSense, database browsing, and connection to SQL Server databases.
    • Azure Data Studio: A cross-platform database management tool focused on query editing, data insights, and lightweight management.

Some notes about the database connection string for SQL Edge:

  • Data Source, a.k.a. server: tcp:127.0.0.1,1433
  • You must use SQL Server Authentication, a.k.a. SQL Login. That is, you must supply a username and password. The Azure SQL Edge image has the sa user already created and you had to give it a strong password when you ran the container. We chose the password s3cret-Ninja.
  • You must select the Trust Server Certificate check box.
  • Initial Catalog, a.k.a. database: master or leave blank. (We will create the Northwind database using a SQL script so we do not specify that as the database name yet.)
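Putting those notes together, a complete connection string for the SQL Edge container would look like the following (using the sa password we chose when running the container):

```text
Data Source=tcp:127.0.0.1,1433;Initial Catalog=master;User Id=sa;Password=s3cret-Ninja;TrustServerCertificate=true;
```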

Connecting from Visual Studio

To connect to SQL Edge using Visual Studio:

  1. In Visual Studio, navigate to View | Server Explorer.
  2. In the mini-toolbar, click the Connect to Database... button.
  3. Enter the connection details, as shown in Figure 1.9:

Figure 1.9: Connecting to your Azure SQL Edge server from Visual Studio

Connecting from VS Code

To connect to SQL Edge using VS Code:

  1. In VS Code, navigate to the SQL Server extension. Note that the mssql extension might take a few minutes to initialize the first time.
  2. In the SQL extension, click Add Connection....
  3. Enter the server name tcp:127.0.0.1,1433, as shown in Figure 1.10:

Figure 1.10: Specifying the server name

  4. Leave the database name blank by pressing Enter, as shown in Figure 1.11:

Figure 1.11: Specifying the database name (leave blank)

  5. Select SQL Login, as shown in Figure 1.12:

Figure 1.12: Choosing SQL Login to authenticate

  6. Enter the user ID sa, as shown in Figure 1.13:

Figure 1.13: Entering the user ID of sa

  7. Enter the password s3cret-Ninja, as shown in Figure 1.14:

Figure 1.14: Entering the password

  8. Select Yes to save the password for the future, as shown in Figure 1.15:

Figure 1.15: Saving the password for future use

  9. Enter a connection profile name, Azure SQL Edge in Docker, as shown in Figure 1.16:

Figure 1.16: Naming the connection

  10. Click Enable Trust Server Certificate, as shown in Figure 1.17:

Figure 1.17: Trusting the local developer certificate

  11. Note the success notification message.

Creating the Northwind database using a SQL script

Now you can use your preferred code editor (or database tool) to execute the SQL script to create the Northwind database in SQL Edge:

  1. Open the Northwind4AzureSqlEdgeDocker.sql file.
  2. Execute the SQL script:
    • If you are using Visual Studio, right-click in the script, then select Execute, and then wait to see the Command completed successfully message.
    • If you are using VS Code, right-click in the script, select Execute Query, select the Azure SQL Edge in Docker connection profile, and then wait to see the Commands completed successfully message.
  3. Refresh the data connection:
    • If you are using Visual Studio, then in Server Explorer, right-click Tables and select Refresh.
    • If you are using VS Code, then right-click the Azure SQL Edge in Docker connection profile and choose Refresh.
  4. Expand Databases, expand Northwind, and then expand Tables.
  5. Note that 13 tables have been created, for example, Categories, Customers, and Products. Also note that dozens of views and stored procedures have also been created, as shown in Figure 1.18:

Figure 1.18: Northwind database created by SQL script in VS Code

You now have a running instance of Azure SQL Edge containing the Northwind database that you can connect to from your ASP.NET Core projects.

Removing Docker resources

When you have completed all the chapters in the book, or you plan to use a full SQL Server or Azure SQL Database instead of a SQL Edge container, and you want to remove all the Docker resources, then follow these steps:

  1. At the command prompt or terminal, stop the azuresqledge container, as shown in the following command:
    docker stop azuresqledge
    
  2. At the command prompt or terminal, remove the azuresqledge container, as shown in the following command:
    docker rm azuresqledge
    

Warning! Removing the container will delete all data inside it.
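If you would rather keep the data when a container is removed, a common Docker technique (not used elsewhere in this book) is to mount a named volume over the SQL data directory when you first run the container, as shown in the following sketch. The volume name azuresqledge-data is an arbitrary choice:

```text
docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' \
  -e 'MSSQL_SA_PASSWORD=s3cret-Ninja' -p 1433:1433 \
  -v azuresqledge-data:/var/opt/mssql \
  --name azuresqledge -d mcr.microsoft.com/azure-sql-edge
```

The database files then live in the volume, so removing and recreating the container with the same volume mount preserves the Northwind database.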

  3. At the command prompt or terminal, remove the azure-sql-edge image to release its disk space, as shown in the following command:
    docker rmi mcr.microsoft.com/azure-sql-edge
    

Setting up the EF Core CLI tool

The .NET CLI tool named dotnet can be extended with capabilities useful for working with EF Core. It can perform design-time tasks like creating and applying migrations from an older model to a newer model and generating code for a model from an existing database.

The dotnet-ef command-line tool is not automatically installed. You must install this package as either a global or local tool. If you have already installed an older version of the tool, then you should update it to the latest version:

  1. At a command prompt or terminal, check if you have already installed dotnet-ef as a global tool, as shown in the following command:
    dotnet tool list --global
    
  2. Check the list to see whether the tool is already installed, and if so, at what version, as shown in the following output:
    Package Id      Version     Commands
    -------------------------------------
    dotnet-ef       9.0.0       dotnet-ef
    
  3. If an old version is installed, then update the tool, as shown in the following command:
    dotnet tool update --global dotnet-ef
    
  4. If it is not already installed, then install the latest version, as shown in the following command:
    dotnet tool install --global dotnet-ef
    

If necessary, follow any OS-specific instructions to add the dotnet tools directory to your PATH environment variable, as described in the output of installing the dotnet-ef tool.
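To confirm that the tool is available on your path, you can ask it for its version, as shown in the following command:

```text
dotnet ef --version
```

If the command is not found, revisit the PATH instructions above.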

By default, the latest GA release of .NET will be used to install the tool. To explicitly set a version, for example, to use a preview, add the --version switch. For example, to update to the latest .NET 10 preview or release candidate version (that will be available from February 2025 to October 2025), use the following command with a version wildcard:

dotnet tool update --global dotnet-ef --version 10.0-*

Once the .NET 10 GA release happens in November 2025, you can just use the command without the --version switch to upgrade.

You can also remove the tool, as shown in the following command:

dotnet tool uninstall --global dotnet-ef

Creating a class library for entity models

You will now define entity data models in a class library so that they can be reused in other types of projects, including client-side app models.

Good Practice: You should create a separate class library project for your entity data models from the class library for your data context. This allows easier sharing of the entity models between backend web servers and frontend desktop, mobile, and Blazor clients, while only the backend needs to reference the data context class library.

We will automatically generate some entity models using the EF Core command-line tool:

  1. Use your preferred code editor to create a new project and solution, as defined in the following list:
    • Project template: Class Library /classlib
    • Project file and folder: Northwind.EntityModels
    • Solution file and folder: MatureWeb

You can target either .NET 8 (LTS) or .NET 9 (STS) for all the projects in this book but you should be consistent. If you choose .NET 9 for the class libraries, then choose .NET 9 for later MVC and Web API projects.

  2. In the Northwind.EntityModels project, add package references for the SQL Server database provider and EF Core design-time support, as shown in the following markup:
    <ItemGroup>
      <PackageReference
        Include="Microsoft.EntityFrameworkCore.SqlServer" />
      <PackageReference
        Include="Microsoft.EntityFrameworkCore.Design">
        <PrivateAssets>all</PrivateAssets>
        <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      </PackageReference>
    </ItemGroup>
    
  3. Delete the Class1.cs file.
  4. Build the Northwind.EntityModels project to restore packages.
  5. Make sure that the SQL Edge container is running because you are about to connect to the server and its Northwind database.
  6. At a command prompt or terminal, in the Northwind.EntityModels project folder (the folder that contains the .csproj project file), generate entity class models for all tables, as shown in the following command:
    dotnet ef dbcontext scaffold "Data Source=tcp:127.0.0.1,1433;Initial Catalog=Northwind;User Id=sa;Password=s3cret-Ninja;TrustServerCertificate=true;" Microsoft.EntityFrameworkCore.SqlServer --namespace Northwind.EntityModels --data-annotations
    

Note the following:

  • The command to perform: dbcontext scaffold
  • The connection string: "Data Source=tcp:127.0.0.1,1433;Initial Catalog=Northwind;User Id=sa;Password=s3cret-Ninja;TrustServerCertificate=true;"
  • The database provider: Microsoft.EntityFrameworkCore.SqlServer
  • The namespace: --namespace Northwind.EntityModels
  • To use data annotations as well as the Fluent API: --data-annotations

Warning! dotnet-ef commands must be entered all on one line and in a folder that contains a project, or you will see the following error: No project was found. Change the current working directory or use the --project option. Remember that all command lines can be found at and copied from the following link:

https://github.com/markjprice/web-dev-net9/blob/main/docs/command-lines.md
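To give you an idea of what the scaffolding produces, the following is a simplified sketch of one generated entity class. The member names match the actual Northwind Categories table columns, but the exact attributes, formatting, and navigation properties in your generated file may differ:

```csharp
// Simplified sketch of a scaffolded entity class; the real
// generated file will contain additional attributes.
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

namespace Northwind.EntityModels;

public partial class Category
{
  [Key] // Primary key column.
  public int CategoryId { get; set; }

  [StringLength(15)] // nvarchar(15) in the database.
  public string CategoryName { get; set; } = null!;

  [Column(TypeName = "ntext")]
  public string? Description { get; set; }

  // Navigation property to the related Products rows.
  [InverseProperty("Category")]
  public virtual ICollection<Product> Products { get; set; } = new List<Product>();
}
```

Because we passed --data-annotations, constraints like column lengths appear as attributes on the properties as well as in the Fluent API configuration in the generated NorthwindContext class.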

Creating a class library for a database context

You will now define a database context class library:

  1. Add a new project to the solution, as defined in the following list:
    • Project template: Class Library /classlib
    • Project file and folder: Northwind.DataContext
    • Solution file and folder: MatureWeb
  2. In the Northwind.DataContext project, statically and globally import the Console class, add a package reference to the EF Core data provider for SQL Server, and add a project reference to the Northwind.EntityModels project, as shown in the following markup:
    <ItemGroup Label="To simplify use of WriteLine.">
      <Using Include="System.Console" Static="true" />
    </ItemGroup>
    <ItemGroup Label="Versions are set at solution-level.">
      <PackageReference
        Include="Microsoft.EntityFrameworkCore.SqlServer" />
    </ItemGroup>
    <ItemGroup>
      <ProjectReference Include="..\Northwind.EntityModels\Northwind.EntityModels.csproj" />
    </ItemGroup>
    

Warning! The path to the project reference should not have a line break in your project file.

  3. In the Northwind.DataContext project, delete the Class1.cs file.
  4. Build the Northwind.DataContext project to restore packages.
  5. In the Northwind.DataContext project, add a class named NorthwindContextLogger.cs.
  6. Modify its contents to define a static method named WriteLine that appends a string to the end of a text file named northwindlog-<date_time>.txt on the desktop, as shown in the following code:
    using static System.Environment;
    namespace Northwind.EntityModels;
    public class NorthwindContextLogger
    {
      public static void WriteLine(string message)
      {
        string folder = Path.Combine(GetFolderPath(
          SpecialFolder.DesktopDirectory), "book-logs");
        if (!Directory.Exists(folder))
          Directory.CreateDirectory(folder);
        string dateTimeStamp = DateTime.Now.ToString(
          "yyyyMMdd_HHmmss");
        string path = Path.Combine(folder,
          $"northwindlog-{dateTimeStamp}.txt");
        StreamWriter textFile = File.AppendText(path);
        textFile.WriteLine(message);
        textFile.Close();
      }
    }
    
  7. Move the NorthwindContext.cs file from the Northwind.EntityModels project/folder to the Northwind.DataContext project/folder.

In Visual Studio Solution Explorer, if you drag and drop a file between projects, it will be copied. If you hold down Shift while dragging and dropping, it will be moved. In VS Code EXPLORER, if you drag and drop a file between projects, it will be moved. If you hold down Ctrl while dragging and dropping, it will be copied.

  8. In NorthwindContext.cs, note that the second constructor can have options passed as a parameter, which allows us to override the default database connection string in any project, such as a website, that needs to work with the Northwind database, as shown in the following code:
    public NorthwindContext(
      DbContextOptions<NorthwindContext> options)
      : base(options)
    {
    }
    
  9. In NorthwindContext.cs, in the OnConfiguring method, remove the compiler #warning about the connection string and then add statements to dynamically build a database connection string for SQL Edge in Docker, as shown in the following code:
    protected override void OnConfiguring(
      DbContextOptionsBuilder optionsBuilder)
    {
      if (!optionsBuilder.IsConfigured)
      {
        SqlConnectionStringBuilder builder = new();
        builder.DataSource = "tcp:127.0.0.1,1433"; // SQL Edge in Docker.
        builder.InitialCatalog = "Northwind";
        builder.TrustServerCertificate = true;
        builder.MultipleActiveResultSets = true;
        // Because we want to fail faster. Default is 15 seconds.
        builder.ConnectTimeout = 3;
        // SQL Server authentication.
        builder.UserID = Environment.GetEnvironmentVariable("MY_SQL_USR");
        builder.Password = Environment.GetEnvironmentVariable("MY_SQL_PWD");
        optionsBuilder.UseSqlServer(builder.ConnectionString);
        optionsBuilder.LogTo(NorthwindContextLogger.WriteLine,
          new[] { Microsoft.EntityFrameworkCore
          .Diagnostics.RelationalEventId.CommandExecuting });
      }
    }
    
  10. In the Northwind.DataContext project, add a class named NorthwindContextExtensions.cs. Modify its contents to define an extension method that adds the Northwind database context to a collection of dependency services, as shown in the following code:
    using Microsoft.Data.SqlClient; // To use SqlConnectionStringBuilder.
    using Microsoft.EntityFrameworkCore; // To use UseSqlServer.
    using Microsoft.Extensions.DependencyInjection; // To use IServiceCollection.
    namespace Northwind.EntityModels;
    public static class NorthwindContextExtensions
    {
      /// <summary>
      /// Adds NorthwindContext to the specified IServiceCollection. Uses the SqlServer database provider.
      /// </summary>
      /// <param name="services">The service collection.</param>
      /// <param name="connectionString">Set to override the default.</param>
      /// <returns>An IServiceCollection that can be used to add more services.</returns>
      public static IServiceCollection AddNorthwindContext(
        this IServiceCollection services, // The type to extend.
        string? connectionString = null)
      {
        if (connectionString is null)
        {
          SqlConnectionStringBuilder builder = new();
          builder.DataSource = "tcp:127.0.0.1,1433"; // SQL Edge in Docker.
          builder.InitialCatalog = "Northwind";
          builder.TrustServerCertificate = true;
          builder.MultipleActiveResultSets = true;
          // Because we want to fail faster. Default is 15 seconds.
          builder.ConnectTimeout = 3;
          // SQL Server authentication.
          builder.UserID = Environment.GetEnvironmentVariable("MY_SQL_USR");
          builder.Password = Environment.GetEnvironmentVariable("MY_SQL_PWD");
          connectionString = builder.ConnectionString;
        }
        services.AddDbContext<NorthwindContext>(options =>
        {
          options.UseSqlServer(connectionString);
          options.LogTo(NorthwindContextLogger.WriteLine,
            new[] { Microsoft.EntityFrameworkCore
              .Diagnostics.RelationalEventId.CommandExecuting });
        },
        // Register with a transient lifetime to avoid concurrency
        // issues with Blazor Server projects.
        contextLifetime: ServiceLifetime.Transient,
        optionsLifetime: ServiceLifetime.Transient);
        return services;
      }
    }
    
  11. Build the two class libraries and fix any compiler errors.
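A project that references Northwind.DataContext can then register the context with a single call. This is a minimal sketch assuming an ASP.NET Core project with a standard Program.cs; the real registrations appear in later chapters:

```csharp
using Northwind.EntityModels; // To use AddNorthwindContext.

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

// Use the default connection string built by the extension method...
builder.Services.AddNorthwindContext();

// ...or pass a connection string to override it, for example:
// builder.Services.AddNorthwindContext(
//   builder.Configuration.GetConnectionString("NorthwindConnection"));

WebApplication app = builder.Build();

app.Run();
```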

Setting the user and password for SQL Server authentication

If you are using SQL Server authentication, i.e., you must supply a user and password, then complete the following steps:

  1. In the Northwind.DataContext project, note the statements that set UserID and Password, as shown in the following code:
    // SQL Server authentication.
    builder.UserID = Environment
      .GetEnvironmentVariable("MY_SQL_USR");
    builder.Password = Environment
      .GetEnvironmentVariable("MY_SQL_PWD");
    
  2. Set the two environment variables at the command prompt or terminal, as shown in the following commands:
    • On Windows:
    setx MY_SQL_USR <your_user_name>
    setx MY_SQL_PWD <your_password>
    
    • On macOS and Linux:
    export MY_SQL_USR=<your_user_name>
    export MY_SQL_PWD=<your_password>
    
  3. You will need to restart any command prompts, terminal windows, and applications like Visual Studio for this change to take effect.

Good Practice: Although you could define the two environment variables in the launchSettings.json file of an ASP.NET Core project, you must then be extremely careful not to include that file in a GitHub repository! You can learn how to ignore files in Git at https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files.
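For example, a single pattern in the repository's .gitignore file will keep every launchSettings.json out of Git (the pattern shown is one common choice, not the only way to do it):

```
# Ignore launch profiles anywhere in the repository; they may contain secrets.
**/launchSettings.json
```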

Registering dependency services

You can register dependency services with different lifetimes, as shown in the following list:

  • Transient: These services are created each time they’re requested. Transient services should be lightweight and stateless.
  • Scoped: These services are created once per client request and are disposed of when the response returns to the client.
  • Singleton: These services are usually created the first time they are requested and then shared, although you can provide an instance at the time of registration too.
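The three lifetimes above can be sketched with the corresponding registration methods. The service and implementation names here (IEmailFormatter, IUnitOfWork, IClock, and so on) are hypothetical, chosen only to illustrate the pattern:

```csharp
using Microsoft.Extensions.DependencyInjection; // To use ServiceCollection.

ServiceCollection services = new();

// Transient: a new instance is constructed every time the service is requested.
services.AddTransient<IEmailFormatter, EmailFormatter>();

// Scoped: one instance per scope (per client request in ASP.NET Core).
services.AddScoped<IUnitOfWork, UnitOfWork>();

// Singleton: one shared instance for the lifetime of the application.
services.AddSingleton<IClock, SystemClock>();

// Hypothetical services used in the registrations above.
interface IEmailFormatter { }
interface IUnitOfWork { }
interface IClock { }
class EmailFormatter : IEmailFormatter { }
class UnitOfWork : IUnitOfWork { }
class SystemClock : IClock { }
```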

Introduced in .NET 8 is the ability to set a key for a dependency service. This allows multiple services to be registered with different keys and then retrieved later using that key:

builder.Services.AddKeyedSingleton<IMemoryCache, BigCache>("big");
builder.Services.AddKeyedSingleton<IMemoryCache, SmallCache>("small");
class BigCacheConsumer([FromKeyedServices("big")] IMemoryCache cache)
{
  public object? GetData() => cache.Get("data");
}
class SmallCacheConsumer(IKeyedServiceProvider keyedServiceProvider)
{
  public object? GetData() => keyedServiceProvider
    .GetRequiredKeyedService<IMemoryCache>("small")
    .Get("data");
}

In this book, you will use all three types of lifetime, but we will not need to use keyed services.

By default, a DbContext class is registered using the Scoped lifetime, meaning that multiple threads can share the same instance. But DbContext does not support multiple threads. If more than one thread attempts to use the same NorthwindContext class instance at the same time, then you will see the following runtime exception thrown: A second operation started on this context before a previous operation completed. This is usually caused by different threads using the same instance of a DbContext. However, instance members are not guaranteed to be thread-safe.

This happens in Blazor projects with components set to run on the server side because, whenever interactions on the client side happen, a SignalR call is made back to the server where a single instance of the database context is shared between multiple clients. This issue does not occur if a component is set to run on the client side.

Improving the class-to-table mapping

We will make some small changes to improve the entity model mapping and validation rules for SQL Server.

Remember that all code is available in the GitHub repository for the book. Although you will learn more by typing the code yourself, you never have to. Go to the following link and press the . (dot) key to get a live code editor in your browser: https://github.com/markjprice/web-dev-net9.

We will add a regular expression to validate that a CustomerId value is exactly five uppercase letters:

  1. In Customer.cs, add a regular expression to validate its primary key CustomerId to only allow uppercase Western characters, as shown highlighted in the following code:
    [Key]
    [StringLength(5)]
    [RegularExpression("[A-Z]{5}")]
    public string CustomerId { get; set; } = null!;
    
  2. In Customer.cs, add the [Phone] attribute to its Phone property, as shown highlighted in the following code:
    [StringLength(24)]
    [Phone]
    public string? Phone { get; set; }
    

The [Phone] attribute adds the following to the rendered HTML: type="tel". On a mobile phone, this makes the keyboard use the phone dialer instead of the normal keyboard.
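For example, the Phone property might be rendered as an input element something like the following (the exact attributes emitted by the tag helpers will vary):

```html
<input type="tel" id="Phone" name="Phone" value="" />
```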

  3. In Order.cs, decorate the CustomerId property with the same regular expression to enforce five uppercase characters.
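These validation attributes can also be exercised outside ASP.NET Core. The following is a minimal sketch using the Validator class from System.ComponentModel.DataAnnotations; CustomerSketch is a hypothetical stand-in for the real Customer entity:

```csharp
using System.ComponentModel.DataAnnotations; // To use Validator.

CustomerSketch good = new() { CustomerId = "ALFKI" };
CustomerSketch bad = new() { CustomerId = "abc12" };

bool IsValid(object model) => Validator.TryValidateObject(
  model, new ValidationContext(model), validationResults: null,
  validateAllProperties: true);

Console.WriteLine(IsValid(good)); // True: exactly five uppercase letters.
Console.WriteLine(IsValid(bad));  // False: fails the regular expression.

// Hypothetical stand-in for the real Customer entity.
class CustomerSketch
{
  [Key]
  [StringLength(5)]
  [RegularExpression("[A-Z]{5}")]
  public string CustomerId { get; set; } = null!;
}
```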

Testing the class libraries using xUnit

Several benefits of using xUnit are shown in the following list:

  • xUnit is open-source and has a strong community and active development team behind it. This makes it more likely that it will stay up to date with the latest .NET features and best practices. xUnit benefits from a large and active community, which means many tutorials, guides, and third-party extensions are available for it.
  • xUnit uses a more simplified and extensible approach compared to older frameworks. It encourages the use of custom test patterns and less reliance on setup and teardown methods, leading to cleaner test code.
  • Tests in xUnit are configured using .NET attributes, which makes the test code easy to read and understand. It uses [Fact] for standard test cases and [Theory] with [InlineData], [ClassData], or [MemberData] for parameterized tests, enabling data-driven testing. This makes it easier to cover many input scenarios with the same test method, enhancing test thoroughness while minimizing effort.
  • xUnit includes an assertion library that allows for a wide variety of assertions out of the box, making it easier to test a wide range of conditions without having to write custom test code. It can also be extended with popular assertion libraries, like FluentAssertions, that allow you to articulate test expectations with human-readable reasons.
  • By default, xUnit supports parallel test execution within the same test collection, which can significantly reduce the time it takes to run large test suites. This is particularly beneficial in continuous integration environments where speed is critical. However, if you run your tests in a memory-limited VPS (Virtual Private Server), then that impacts how much data the server can handle at any given time and how many applications or processes it can run concurrently. In this scenario, you might want to disable parallel test execution. Memory-limited VPS instances are typically used as cheap testing environments.
  • xUnit offers precise control over the test lifecycle with setup and teardown commands through the use of the constructor and destructor patterns and the IDisposable interface, as well as with the [BeforeAfterTestAttribute] for more granular control.
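As a sketch of the [Fact] and [Theory] styles described above (a hypothetical test class, not part of the book's projects), reusing this chapter's CustomerId pattern:

```csharp
using Xunit; // To use [Fact], [Theory], and Assert.

public class CustomerIdTests
{
  [Fact]
  public void FiveUppercaseLettersAreValid()
  {
    Assert.Matches("^[A-Z]{5}$", "ALFKI");
  }

  [Theory]
  [InlineData("ALFKI", true)]  // Exactly five uppercase letters.
  [InlineData("abc12", false)] // Lowercase letters and digits.
  [InlineData("ABCD", false)]  // Too short.
  public void CustomerIdPattern(string id, bool expected)
  {
    bool actual = System.Text.RegularExpressions.Regex
      .IsMatch(id, "^[A-Z]{5}$");
    Assert.Equal(expected, actual);
  }
}
```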

Now let’s build some unit tests to ensure the class libraries are working correctly.

Let’s write the tests:

  1. Use your preferred coding tool to add a new xUnit Test Project [C#] / xunit project named Northwind.UnitTests to the MatureWeb solution.
  2. In the Northwind.UnitTests project, delete the version numbers specified for the testing packages in the project file. (Visual Studio and other code editors will give errors if you have projects that should use CPM but specify their own package versions without using the VersionOverride attribute.)
  3. In the Northwind.UnitTests project, add a project reference to the Northwind.DataContext project, as shown in the following configuration:
    <ItemGroup>
      <PackageReference Include="coverlet.collector" />
      <PackageReference Include="Microsoft.NET.Test.Sdk" />
      <PackageReference Include="xunit" />
      <PackageReference Include="xunit.runner.visualstudio" />
    </ItemGroup>
    <ItemGroup>
      <ProjectReference Include="..\Northwind.DataContext
    \Northwind.DataContext.csproj" />
    </ItemGroup>
    

Warning! The project reference must go all on one line with no line break.

  4. Build the Northwind.UnitTests project to build referenced projects.
  5. Rename UnitTest1.cs to EntityModelTests.cs.
  6. Modify the contents of the file to define three tests: the first connects to the database, the second confirms there are eight categories, and the third confirms that the product with ID 1 is named Chai, as shown in the following code:
    using Northwind.EntityModels; // To use NorthwindContext.
    namespace Northwind.UnitTests;
    public class EntityModelTests
    {
      [Fact]
      public void DatabaseConnectTest()
      {
        using NorthwindContext db = new();
        Assert.True(db.Database.CanConnect());
      }
      [Fact]
      public void CategoryCountTest()
      {
        using NorthwindContext db = new();
        int expected = 8;
        int actual = db.Categories.Count();
        Assert.Equal(expected, actual);
      }
      [Fact]
      public void ProductId1IsChaiTest()
      {
        using NorthwindContext db = new();
        string expected = "Chai";
        Product? product = db.Products.Find(keyValues: 1);
        string actual = product?.ProductName ?? string.Empty;
        Assert.Equal(expected, actual);
      }
    }
    
  7. Run the unit tests:
    • If you are using Visual Studio, then navigate to Test | Run All Tests, and then view the results in Test Explorer.
    • If you are using VS Code, then in the Northwind.UnitTests project’s TERMINAL window, run the tests, as shown in the following command: dotnet test. Alternatively, use the TESTING window if you have installed C# Dev Kit.
  8. Note that the results should indicate that three tests ran and all passed, as shown in Figure 1.19:

Figure 1.19: Three successful unit tests ran

If any of the tests fail, then try to fix the issue.

Practicing and exploring

Test your knowledge and understanding by answering some questions, getting some hands-on practice, and exploring this chapter’s topics with deeper research.

Exercise 1.1 – Online material

If you have any issues with the code or content of this book, or general feedback or suggestions for me for future editions, then please read the following short article:

https://github.com/markjprice/web-dev-net9/blob/main/docs/ch01-issues-feedback.md

If you are new to web development on the client side using HTML, CSS, and JavaScript, then you can start with an online section found at the following link:

https://github.com/markjprice/web-dev-net9/blob/main/docs/ch01-web-dev.md

One of the best sites for learning client-side web development is W3Schools, found at https://www.w3schools.com/.

A summary of what’s new with ASP.NET Core 9 can be found at the following link:

https://learn.microsoft.com/en-us/aspnet/core/release-notes/aspnetcore-9.0

If you need to decide between ASP.NET Core web UIs, check this link:

https://learn.microsoft.com/en-us/aspnet/core/tutorials/choose-web-ui

You can learn about ASP.NET Core best practices at https://learn.microsoft.com/en-us/aspnet/core/fundamentals/best-practices.

Exercise 1.2 – Practice exercises

The following practice exercises help you to explore the topics in this chapter more deeply.

Troubleshooting web development

It is common to have temporary issues with web development because there are so many moving parts. Sometimes, variations of the classic “turn it off and on again” can fix these!

  1. Delete the project’s bin and obj folders.
  2. Restart the web server to clear its caches.
  3. Reboot the computer.

Exercise 1.3 – Test your knowledge

Answer the following questions:

  1. What was the name of Microsoft’s first dynamic server-side-executed web page technology and why is it still useful to know this history today?
  2. What are the names of two Microsoft web servers?
  3. What are some differences between a microservice and a nanoservice?
  4. What is Blazor?
  5. What was the first version of ASP.NET Core that could not be hosted on .NET Framework?
  6. What is a user agent?
  7. What impact does the HTTP request-response communication model have on web developers?
  8. Name and describe four components of a URL.
  9. What capabilities does Developer Tools give you?
  10. What are the three main client-side web development technologies and what do they do?

Know your webbreviations

What do the following web abbreviations stand for and what do they do?

  1. URI
  2. URL
  3. WCF
  4. TLD
  5. API
  6. SPA
  7. CMS
  8. Wasm
  9. SASS
  10. REST

Exercise 1.4 – Explore topics

Use the links on the following page to learn more details about the topics covered in this chapter:

https://github.com/markjprice/web-dev-net9/blob/main/docs/book-links.md#chapter-1---introducing-web-development-using-controllers


Key benefits

  • Master ASP.NET Core MVC, Web API, and OData for building robust web services.
  • Get hands-on experience with web testing, security, and containerization techniques.
  • Learn how to implement Umbraco CMS for content management websites.

Description

Real-World Web Development with .NET 9 equips you to build professional websites and services using proven technologies like ASP.NET Core MVC, Web API, and OData—trusted by organizations for delivering robust web applications. You’ll learn to design and build efficient web applications with ASP.NET Core MVC, creating well-structured, maintainable code that follows industry best practices. From there, you’ll focus on Web API, building RESTful services that are both secure and scalable. Along the way, you’ll also explore testing, authentication, and containerization for deployment, ensuring that your solutions are fully production ready. In the final part of the book, you will be introduced to Umbraco CMS, a popular content management system for .NET. By mastering this tool, you’ll learn how to empower users to manage website content independently. By the end of this book, you'll not only have a solid grasp of controller-based development but also the practical know-how to build dynamic, content-driven websites using a popular .NET CMS.

Who is this book for?

This book is aimed at intermediate .NET developers with a good understanding of C# and .NET fundamentals. It is ideal for developers looking to expand their skills in building professional, controller-based web applications.

What you will learn

  • Build web applications using ASP.NET Core MVC with well-structured, maintainable code
  • Develop secure and scalable RESTful services using Web API and OData
  • Implement authentication and authorization for your applications
  • Test and containerize your .NET projects for smooth deployment
  • Optimize application performance with caching and other techniques
  • Learn how to use and implement Umbraco CMS

Product Details

Publication date : Dec 20, 2024
Length: 578 pages
Edition : 1st
Language : English
ISBN-13 : 9781835880395




Table of Contents

16 Chapters

  1. Introducing Web Development Using Controllers
  2. Building Websites Using ASP.NET Core MVC
  3. Model Binding, Validation, and Data Using EF Core
  4. Building and Localizing Web User Interfaces
  5. Authentication and Authorization
  6. Performance Optimization Using Caching
  7. Web User Interface Testing Using Playwright
  8. Configuring and Containerizing ASP.NET Core Projects
  9. Building Web Services Using ASP.NET Core Web API
  10. Building Web Services Using ASP.NET Core OData
  11. Building Web Services Using FastEndpoints
  12. Web Service Integration Testing
  13. Web Content Management Using Umbraco
  14. Customizing and Extending Umbraco
  15. Epilogue
  16. Index
