Entity Framework Core Cookbook

Chapter 1. Improving Entity Framework in the Real World

In this chapter, we will cover the following topics:

  • Improving Entity Framework by using a code-first approach
  • Unit testing and mocking
  • Creating databases from code
  • Creating mock database connections
  • Implementing the repository pattern
  • Implementing the unit of work pattern

Introduction

If we were to buy the materials to build a house, would we buy the bare minimum to get four walls up and a roof, without a kitchen or a bathroom? Or would we buy enough material to build the house with multiple bedrooms, a kitchen, and multiple bathrooms?

The problem lies in how we define the bare minimum. The progression of software development has made us realize that there are ways of building software that do not require additional effort, but reap serious rewards. This is the same choice we are faced with when we decide on the approach to take with Entity Framework. We could just get it running and it would work most of the time.

Customizing and adding to it later would be difficult, but doable. There are a few things we would need to give up for this approach, the most important being control over how the code is written. We have already seen that applications grow, mature, and gain features. The only constant is that, at some point and in some way, we will push the envelope of almost every tool we lean on. The alternative is to go into development aware of the value-added benefits that cost nothing and, with that knowledge, avoid dealing with unnecessary constraints.

When working with Entity Framework, there are some paths and options available to us. There are two main workflows for working with Object-Relational Mapper (ORM) tools such as Entity Framework:

  • Database first: We start by defining our database objects and their relations, then write our classes to match them, and we bind them together
  • Code first: We start by designing our classes as Plain Old CLR Objects (POCOs) to model the concepts that we wish to represent, without caring (too much!) how they will be persisted in the database

    Note

    The model-first approach was dropped in Entity Framework Core 1.0.

While following the database-first approach, we are not concerned with the actual implementation of our classes, but merely the structures—tables, columns, keys—on which they will be persisted. In contrast, with POCOs or code first, we start by designing the classes that will be used in our programs to represent the business and domain concepts that we wish to model. This is known as Domain-Driven Design (DDD). DDD certainly includes code first, but it is much more than that.

All of these approaches will solve the problem with varying degrees of flexibility.

Starting with a database-first approach in Entity Framework means we have an existing database schema and are going to let that schema, along with its metadata, determine the structure of our business objects and domain model. This is normally how most of us start out with Entity Framework and other ORMs, though the tendency is to move toward more flexible solutions as we gain proficiency with the framework. Database first drastically reduces the amount of code that we need to write, but it also limits us to working within the structure of the generated code. The entities generated by default here are not 100% usable with WCF services, ASP.NET Web APIs, and similar technologies; just think about lazy loading and disconnected entities, for example. This is not necessarily a bad thing if we have a well-built database schema and a domain model that translates well into Data Transfer Objects (DTOs), but such a domain and database combination is a rare exception in the world of code production. Due to the lack of flexibility and the restrictions on how these objects are used, this approach is usually viewed as a short-term or small-project solution.

Modeling the domain first allows us to fully visualize the structure of the data in the application, and work in a more object-oriented manner while developing our application. Just think of this: a relational database does not understand OOP concepts such as inheritance, static members, and virtual methods, although, for sure, there are ways to simulate them in the relational world. The main reasons for the lack of adoption of this approach include the poor support for round-trip updates, and the lack of documentation on manipulating the POCO model so as to produce the proper database structure. It can be a bit daunting for developers with less experience, because they probably won't know how to get started. Historically, the database had to be created each time the POCO model changed, causing data loss when structural changes were made.

Coding the classes first allows us to work entirely in an object-oriented direction, without worrying about the structure of the database and without the restrictions that the model-first designer imposes. This abstraction gives us the ability to craft a more logically sound application that focuses on the behavior of the application rather than the data generated by it. The objects we produce are persistence ignorant: they can be serialized over any service and shared as contract objects, because they are not tied to a specific database implementation. This approach is also much more flexible, as it depends entirely on the code that we write, which allows us to translate our objects into database records without modifying the structure of our application. All of this, however, is somewhat theoretical, in the sense that we still need to worry about having primary key properties, generation strategies, and so on.

In each of the recipes presented in this book, we will follow an incremental approach, where we will start by adding the stuff we need for the most basic cases, and later on, as we make progress, we will refactor it to add more complex stuff.

Improving Entity Framework by using a code-first approach

In this recipe, we start by separating the application into a user interface (UI) layer, a data access layer, and a business logic layer. This will allow us to keep our objects separated from database-specific implementations. The objects and the implementation of the database context will use a layered approach so we can add testing to the application. The following table shows the projects used in the code-first approach and their purpose:

Project          Purpose
BusinessLogic    Stores the entities that represent business entities.
DataAccess       Classes that access data and manipulate business entities. Depends on the BusinessLogic project.
UI               User interface – the MVC application. Makes use of the BusinessLogic and DataAccess projects.
UnitTests        Unit tests. Uses both the BusinessLogic and DataAccess projects.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

Finally, xunit is the package we will be using for the unit tests, and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and that Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open the Using EF Core solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

Let's get connected to the database using the following steps:

  1. Add a new C# class named Blog with the following code to the BusinessLogic project:
    namespace BusinessLogic
    {
        public class Blog
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }
    }
  2. Create a new C# class named BlogContext with the following code in the DataAccess project:
    using Microsoft.EntityFrameworkCore;
    using BusinessLogic;
    namespace DataAccess
    {
        public class BlogContext : DbContext
        {
            private readonly string _connectionString;
            public BlogContext(string connectionString)
            {
                _connectionString = connectionString;
            }
            public DbSet<Blog> Blogs { get; set; }
            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                optionsBuilder.UseSqlServer(_connectionString);
                base.OnConfiguring(optionsBuilder);
            }
        }
    }

    Note

    For Entity Framework 6, replace the Microsoft.EntityFrameworkCore namespace with System.Data.Entity and call the base constructor of DbContext passing it the connection string.

  3. Add the following connection string to the appsettings.json file:
    {
      "Data": {
        "Blog": {
          "ConnectionString":"Server=(local)\\SQLEXPRESS; Database=Blog; Integrated Security=SSPI;MultipleActiveResultSets=true"
        }
      }
    }

    Note

    With Entity Framework 6, we would add this connection string to the Web.config file, under the connectionStrings section, with the name Blog. Of course, change the connection string to match your system settings, for example, the name of the SQL Server instance (SQLEXPRESS, in this example).

  4. In the Controllers\BlogController.cs file, modify the Index method with the following code:
    using BusinessLogic;
    using DataAccess;
    using System.Linq;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Configuration;
    namespace UI.Controllers
    {
        public class BlogController : Controller
        {
            private readonly BlogContext _blogContext;
            public BlogController(IConfiguration config)
            {
                _blogContext = new BlogContext(config["Data:Blog:ConnectionString"]);
            }
            public IActionResult Index()
            {
                var blog = _blogContext.Blogs.First();
                return View(blog);
            }
        }
    }

    Note

    For Entity Framework 6, remove the config parameter from the BlogController constructor, and initialize BlogContext with the ConfigurationManager.ConnectionStrings["Blog"].ConnectionString value.

  5. Finally, in Startup.cs, we need to register the IConfiguration service so that it can be injected into the BlogController constructor. Please add the following lines to the ConfigureServices method:
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSingleton<IConfiguration>(_ => Configuration);
    }

    Note

    Classic ASP.NET MVC (prior to ASP.NET Core, which was originally announced as ASP.NET 5) does not include a built-in Inversion of Control container, unlike ASP.NET Core. You will need to bring your own and register it with the DependencyResolver.SetResolver method, or rely on a third-party implementation.

How it works…

The blog entity is created but not mapped explicitly to a database structure. This takes advantage of convention over configuration, found in the code-first approach, wherein the properties are examined and then the table mappings are determined. This is obviously a time saver, but it is fairly limited if you have a non-standard database schema. The other big advantage of this approach is that the entity is persistence-ignorant. In other words, it has no knowledge of how it is to be stored in the database.

The BlogContext class has a few key elements to understand. The first is the inheritance from DbContext. DbContext is the code-first context class, which encapsulates all connection pooling, entity change tracking, and database interactions. We added a constructor that takes the connection string, so that the context knows where to connect.

We used the standard built-in functionality for the connection string, storing it in a text (JSON) file, but this could easily be any application setting store; one such location would be the .NET Core user secrets file. We pass the connection string when constructing the BlogContext, which lets us supply it from anywhere and keeps the context decoupled from any particular configuration store. Because Entity Framework is agnostic when it comes to data sources (it can use virtually any database server), we need to tell it to use the SQL Server provider and to connect using the supplied connection string. That is what the UseSqlServer method does.
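As a quick sketch of that flexibility (not part of the recipe's solution), the following snippet layers environment variables over the JSON file; BlogContext only ever sees the final string, regardless of which configuration source supplied it. It assumes the JSON and environment-variable configuration packages are referenced.

    using System.Linq;
    using DataAccess;
    using Microsoft.Extensions.Configuration;

    public static class ConnectionStringDemo
    {
        public static void Run()
        {
            // Later sources override earlier ones, so a machine-specific
            // environment variable can replace the value from appsettings.json
            var configuration = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .AddEnvironmentVariables()
                .Build();

            using (var context = new BlogContext(configuration["Data:Blog:ConnectionString"]))
            {
                var firstTitle = context.Blogs.Select(b => b.Title).FirstOrDefault();
            }
        }
    }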

There's more…

Approaching the use of code-first development, we have several overarching themes and industry standards that we need to be aware of. Knowing about them will help us leverage the power of this tool without falling into the pit of using it without understanding.

Convention over configuration

This is a design paradigm that says default rules dictate how an application behaves, while allowing the developer to override any of those defaults with specific behavior when needed. It lets us, as programmers, avoid a lot of configuration files or code specifying how we intended something to be used or configured. In our case, Entity Framework lets the most common behaviors rely on default conventions, removing the need for most configuration. When the behavior we want is not covered by a convention, we can easily override that convention and add the required behavior without giving up conventions everywhere else. This leaves us with a flexible and extendable system for configuring the database interaction.
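As a minimal sketch of overriding a convention (the table name and column length below are made up for illustration and are not part of the recipe's schema), the fluent API in OnModelCreating lets us state only the exceptions:

    using BusinessLogic;
    using Microsoft.EntityFrameworkCore;

    namespace DataAccess
    {
        public class ConventionOverrideContext : DbContext
        {
            public DbSet<Blog> Blogs { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                // By convention, Blog maps to a table named after the DbSet and
                // Id becomes the primary key; we only spell out the exceptions
                modelBuilder.Entity<Blog>().ToTable("tbl_Blog");
                modelBuilder.Entity<Blog>()
                    .Property(b => b.Title)
                    .HasMaxLength(200)
                    .IsRequired();
                base.OnModelCreating(modelBuilder);
            }
        }
    }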

Model-View-Controller

In our example, we use Microsoft ASP.NET MVC. We would use MVC 5 for Entity Framework 6 and .NET 4.x, and MVC Core 1 for Entity Framework Core 1 and .NET Core, and, in both cases, the Razor view engine for rendering the UI. We have provided some simple views that will allow us to focus on the solutions and the code without needing to deal with UI design and markup.

Single Responsibility Principle

One of the SOLID principles of development, the Single Responsibility Principle (SRP), states that every class should have only one reason to change. In this chapter, there are several examples of that in use, for example, the separation of model, view and controller, as prescribed by MVC.

Entities in code-first have the structure of data as their singular responsibility in memory. This means that we only need to modify the entities if the structure needs to change. By contrast, the code automatically generated by the database-first tools of Entity Framework derives your entities from base classes within the Entity Framework Application Programming Interface (API). When Microsoft updates those base classes, that introduces a second reason for the entities to change, thus violating our principle.

Provider Model

Entity Framework relies on providers for different parts of its functionality; the most important, for sure, is the one that supplies the connection to the underlying data store. Different providers exist for different data sources, from traditional relational databases such as SQL Server to non-relational ones such as Redis and Azure Table Storage. There is even one for abstracting a database purely in memory!
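A short sketch of what switching providers looks like; it assumes the Microsoft.EntityFrameworkCore.Sqlite package is installed alongside the SQL Server one, and the file name used in the comment is made up:

    using Microsoft.EntityFrameworkCore;

    namespace DataAccess
    {
        public class PortableBlogContext : DbContext
        {
            private readonly string _connectionString;
            private readonly bool _useSqlite;

            public PortableBlogContext(string connectionString, bool useSqlite = false)
            {
                _connectionString = connectionString;
                _useSqlite = useSqlite;
            }

            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                // The rest of the context is provider-agnostic; only this call changes
                if (_useSqlite)
                    optionsBuilder.UseSqlite(_connectionString);   // for example "Data Source=blog.db"
                else
                    optionsBuilder.UseSqlServer(_connectionString);
                base.OnConfiguring(optionsBuilder);
            }
        }
    }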

Testing

While we did not actively test this recipe, we layered in the abstractions to do so. All of the other recipes will be executed and presented using test-driven development, as we believe it leads to better software design and a much clearer representation of intent.

See also

In this chapter:

  • Unit testing and mocking
  • Implementing the unit of work pattern
  • Implementing the repository pattern

Unit testing and mocking

Software development is not just writing code. We also need to test it, to confirm that it does what is expected. There are several kinds of tests, and unit tests are one of the most popular. In this chapter, we will set up the unit test framework that will accompany us throughout the book. Another important concept is that of mocking; by mocking a class (or interface), we can provide a dummy implementation of it that we can use instead of the real thing. This comes in handy in unit tests, because we do not always have access to real-life data and environments, and this way, we can pretend we do.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests, and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and that Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open the Using EF Core solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. Start by adding the required NuGet packages to the UnitTests project. We'll edit and add two dependencies, the main xUnit library and its runner for .NET Core, and then set the runner command.
  2. Now, let's add a base class to the project; create a new C# class file and call it BaseTests.cs:
    using Microsoft.Extensions.Configuration;
    namespace UnitTests
    {
        public abstract class BaseTest
        {
            protected BaseTest()
            {
                var builder = new ConfigurationBuilder()
                    .AddJsonFile("appsettings.json");
                Configuration = builder.Build();
            }
            protected IConfiguration Configuration { get; private set; }
        }
    }
  3. Now, for a quick test, add a new C# file, called SimpleTest.cs, to the project, with this content:
    using Moq;
    using Xunit;
    namespace UnitTests
    {
        public class SimpleTest : BaseTest
        {
            [Fact]
            public void CanReadFromConfiguration()
            {
                var connectionString = Configuration["Data:Blog:ConnectionString"];
                Assert.NotNull(connectionString);
                Assert.NotEmpty(connectionString);
            }
            [Fact]
            public void CanMock()
            {
                var mock = new Mock<IConfiguration>();
                mock.Setup(x => x[It.IsNotNull<string>()]).Returns("Dummy Value");
                var configuration = mock.Object;
                var value = configuration["Dummy Key"];
                Assert.NotNull(value);
                Assert.NotEmpty(value);
            }
        }
    }
  4. If you want to have the xUnit runner running your unit tests automatically, you will need to set the test command as the profile to run in the project properties:
    [Screenshot: Project properties]

How it works…

We have a unit tests base class that loads configuration from an external file, in pretty much the same way as the ASP.NET Core template does. Any unit tests that we will define later on should inherit from this one.

When the runner executes, it will discover all unit tests in the project—those public concrete methods marked with the [Fact] attribute. It will then try to execute them and evaluate any Assert calls within.

The Moq framework lets you define your own implementations for any abstract or interface members that you wish to make testable. In this example, we are mocking the IConfiguration interface, and saying that any attempt to retrieve a configuration value should return a dummy value.

If you run this project, you will get the following output:

[Screenshot: Running unit tests]

There's more…

Testing to the edges of an application requires that we adhere to certain practices that allow us to shrink the untestable sections of the code. This will allow us to unit test more code, and make our integration tests far more specific.

One class under test

An important point to remember while performing unit testing is that we should only be testing a single class. The point of a unit test is to ensure that a single operation of this class performs the way we expect it to.

This is why simulating classes that are not under test is so important. We do not want the behavior of these supporting classes to affect the outcomes of unit tests for the class that is being tested.

Integration tests

Often, it is equally important to test the actual combination of your various classes to ensure they work properly together. These integration tests are valuable, but they are almost always more brittle, require more setup, and run more slowly than unit tests. We certainly need integration tests on any project of a reasonable size, but we want unit tests first.

Arrange, Act, Assert

Most unit tests can be viewed as having three parts: Arrange, Act, and Assert. Arrange is where we prepare the environment to perform the test, for instance, mocking IDbContext with dummy data and the expectation that Set will be called. Act is where we perform the action under test, and is most often a single line of code. Assert is where we ensure that the proper result was reached. Note the comments that call out these sections in the sketch below and in the tests later in this chapter; we will use them throughout the book to make it clear what each test is trying to do.
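A minimal sketch of the layout (not part of the book's test suite), reusing the same Moq setup as the CanMock test shown earlier:

    using Microsoft.Extensions.Configuration;
    using Moq;
    using Xunit;

    namespace UnitTests
    {
        public class ArrangeActAssertExample
        {
            [Fact]
            public void ConfigurationMockReturnsDummyValue()
            {
                //Arrange - prepare the environment: a mocked IConfiguration that
                //answers every key with a dummy value
                var mock = new Mock<IConfiguration>();
                mock.Setup(x => x[It.IsNotNull<string>()]).Returns("Dummy Value");

                //Act - the single operation under test
                var value = mock.Object["Any Key"];

                //Assert - verify that the expected result was reached
                Assert.Equal("Dummy Value", value);
            }
        }
    }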

Mocking

Mocking and stubbing, that is, providing pre-built implementations for the methods we want to intercept, is a very interesting topic. There are countless frameworks that provide mocking capabilities for even the most challenging scenarios, such as static methods and properties. Mocking fits nicely with unit tests because we seldom have an environment identical to the one we will deploy to, and we rarely have "real" data. Also, data changes, and we need a way to reproduce things consistently.

Creating databases from code

As we start down the code-first path, there are a couple of things that could be true. If we already have a database, then we will need to configure our objects to that schema, but what if we do not have one? That is the subject of this recipe: creating a database from the objects we declare.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests, and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and that Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open the Using EF Core solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. First, we write a unit test with the following code in a new C# file called DatabaseTest.cs, in the UnitTests project:
    using BusinessLogic;
    using Xunit;
    using DataAccess;
    namespace UnitTests
    {
        public class DatabaseTest : BaseTest
        {
            [Fact]
            public void CanCreateDatabase()
            {
                //Arrange
                var connectionString = Configuration["Data:Blog:ConnectionString"];
                var context = new BlogContext(connectionString);
                //Act
                var created = context.Database.EnsureCreated();
                //Assert
                Assert.True(created);
            }
        }
    }
  2. We will need to add a connection string for our database to the UnitTests project; we do so by providing an appsettings.json file identical to the one introduced in the previous recipe:
    {
        "Data": {
            "Blog": {
                "ConnectionString": "Server=(local)\\SQLEXPRESS;Database=Blog;Integrated Security=SSPI;MultipleActiveResultSets=true"
            }
        }
    }

    Note

    Change the connection string to match your specific settings.

  3. In the DataAccess project, we will use the C# BlogContext class that was introduced in the previous recipe:
    using Microsoft.EntityFrameworkCore;
    using BusinessLogic;
    namespace DataAccess
    {
        public class BlogContext : DbContext
        {
            private readonly string _connectionString;
            public BlogContext(string connectionString)
            {
                _connectionString = connectionString;
            }
            public DbSet<Blog> Blogs { get; set; }
            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                optionsBuilder.UseSqlServer(_connectionString);
                base.OnConfiguring(optionsBuilder);
            }
        }
    }

How it works…

Entity Framework initializes itself by calling the OnConfiguring method whenever it needs to access data; after that, it knows which database to use. The EnsureCreated method makes sure that the database either already exists or is created on the spot.
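A small variation, shown as a sketch only: pairing EnsureCreated with EnsureDeleted gives a repeatable test, because the database is dropped and recreated on every run. The method below would live in a class deriving from BaseTest, such as DatabaseTest above, and should only ever point at a disposable test database.

    [Fact]
    public void CanRecreateDatabase()
    {
        //Arrange
        var connectionString = Configuration["Data:Blog:ConnectionString"];
        var context = new BlogContext(connectionString);
        //Act
        context.Database.EnsureDeleted();                 // drop the database if it already exists
        var created = context.Database.EnsureCreated();   // then create it from the model
        //Assert
        Assert.True(created);
    }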

There's more…

When we start a greenfield project, we have that rush of happiness that comes from working in a problem domain that no one has touched before. This can be exhilarating and daunting at the same time. The objects we define and the structure of our program come naturally to a programmer, but most of us need to think differently to design the database schema. This is where the tooling can help translate our objects and intended structure into a database schema, provided we leverage some patterns. We can then take full advantage of being object-oriented programmers.

A word of caution: previous versions of Entity Framework offered mechanisms known as database initializers. These would not only create the database, but also rebuild it when the code-first model changed, and even seed some initial data. For better or worse, these mechanisms are now gone, and we need to leverage Entity Framework Core Migrations for similar functionality. We will discuss Migrations in another recipe.

See also

In this chapter:

  • Unit testing and mocking

Creating mock database connections

When working with Entity Framework in a test-driven manner, we need to be able to slip a layer between our last line of code and the framework. This allows us to simulate the database connection without actually hitting the database.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests, and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and that Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open the Using EF Core solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. In the DataAccess project, add a new C# interface named IDbContext using the following code:
    using System.Linq;
    namespace DataAccess
    {
        public interface IDbContext
        {
            IQueryable<T> Set<T>() where T : class;
        }
    }
  2. Add a new unit test in the UnitTests project to test so we can supply dummy results for fake database calls with the following code:
    using System.Linq;
    using DataAccess;
    using BusinessLogic;
    using Moq;
    using Xunit;
    namespace UnitTests
    {
        public class MockTest : BaseTest
        {      
            [Fact]
            public void CanMock()
            {
               //Arrange
                var data = new[] { new Blog { Id = 1, Title = "Title" }, new Blog { Id = 2, Title = "No Title" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                //Act
                var context = mock.Object;
                var blogs = context.Set<Blog>();
                //Assert
                Assert.Equal(data, blogs);
            }
        }
    }
  3. In the DataAccess project, update the C# class named BlogContext with the following code:
    using BusinessLogic;
    using System.Linq;
    using Microsoft.EntityFrameworkCore;
    namespace DataAccess
    {
        public class BlogContext : DbContext, IDbContext
        {
            private readonly string _connectionString;
            public BlogContext(string connectionString)
            {
                _connectionString = connectionString;
            }
            public DbSet<Blog> Blogs { get; set; }
            IQueryable<T> IDbContext.Set<T>()
            {
                return base.Set<T>();  
            }
            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                optionsBuilder.UseSqlServer(_connectionString);
                base.OnConfiguring(optionsBuilder);
            }
            public void Rollback()
            {
                // Detach every tracked entity so that its pending changes are
                // discarded; fresh values will be read from the database on the
                // next query
                ChangeTracker.Entries().ToList().ForEach(x => x.State = EntityState.Detached);
            }
        }
    }

How it works…

We implemented a fake, a mock, that mimics the part of our IDbContext interface that we wish to expose and make testable; in this case, it is just the retrieval of data. This allows us to keep our tests independent of the actual data in the database. Now that we have data available from our mock, we can test whether it acts exactly as we coded it to. Knowing the inputs of the data access code, we can test the outputs for validity. We made our existing BlogContext class implement the interface that defines the contract we wish to mock, IDbContext, and we configured a mock class to return dummy data whenever its Set method is called.

This layering is accomplished by having our own Set method act as an abstraction between the framework's public Set<T> method and our code, so that we can swap in something we can construct ourselves. By layering this method, we can now control every return from the database in our test scenarios.

This layering also provides a better separation of concerns, as the DbSet<T> in Entity Framework mingles multiple independent concerns, such as connection management and querying, into a single object, whereas IQueryable<T> is the standard .NET interface for performing queries against a data source (DbSet<T> implements IQueryable<T>). We will continue to separate these concerns in future recipes.
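To make the point concrete, here is a small sketch (not part of the recipe): a query written only against IDbContext and IQueryable<T> runs unchanged against the mocked context in a unit test and against the real BlogContext at runtime.

    using System.Linq;
    using BusinessLogic;

    namespace DataAccess
    {
        public static class BlogQueries
        {
            // Works against any IDbContext implementation, real or mocked
            public static Blog FindByTitle(IDbContext context, string title)
            {
                return context.Set<Blog>().FirstOrDefault(b => b.Title == title);
            }
        }
    }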

See also

In this chapter:

  • Unit testing and mocking

Implementing the repository pattern

This recipe is an implementation of the Repository Pattern, which allows us to abstract the underlying data source and the queries used to obtain the data.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests and dotnet-text-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open Using EF Core Solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. Create a new file in the DataAccess project, with this content:
    using System.Linq;
    namespace DataAccess
    {
        public interface IRepository<out T> where T : class
        {
            IQueryable<T> Set();
            void RollbackChanges();
            void SaveChanges();
        }
    }
  2. In the DataAccess project, add a new C# interface named IBlogRepository with the following code:
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
        public interface IBlogRepository : IRepository<Blog>
        {
        }
    }
  3. In the DataAccess project, create a new C# class named BlogRepository containing the following code:
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
        public class BlogRepository : IBlogRepository
        {
            private readonly IDbContext _context;
            public BlogRepository(IDbContext context)
            {
                _context = context;
            }
            public IQueryable<Blog> Set()
            {
                return _context.Set<Blog>();
            }
            public void RollbackChanges()
            {
                // Not used in this read-only recipe; implemented against the
                // unit of work later in this chapter
                throw new System.NotImplementedException();
            }
            public void SaveChanges()
            {
                // Not used in this read-only recipe; implemented against the
                // unit of work later in this chapter
                throw new System.NotImplementedException();
            }
        }
    }
  4. We'll add a new unit test in the UnitTests project that defines a test for using the repository with the following code:
    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    using Moq;
    using Xunit;
    namespace UnitTests
    {
        public class RepositoryTest : BaseTest
        {
            [Fact]
            public void ShouldAllowGettingASetOfObjectsGenerically()
            {
                //Arrange
                var data = new[] { new Blog { Id = 1, Title = "Title" }, new Blog { Id = 2, Title = "No Title" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                var context = mock.Object;
                var repository = new BlogRepository(context);
                //Act
                var blogs = repository.Set();
                //Assert
                Assert.Equal(data, blogs);
            }
        }
    }
  5. In the BlogController class of the UI project, update the usage of BlogContext so it uses IBlogRepository with the following code:
    using BusinessLogic;
    using DataAccess;
    using System.Linq;
    using Microsoft.AspNetCore.Mvc;
    namespace UI.Controllers
    {
        public class BlogController : Controller
        {
            private readonly IBlogRepository _repository;
            public BlogController(IBlogRepository repository)
            {
                _repository = repository;
            }
            public IActionResult Index()
            {
                var blog = _repository.Set().First();
                return View(blog);
            }
        }
    }
  6. Finally, we need to register the IBlogRepository service for dependency injection so that it can be passed automatically to the BlogController's constructor. We do that in the Startup.cs file in the UI project, in the ConfigureServices method:
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSingleton<IConfiguration>(_ => Configuration);
        services.AddScoped<IDbContext>(_ => new BlogContext(Configuration["Data:Blog:ConnectionString"]));
        services.AddScoped<IBlogRepository>(_ => new BlogRepository(_.GetService<IDbContext>()));
    }

How it works…

We start off with a test that defines what we hope to accomplish. We use mocking (or verifiable fake objects) to ensure that we get the behavior we expect. The test states that any BlogRepository function will communicate with the context to get the data. This is what we hope to accomplish, as doing so allows us to layer tests and extension points into the domain.

The usage of the repository interface is a key part of this flexible implementation as it will allow us to leverage mocks, and test the business layer, while still maintaining an extensible solution. The interface to the context is a straightforward API for all database communication. In this example, we only need to read data from the database, so the interface is very simple.

Even in this simple implementation of the interface, we see that there are opportunities to increase reusability. We could have created a method or property that returned the list of blogs, but then we would have had to modify the context and interface for every new entity. Instead, we set up the Set method to take a generic type, which allows us to add entities to the usage of the interface without modifying the interface. We will only need to modify the implementation.

Notice that we constrained the IRepository interface to accept only the reference types for T, using the where T : class constraint. We did this because value types cannot be stored using Entity Framework; if you had a base class, you could use it here to constrain the usage of the generic even further. Importantly, not all reference types are valid for T, but the constraint is as close as we can get using C#. Interfaces are not valid because Entity Framework cannot construct them when it needs to create an entity. Instead, it will produce a runtime exception, as they are valid reference types and therefore the compiler won't complain.

Once we have the context, we need to wrap it with an abstraction. IBlogRepository will allow us to query the data without allowing direct control over the database connection. We can hide the details of the specific implementation, the actual context object, while surfacing a simplified API for gathering data. We can also introduce specific operations for the Blog entity here.

The other interface that we abstracted is the IDbContext interface. This abstraction allows us to intercept operations just before they are sent to the database. This makes the untestable part of the application as thin as possible. We can, and will, test right up to the point of database connection.

We had to register the two interfaces, IDbContext and IBlogRepository, in the ASP.NET dependency resolver. This is achieved at startup time, so that any code that requires these services can use them. You will notice that the registration for IBlogRepository makes use of the IDbContext registration. This is OK, because it is a requirement for the actual implementation of BlogRepository to rely on IDbContext to actually retrieve the data.

There's more…

Keeping the repository implementation clean requires us to leverage some principles and patterns that are at the core of object-oriented programming, but not specific to using Entity Framework. These principles will not only help us to write clean implementations of Entity Framework, but can also be leveraged by other areas of our code.

Dependency Inversion Principle

Dependency inversion is another SOLID principle. This states that all of the dependencies of an object should be clearly visible and passed in, or injected, to create the object. The benefit of this is twofold: the first is exposing all of the dependencies so the effects of using a piece of code are clear to those who will use the class. The second benefit is that by injecting these dependencies at construction, they allow us to unit test by passing in mocks of the dependent objects. Granular unit tests require the ability to abstract dependencies, so we can ensure only one object is under test.

Repository and caching

This repository pattern gives us the perfect area for implementing a complex or global caching mechanism. If we want to persist a value into the cache at the point of retrieval, and not retrieve it again, the repository class is the perfect location for such logic. This layer of abstraction allows us to move beyond simple implementations and start thinking about solving business problems quickly, and later extend to handle more complex scenarios as they are warranted by the requirements of the specific project. You can think of repository as a well-tested 80%+ solution. Put off anything more until the last responsible moment.
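As a sketch of the idea (not part of the recipe, and with cache invalidation deliberately kept naive), a decorator over IBlogRepository can serve repeated reads from memory while leaving rollback and save to the inner repository:

    using System.Collections.Generic;
    using System.Linq;
    using BusinessLogic;

    namespace DataAccess
    {
        public class CachedBlogRepository : IBlogRepository
        {
            private readonly IBlogRepository _inner;
            private List<Blog> _cache;

            public CachedBlogRepository(IBlogRepository inner)
            {
                _inner = inner;
            }

            public IQueryable<Blog> Set()
            {
                if (_cache == null)
                {
                    _cache = _inner.Set().ToList();   // hit the database once
                }
                return _cache.AsQueryable();          // afterwards, serve from memory
            }

            public void RollbackChanges()
            {
                _cache = null;                        // throw the cached copy away
                _inner.RollbackChanges();
            }

            public void SaveChanges()
            {
                _inner.SaveChanges();
                _cache = null;                        // force a reload after writes
            }
        }
    }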

Mocking

The usage of mocks is commonplace in tests because mocks allow us to verify underlying behavior without having more than one object under test. This is a fundamental piece of the puzzle for test-driven development. When you test at a unit level, you want to make sure that the level directly following the one you are testing was called correctly while not actually executing the specific implementation. This is what mocking buys us.

Where generic constraint

There are times when we need to create complex sets of queries that will be used frequently, but only by one or two objects. When this situation occurs, we want to reuse that code without needing to duplicate it for each object. This is where the where constraint helps us. It allows us to limit generically defined behavior to an object or set of objects that share a common interface or base class. The extension possibilities are nearly limitless.
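A brief sketch of the idea; IAuditable is a made-up marker interface, not part of the book's model, and simply shows how one constrained helper can serve every entity that shares the contract:

    using System.Linq;

    namespace DataAccess
    {
        public interface IAuditable
        {
            bool IsDeleted { get; set; }
        }

        public static class QueryExtensions
        {
            // Reusable for any entity class that implements IAuditable
            public static IQueryable<T> ExcludeDeleted<T>(this IQueryable<T> source)
                where T : class, IAuditable
            {
                return source.Where(e => !e.IsDeleted);
            }
        }
    }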

See also

In this chapter:

  • Implementing the unit of work pattern
  • Creating mock database connections

Implementing the unit of work pattern

In the next example, we present an implementation of the Unit of Work pattern. This pattern was introduced by Martin Fowler, and you can read about it at http://martinfowler.com/eaaCatalog/unitOfWork.html. Basically, this pattern states that we keep track of all entities that are affected by a business transaction and send them all at once to the database, sorting out the ordering of the changes to apply—inserts before updates, and so on.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests, and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and that Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open the Using EF Core solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. First, we start by adding a new unit test in the UnitTests project to define the tests for using a unit of work pattern with the following code:
    using System;
    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    using Moq;
    using Xunit;
    namespace UnitTests
    {
        public class UnitOfWorkTest : BaseTest
        {
            [Fact]
            public void ShouldReadToDatabaseOnRead()
            {
                //Arrange
                var findCalled = false;
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Callback(() => findCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                //Assert
                Assert.True(findCalled);
            }
            [Fact]
            public void ShouldNotCommitToDatabaseOnDataChange()
            {
                //Arrange
                var saveChangesCalled = false;
                var data = new[] { new Blog() { Id = 1, Title = "Test" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                mock.Setup(x => x.SaveChanges()).Callback(() => saveChangesCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                blogs.First().Title = "Not Going to be Written";
                //Assert
                Assert.False(saveChangesCalled);
            }
            [Fact]
            public void ShouldPullDatabaseValuesOnARollBack()
            {
                //Arrange
                var saveChangesCalled = false;
                var rollbackCalled = false;
                var data = new[] { new Blog() { Id = 1, Title = "Test" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                mock.Setup(x => x.SaveChanges()).Callback(() => saveChangesCalled = true);
                mock.Setup(x => x.Rollback()).Callback(() => rollbackCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                blogs.First().Title = "Not Going to be Written";
                repository.RollbackChanges();
                //Assert
                Assert.False(saveChangesCalled);
                Assert.True(rollbackCalled);
            }
            [Fact]
            public void ShouldCommitToDatabaseOnSaveCall()
            {
                //Arrange
                var saveChangesCalled = false;
                var data = new[] { new Blog() { Id = 1, Title = "Test" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                 mock.Setup(x => x.SaveChanges()).Callback(() => saveChangesCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                blogs.First().Title = "Going to be Written";
                repository.SaveChanges();
                //Assert
                Assert.True(saveChangesCalled);
            }
            [Fact]
            public void ShouldNotCommitOnError()
            {
                //Arrange
                var rollbackCalled = false;
                var data = new[] { new Blog() { Id = 1, Title = "Test" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                mock.Setup(x => x.SaveChanges()).Throws(new Exception());
                mock.Setup(x => x.Rollback()).Callback(() => rollbackCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                blogs.First().Title = "Not Going to be Written";
                try
                {
                    repository.SaveChanges();
                }
                catch
                {
                }
                //Assert
                Assert.True(rollbackCalled);
            }
        }
    }
  2. In the DataAccess project, create a new C# class named BlogContext with the following code:
    using BusinessLogic;
    using System.Linq;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.Configuration;
    using Microsoft.EntityFrameworkCore.Metadata.Internal;
    namespace DataAccess
    {
        public class BlogContext : DbContext, IDbContext
        {
            private readonly string _connectionString;
          
            public BlogContext(string connectionString)
            {
                _connectionString = connectionString;
            }
            public DbSet<Blog> Blogs { get; set; }
            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                optionsBuilder.UseSqlServer(_connectionString);
                base.OnConfiguring(optionsBuilder);
            }
            public void Rollback()
            {
                // Detach every tracked entity so that its pending changes are
                // discarded; fresh values will be read from the database on the
                // next query
                ChangeTracker.Entries().ToList().ForEach(x => x.State = EntityState.Detached);
            }

            IQueryable<T> IDbContext.Set<T>()
            {
                return base.Set<T>();
            }
            
            public object[] GetEntityKey<T>(T entity) where T : class
            {
                var state = Entry(entity);
                var metadata = state.Metadata;
                var key = metadata.FindPrimaryKey();
                var props = key.Properties.ToArray();
                return props.Select(x => x.GetGetter().GetClrValue(entity)).ToArray();
            }
        }
    }
  3. In the DataAccess project, create a new C# interface called IDbContext with the following code:
    using System.Linq;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.EntityFrameworkCore.ChangeTracking;
    namespace DataAccess
    {
        public interface IDbContext
        {
            ChangeTracker ChangeTracker { get; }
            IQueryable<T> Set<T>() where T : class;
            EntityEntry<T> Entry<T>(T entity) where T : class;
            int SaveChanges();
            void Rollback();
        }
    }
  4. In the DataAccess project, create a new C# interface called IUnitOfWork with the following code:
    namespace DataAccess
    {
      public interface IUnitOfWork
      {
        void RegisterNew<T>(T entity) where T : class;
        void RegisterUnchanged<T>(T entity) where T : class;
        void RegisterChanged<T>(T entity) where T : class;
        void RegisterDeleted<T>(T entity) where T : class;
        void Refresh();
        void Commit();
        IDbContext Context { get; }
      }
    }
  5. In the DataAccess project, add a new C# class named UnitOfWork with the following code:
    using Microsoft.EntityFrameworkCore;
    namespace DataAccess
    {
      public class UnitOfWork : IUnitOfWork
      {
        public IDbContext Context { get; private set; }
        public UnitOfWork(IDbContext context)
        {
          Context = context;
        }
        public void RegisterNew<T>(T entity) where T : class
        {
          // Mark the entity as added; it will be inserted when Commit is called
          Context.Entry(entity).State = EntityState.Added;
        }
        public void RegisterUnchanged<T>(T entity) where T : class
        {
          Context.Entry(entity).State = EntityState.Unchanged;
        }
        public void RegisterChanged<T>(T entity) where T : class
        {
          Context.Entry(entity).State = EntityState.Modified;
        }
        public void RegisterDeleted<T>(T entity) where T : class
        {
          // Mark the entity as deleted; it will be removed when Commit is called
          Context.Entry(entity).State = EntityState.Deleted;
        }
        public void Refresh()
        {
          Context.Rollback();
        }
        public void Commit()
        {
          Context.SaveChanges();
        }
      }
    }
  6. Create a new C# file in the DataAccess project with this content:
    using System.Linq;
    namespace DataAccess
    {
        public interface IRepository<out T> where T : class
        {
            IQueryable<T> Set();
            void RollbackChanges();
            void SaveChanges();
        }
    }
  7. Also in the DataAccess project, add a new C# interface named IBlogRepository with the following code:
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
      public interface IBlogRepository : IRepository<Blog>
      {
      }
    }
  8. In the DataAccess project, create a new C# class named BlogRepository containing the following code:
    using System;
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
      public class BlogRepository : IBlogRepository
      {
        private readonly IUnitOfWork _unitOfWork;
        public BlogRepository(IUnitOfWork unitOfWork)
        {
          _unitOfWork = unitOfWork;
        }
        public IQueryable<Blog> Set()
        {
          return _unitOfWork.Context.Set<Blog>();
        }
        public void RollbackChanges()
        {
          _unitOfWork.Refresh();
        }
        public void SaveChanges()
        {
          try
          {
            _unitOfWork.Commit();
          }
          catch (Exception)
          {
            _unitOfWork.Refresh();
            throw;
          }
        }
      }
    }
  9. In BlogController, update BlogContext to use IBlogRepository with the following code:
    using BusinessLogic;
    using System.Linq;
    using DataAccess;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Configuration;
    namespace UI.Controllers
    {
      public class BlogController : Controller
      {
        private IBlogRepository _repository;
        public BlogController(IBlogRepository repository)
        {
          _repository = repository;
        }
        //
        // GET: /Blog/
        public IActionResult Index()
        {
          var blog = _repository.Set().First();
          return View(blog);
        }
      }
    }
  10. Finally, register the IUnitOfWork interface in the Startup.cs file, in the ConfigureServices method:
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSingleton<IConfiguration>(_ => Configuration);
        services.AddScoped<IDbContext>(_ => new BlogContext(Configuration["Data:Blog:ConnectionString"]));
        services.AddScoped<IUnitOfWork>(_ => new UnitOfWork(_.GetService<IDbContext>()));
        services.AddScoped<IBlogRepository>(_ => new BlogRepository(_.GetService<IUnitOfWork>()));
    }

How it works…

The tests set up the scenarios in which we would want to use a unit of work pattern: reading, updating, rolling back, and committing. The key to this is that these are all separate actions, not dependent on anything before or after them. If the application is web-based, this gives you a powerful tool to tie to the HTTP request so any unfinished work is cleaned up, or to ensure that you do not need to call SaveChanges, since it can happen automatically.

The unit of work was originally created to track changes so they could be persisted, and it still works that way. Here we are also using a more powerful, but less recognized, feature: defining the scope of the unit of work. This gives us the ability to control both that scope and the changes that get committed to the database. We have also put in some clean-up code, which ensures that even in the event of a failure, our unit of work tries to clean up after itself before rethrowing the error to be handled at a higher level. We do not want to swallow these errors, but we do want to make sure they do not destroy the integrity of our database.

In addition to this tight encapsulation of work against the database, we pass our unit of work into each repository. This lets us couple multiple object interactions to a single unit of work, so we can write code that is specific to each object without giving up the shared feature set of the database context. This is an explicit unit of work; the Entity Framework context already gives you an implicit one. If you want to tie the unit of work to the HTTP request, roll back on error, or tie multiple data connections together in new and interesting ways, then you will need an explicit implementation such as this one.
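
As a sketch of that coupling, the snippet below hands the same unit of work to two repositories, so a single commit persists both sets of changes. PostRepository, its Approved member, and the Blog.Title property are hypothetical; BlogRepository, UnitOfWork, and BlogContext come from this recipe:

    // Two repositories sharing one unit of work: a single Commit covers both.
    var unitOfWork = new UnitOfWork(new BlogContext(connectionString));   // connectionString comes from configuration
    var blogs = new BlogRepository(unitOfWork);
    var posts = new PostRepository(unitOfWork);     // hypothetical second repository

    blogs.Set.First().Title = "Renamed blog";       // hypothetical property on Blog
    posts.Set.First().Approved = true;              // hypothetical member on Post

    blogs.SaveChanges();                            // both changes are committed together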

This basic pattern helps streamline data access and resolve the concurrency issues caused by conflicts in the objects affected by a transaction.

There's more…

The unit of work is a concept that lies deep at the heart of Entity Framework, and Entity Framework adheres to its principles out of the box. Knowing these principles, and why they are leveraged, will help us use Entity Framework to its fullest without running into the walls that were built into the system on purpose.

Call per change

There is a cost for every connection to the database. If we made a call for every single change, to keep the state in the database in sync with the state in the application, we would have thousands of calls, each with its own connection, security, and network overhead. Limiting the number of times we hit the database not only lets us control this overhead, but also lets the database software handle the larger transactions it was built for.
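
As a sketch, and assuming Blog has a Title property and repository is the IBlogRepository from this recipe, the loop below modifies many tracked entities in memory and then hits the database with a single SaveChanges call, instead of issuing one call per change:

    // Every modification is tracked in memory; no database call happens inside the loop.
    foreach (var blog in repository.Set.Where(b => b.Title == null).ToList())
    {
        blog.Title = "(untitled)";
    }

    repository.SaveChanges();   // a single call persists all of the tracked changes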

Interface Segregation Principle

Some might be inclined to ask why we should separate the unit of work from the repository pattern. The unit of work is a separate responsibility from the repository, and as such it is important not only to define separate classes, but also to keep the interfaces small and clear. The IDbContext interface is focused solely on dealing with database connections through an Entity Framework context. This allows the mocking of a context, giving us testability down to the lowest possible level.

The IUnitOfWork interface deals with the segregation of work, and ensures that database persistence happens only when we intend it to, ignorant of the layer beneath it that issues the actual commands. The IRepository interface deals with retrieving objects from any type of storage, and allows us to remove all knowledge of how the database interaction happens from our dependent code base. These three objects, while layered on one another, are separate concerns, and therefore need to be separate interfaces.

Refactoring

We have added IUnitOfWork to our layered approach to database communication, and if we have learned anything over our hours of coding, it is that code changes. Code changes for many reasons, but the bottom line is that it changes often, and we need to make it easy to change. The layers of abstraction that we have added to this solution with IRepository, IUnitOfWork, and IDbContext each give us a point at which change is minimally painful, and we can leverage the interfaces in the same way. Refactoring to add abstraction levels like these is a core tenet of clean, extensible code. Removing the concrete implementation details from related objects, and coding to an interface, forces us to encapsulate behavior and abstract sections of our code.
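
As a sketch of the payoff, the in-memory fake below satisfies IBlogRepository (assuming Set is the read-only IQueryable<Blog> property shown in BlogRepository), so BlogController can be exercised without Entity Framework or a database. FakeBlogRepository is hypothetical and exists only for testing:

    using System.Collections.Generic;
    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    namespace DataAccess.Tests
    {
      // Hypothetical in-memory implementation used only in unit tests.
      public class FakeBlogRepository : IBlogRepository
      {
        private readonly List<Blog> _blogs = new List<Blog> { new Blog() };
        public IQueryable<Blog> Set
        {
          get { return _blogs.AsQueryable(); }
        }
        public void RollbackChanges()
        {
          // Nothing to roll back in memory.
        }
        public void SaveChanges()
        {
          // Nothing to persist in memory.
        }
      }
    }

A test can then construct new BlogController(new FakeBlogRepository()) and call Index without touching the database.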

See also

In this chapter:

  • Implementing the repository pattern