
NET 7 Design Patterns In-Depth 15. Base Design Patterns

Chapter 15
Base Design Patterns
Introduction
The base design patterns category can be divided into the following eleven main sections:

Gateway: It can encapsulate access to an external resource or system.
Mapper: It can communicate between two independent objects.
Layer supertype: A class is considered the parent of the rest of the classes in a layer, and in this way, common features and behaviors are placed in the parent class.
Separated interface: It can put interfaces and their implementations in different packages.
Registry: It can find and work with frequently used objects and services through an object.
Value object: It can put the values in the objects and then compare the objects based on their internal values.
Money: It can facilitate working with monetary values.
Special case: It can add a series of behaviors to the parent class in special situations.
Plugin: It can link classes during configuration rather than at compilation time.
Service stub: It can eliminate dependency on problematic services during testing.
Record set: It can provide a representation in memory of tabular data.
Structure
In this chapter, we will cover the following topics:

Gateway
Mapper
Layer supertype
Separated interface
Registry
Value object
Money
Special case
Plugin
Service stub
Record set
Objectives
In this chapter, you will get acquainted with base design patterns and learn how to implement other design patterns in a more suitable way with the help of these design patterns. You will also learn how to solve frequent problems in the software process with the help of these design patterns.

Base design patterns
Different design patterns can be used in the software development process to solve different problems. Each design pattern has its implementation method and can be used for specific issues.

Among them, some design patterns are the basis of other design patterns or the basis of solving many problems. These types of design patterns are called base design patterns. Today, programmers may deal with many of these design patterns daily.

Gateway
Name:

Gateway

Classification:

Base design patterns

Also known as:

---

Intent:

By using this design pattern, access to an external resource or system can be encapsulated.

Motivation, Structure, Implementation, and Sample code:

Suppose we need to communicate with a Web API in our program. There are various ways to connect to this Web API, but the best approach is to implement the connection and communication logic in one place so that the rest of the program can use it without needing to know the details of the connection.

To implement the above scenario, the following codes can be considered:

public class WebAPIGateway
{
    private static readonly HttpClient HttpClient;

    static WebAPIGateway() => HttpClient = new HttpClient();

    public static async Task<string> GetDataAsync(string url)
    {
        try
        {
            var response = await HttpClient.GetAsync(url);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
        catch
        {
            throw new Exception("Unable to get data");
        }
    }
}

As seen in the preceding code, the way of connecting to the Web API and receiving information from it is encapsulated inside the WebAPIGateway class. The rest of the program can easily consume the data of this Web API through WebAPIGateway.

The above code can be used as follows:

var result = await WebAPIGateway.GetDataAsync("https://jsonplaceholder.typicode.com/posts");

As is clear in the above code, when using the GetDataAsync method, it is enough to pass the service address to this method; the details of accessing the external resource remain encapsulated.

Notes:

The Gateway should be as simple as possible and designed and implemented in line with business requirements.
This design pattern is very similar to the facade and adapter design patterns, but their purposes differ. The facade design pattern presents a different view of what it covers (for example, it presents a complex system as a simple view to the client), while the gateway may present the same view. The adapter design pattern, on the other hand, connects two classes that are not structurally compatible so they can work together. Note that an adapter can be used in the implementation of a gateway.
Consequences: Advantages

Code readability improves in the rest of the program.
It becomes easier to test the program.
Using this design pattern, external resources or systems can be changed without changing the code of other program parts.
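The advantages above, especially testability and the ability to swap out the external resource, are easier to obtain when the gateway is exposed through an interface rather than a static class. The following is a hypothetical sketch (the IWebApiGateway, WebApiGatewayStub, and PostService names are assumptions, not from the book) showing how a test stub can replace the real HTTP call:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical interface-based variation of the gateway.
public interface IWebApiGateway
{
    Task<string> GetDataAsync(string url);
}

// A stub for tests: no network access is performed.
public class WebApiGatewayStub : IWebApiGateway
{
    public Task<string> GetDataAsync(string url) => Task.FromResult("[{\"id\":1}]");
}

// Client code depends only on the interface, so the real gateway
// and the stub are interchangeable.
public class PostService
{
    private readonly IWebApiGateway _gateway;
    public PostService(IWebApiGateway gateway) => _gateway = gateway;

    public Task<string> GetPostsAsync()
        => _gateway.GetDataAsync("https://jsonplaceholder.typicode.com/posts");
}
```

In a unit test, `new PostService(new WebApiGatewayStub())` exercises the client logic without touching the network.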
Consequences: Disadvantages

Due to its similarity with other design patterns, it may not be used in its proper place and damage the design.
Applicability:

In order to connect to external resources or systems, this design pattern can be useful.
Related patterns:

Not all of the following design patterns are directly related to the gateway design pattern, but reviewing them will be useful when implementing it:

Facade
Adapter
Mapper
Name:

Mapper

Classification:

Base design patterns

Also known as:

---

Intent:

Using this design pattern makes it possible to establish a connection between two independent objects.

Motivation, Structure, Implementation, and Sample code:

Suppose there is a need to communicate between the model and the database. This relationship has already been shown in the data mapper design pattern. As seen there, the model and the database are unaware of each other's existence, and both are also unaware of the mapper's existence. The data mapper design pattern is one of the best-known uses of the mapper design pattern.
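Since this section presents no code sample of its own, the following minimal sketch may help. The Person, PersonDto, and PersonMapper names are hypothetical; the point is only that the two mapped classes know nothing about each other or about the mapper:

```csharp
// Two independent classes; neither references the other or the mapper.
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class PersonDto
{
    public string FullName { get; set; }
}

// All knowledge about how the two shapes relate lives in the mapper.
public static class PersonMapper
{
    public static PersonDto ToDto(Person person)
        => new PersonDto { FullName = $"{person.FirstName} {person.LastName}" };
}
```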

Notes:

This design pattern is very similar to the gateway design pattern. Mapper is often used when the desired objects must not depend on each other (for example, when communicating with a database).
This design pattern can be very similar to the mediator design pattern. The difference is that in this design pattern, the objects that the mapper intends to communicate with are unaware of the existence of the mapper.
In order to be able to use a mapper, there is usually another object that uses a mapper and communicates with other objects. Another way to use a mapper is to combine this design pattern with the observer design pattern. In this case, the mapper can listen to the events in the objects and do something when those events occur.
Consequences: Advantages

Different objects can be connected without creating dependencies on each other.
Consequences: Disadvantages

Due to its similarity with other design patterns, it may not be used in the right place and cause damage to the design.
Applicability:

It can be useful in order to communicate between objects that are not related to each other.
Related patterns:

Not all of the following design patterns are directly related to the mapper design pattern, but reviewing them will be useful when implementing it:

Data mapper
Gateway
Mediator
Observer
Layer supertype
Name:

Layer supertype

Classification:

Base design patterns

Also known as:

---

Intent:

Using this design pattern, a class is considered the parent of the rest of the classes in a layer, and in this way, common features and behaviors are placed in the parent class.

Motivation, Structure, Implementation, and Sample code:

Suppose that we are defining domain models and all models must have a property called Id. To implement this requirement, the definition of this property can be repeated in every class, or the Id property can be placed in one class so that the other domain models inherit from it, as shown below:

public abstract class BaseModel
{
    public int Id { get; set; }
}

public class Author : BaseModel
{
    public string FirstName { get; set; }
}

As is clear in the preceding code, the Id property has been moved to the BaseModel class, and any class that needs an Id can inherit from this class.

Notes:

This design pattern can be combined with Generics in C# for better usability.
You can have several layer supertypes in each layer of the software.
Using this design pattern to define the identity field in models is very useful.
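As the first note suggests, this pattern can be combined with Generics. In the following hypothetical sketch (the Book model and its string key are assumptions for illustration), the identity type becomes a type parameter, so each model can choose its own key type:

```csharp
// Generic layer supertype: the key type is a type parameter.
public abstract class BaseModel<TKey>
{
    public TKey Id { get; set; }
}

public class Author : BaseModel<int>
{
    public string FirstName { get; set; }
}

public class Book : BaseModel<string>   // e.g., an ISBN as the identity
{
    public string Title { get; set; }
}
```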
Consequences: Advantages

Using this design pattern will reduce duplicate codes, making it easier to maintain the code.
Consequences: Disadvantages

If this design pattern is not used correctly, it will confuse the code structure. For example, suppose the BaseModel class has an Id property of type int and the Book class inherits from BaseModel, but we do not want Book's Id to be an int (while still having access to the other behaviors in BaseModel). Then it will be necessary to add another property to the class, or to hide the property provided by the parent class using the hiding feature (public new string Id { get; set; }), which makes the code confusing and reduces its readability and maintainability.
Applicability:

This design pattern can be used when there are common features or behaviors between classes.
Related patterns:

Not all of the following design patterns are directly related to the layer supertype design pattern, but reviewing them will be useful when implementing it:

Identity field
Separated interface
Name:

Separated interface

Classification:

Base design patterns

Also known as:

---

Intent:

By using this design pattern, interfaces and their implementations are placed in different packages.

Motivation, Structure, Implementation, and Sample code:

Suppose that we are designing a program. In order to have a better design, reducing the dependency between different parts of the program can be a very important factor. One of the ways to reduce dependency is to use interfaces and then provide implementation based on these interfaces.

This separation itself has different states, including:

Interfaces and related implementations are in one package. Refer to the following figure:
Figure 15.1: Interface and its implementation in the same package

namespace Package
{
    public class Client
    {
        public IUnitOfWork UnitOfWork { get; set; }
    }

    public interface IUnitOfWork
    {
        bool Commit();
    }

    public class UnitOfWork : IUnitOfWork
    {
        public bool Commit() => true;
    }
}

As is clear in the above code, the interface and its implementation are all in one package. In the above code, a namespace is treated as a package.

Interfaces and related implementations are in different packages:
Interfaces and clients are in one package: Using this method can be easier if there is only one client, or all clients are in one package. In this method, the client is responsible for defining the interfaces, and the client will be able to work with any package that implements the defined interfaces.

Figure 15.2: The interface and the client are in the same package,
but the implementation is in a different package

In the following code, it can be seen that Client and IUnitOfWork are in Package01 and the implementation of IUnitOfWork is placed in the UnitOfWork class defined in Package02.

namespace Package01
{
    public class Client
    {
        public IUnitOfWork UnitOfWork { get; set; }
    }

    public interface IUnitOfWork
    {
        bool Commit();
    }
}

namespace Package02
{
    public class UnitOfWork : Package01.IUnitOfWork
    {
        public bool Commit() => true;
    }
}

Interfaces and clients are also in different packages: If there are different clients, it is better to separate the package related to interfaces and clients. Refer to the following figure:

Figure 15.3: Interface, client, and implementation all are in different packages

Notes:

It is not necessary to use the interface keyword (or the equivalent programming language capability) to define the interfaces. Sometimes an abstract class is a much better option because you can put default implementations in it. From version 8 of the C# language, it is also possible to provide default implementations in interfaces.
Instantiating the implementation is noteworthy in this design pattern. To make this easier, you can use the factory design pattern or assign object creation to another package; the appropriate object will be created in that package according to the related interface and implementation.
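As suggested in the note above, object creation can be assigned to a separate package. The following sketch assumes a third package (Package03 is a made-up name) that is the only place referencing the implementation package:

```csharp
namespace Package01
{
    public interface IUnitOfWork
    {
        bool Commit();
    }
}

namespace Package02
{
    public class UnitOfWork : Package01.IUnitOfWork
    {
        public bool Commit() => true;
    }
}

namespace Package03
{
    // Only this factory package knows about the concrete implementation;
    // clients receive the interface type and never reference Package02.
    public static class UnitOfWorkFactory
    {
        public static Package01.IUnitOfWork Create() => new Package02.UnitOfWork();
    }
}
```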
Consequences: Advantages

Dependencies between different parts of the program are reduced.
Consequences: Disadvantages

Using this design pattern in all parts of the program will only increase the complexity and volume of the code.

Applicability:

This design pattern can be used when there is a need to loosen the dependency between two parts of the system or to provide different implementations for an interface.
When there is a need to communicate with another module, this design pattern can be used so that communication can be established without depending on the implementation of that module.
Related patterns:

Not all of the following design patterns are directly related to the separated interface design pattern, but reviewing them will be useful when implementing it:

Factory
Registry
Name:

Registry

Classification:

Base design patterns

Also known as:

---

Intent:

By using this design pattern, you can find and work with frequently used objects and services through an object.

Motivation, Structure, Implementation, and Sample code:

Suppose that we want to get all the books of an author. We will probably access their books through the author object to implement this requirement. In fact, through the author object, we can access the books. Now the question is, if we do not have the author object or there is no connection between the author and the book, how can we get an author's books? This is exactly where the registry design pattern comes in handy.

To implement this design pattern, you can do as follows:

public class Registry
{
    private static Registry _registry = new();
}

In this implementation, we intend to use the singleton design pattern. As is clear in the preceding code, we have defined _registry as static so that only one instance of this class exists. We have also given it a private access level because the user does not need to be involved with the implementation details. Now we need to provide an object to the user so that they can work with books. For this purpose, the Registry class can be changed as follows:

public class Registry
{
    private static Registry _registry = new();
    protected BookFinder bookFinder = new();

    public static BookFinder BookFinder() => _registry.bookFinder;
}

If you pay attention to the preceding code, we have defined bookFinder as protected so that we can later change how this object is instantiated; testing is one of the most common cases for this, as seen in the service stub design pattern. Also, the BookFinder method is defined as static, making it easy to work with the Registry class.

Considering the preceding implementation, suppose we also want to design a solution to test these codes. For this, you can proceed as follows:

public class RegistryStub : Registry
{
    public RegistryStub() => base.bookFinder = new BookFinderStub();
}

As you can see, the RegistryStub class has inherited from the Registry class. Inside the constructor of this class, we instantiate the bookFinder object in the parent class using the BookFinderStub class. Now, to access the RegistryStub class, you can put the following method in the Registry class:

public static void InitializeStub() => _registry = new RegistryStub();

Or it may even be necessary somewhere in the program to re-instantiate _registry. For this purpose, the following method can be defined in the Registry class:

public static void Initialize() => _registry = new Registry();

Notes:

This design pattern can also be implemented as thread safe.
Since we have defined the methods as static in the preceding implementation, there is no reason to define the variables or fields as static. This decision can be different according to the nature of the data and object. One of the applications for defining static variables or fields is the presence of static data, such as a list of cities or provinces and the like.
Paying attention to the scope of data in this design pattern is very important. According to the data type or object, their scope can be variable (at the process, thread, and session levels). Different implementations can be provided according to the scope of data or object. Diversity in the scope can also cause the emergence of a registry class for each scope or one Registry class for all scopes.
Using singleton implementation for mutable data in a multi-thread environment can be inappropriate. Using a singleton for process-level data that cannot be changed (such as a list of cities or provinces) is better.
For data in the scope of one thread (such as a connection to a database), using thread-specific storage resources such as the ThreadLocal class in C# can be very useful. Another solution is to use structures like a dictionary, where the key can be the thread ID.
As in the previous case, you can use a dictionary for data with a session scope. In this case, the key will be the session ID. Also, a thread's session data can still be placed in ThreadLocal.
Passing shared data as a method parameter or adding a reference to shared data as a property to the class are alternative methods of using the registry design pattern. Each of these methods has its problems. For example, if data is passed to a method as a parameter, there may be situations where this method needs to be called through other methods in the Call Tree. In this case, adding a parameter to the method will not be a good approach, and the registry will be better. Also, in the case of adding a reference to shared data as a class property, the preceding problem regarding the class constructor will still exist.
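For thread-scoped data, the notes above mention ThreadLocal. A minimal sketch could look like the following; the ConnectionRegistry name and the connection-string format are made up for illustration:

```csharp
using System;
using System.Threading;

// Registry exposing a thread-scoped value: each thread that reads
// Current gets its own lazily created instance.
public static class ConnectionRegistry
{
    private static readonly ThreadLocal<string> _connection =
        new ThreadLocal<string>(
            () => $"connection-for-thread-{Environment.CurrentManagedThreadId}");

    public static string Current => _connection.Value;
}
```

Two threads reading `ConnectionRegistry.Current` concurrently each receive their own value, so no locking is needed for this data.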
Consequences: Advantages

This design pattern allows you to easily access commonly used or shared data or objects.
Consequences: Disadvantages

Implementing this design pattern can be complicated due to the scope and nature of the data.
The use of this design pattern is often a sign of, and a way of tolerating, mistakes in the design. You should always try to access data through the relationships between objects, and using the registry design pattern should be the last option.
It becomes difficult to test the code using this design pattern. Because if the data or objects presented in this design pattern change, the test results can change, and therefore the complexity of writing the test increases.
Applicability:

When providing a series of common data or objects to the rest of the program is necessary, using this design pattern can be useful.
Related patterns:

Not all of the following design patterns are directly related to the registry design pattern, but reviewing them will be useful when implementing it:

Singleton
Service stub
Value object
Name:

Value object

Classification:

Base design patterns

Also known as:

---

Intent:

Using this design pattern, you can put the values in the objects, then compare them based on their internal values.

Motivation, Structure, Implementation, and Sample code:

Suppose we need to store different addresses for publishers in our program and then compare these with each other. In the desired scenario, an address consists of a city, main street, sub-street, and alley. A simple way is to design our model as follows:

public class Publisher
{
    public string Title { get; set; }
    public List<Address> Addresses { get; set; }
}

public class Address
{
    public string City { get; set; }
    public string MainStreet { get; set; }
    public string SubStreet { get; set; }
    public string Alley { get; set; }
}

According to the preceding structure, suppose we have two different addresses, and we want to compare them:

Address address1 = new()
{
    City = "Shahid beheshti",
    MainStreet = "Kavoosifar",
    SubStreet = "Nakisa",
    Alley = "Rahnama"
};

Address address2 = new()
{
    City = "Shahid beheshti",
    MainStreet = "Kavoosifar",
    SubStreet = "Nakisa",
    Alley = "Rahnama"
};

As can be seen, both the address1 and address2 objects have the same content. Suppose we want only one of two identical addresses to be saved. With this assumption, we write the following code:

if (address1 == address2)
    publisher.Addresses.Add(address1);
else
    publisher.Addresses.AddRange(new List<Address> { address1, address2 });

By executing the previous code, address1 is not equal to address2. This is because, for reference types, the references of the objects are compared instead of their values.

Using the value object design pattern, we have previously transferred the values into the object (we have placed city and street properties into the Address class). Now, we need a mechanism to compare the values of these objects. For this, you can change the Address class as follows and add the following methods to it:

public static bool operator ==(Address address1, Address address2)
    => address1.City == address2.City
       && address1.MainStreet == address2.MainStreet
       && address1.SubStreet == address2.SubStreet
       && address1.Alley == address2.Alley;

public static bool operator !=(Address address1, Address address2)
    => address1.City != address2.City
       || address1.MainStreet != address2.MainStreet
       || address1.SubStreet != address2.SubStreet
       || address1.Alley != address2.Alley;

As is clear in the preceding code, we have overloaded the == and != operators in order to provide our own method for comparing two objects. Now, if we run the previous code again, the result of address1 == address2 will be true:

if (address1 == address2)
    publisher.Addresses.Add(address1);
else
    publisher.Addresses.AddRange(new List<Address> { address1, address2 });

Another point is that if we create two objects with equal values, they will still have two different hash codes. For example, consider the following commands:

Address address1 = new("Rahnama","Nakisa","Kavoosifar","Shahid beheshti");

Address address2 = new("Rahnama","Nakisa","Kavoosifar","Shahid beheshti");

In the preceding two lines, two different objects of the Address class are created, but their values are equal. Therefore, we get two different hash codes. We may want objects with equal values to generate equal hash codes. In this case, we must override the GetHashCode method in the Address class as follows:

public override int GetHashCode() =>
    City.GetHashCode() ^
    MainStreet.GetHashCode() ^
    SubStreet.GetHashCode() ^
    Alley.GetHashCode();

As seen in the preceding code, to generate the HashCode, the HashCode values of different properties have been combined using XOR(^) and produced the final HashCode. The advantage of the XOR combination is that the result of A XOR B always equals B XOR A.

Another solution to this problem is to use a struct instead of a class. In that case, Equals and GetHashCode compare by value automatically, although the == and != operators must still be defined explicitly for a struct.
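Another option in newer C# versions (C# 9 and later, so available in .NET 7) is a record, for which the compiler generates value-based Equals, GetHashCode, and the == and != operators automatically. This is a sketch of that alternative, not the book's implementation:

```csharp
// A positional record: equality, hash codes, and the == / != operators
// are all generated by the compiler and compare by value.
public record Address(string City, string MainStreet, string SubStreet, string Alley);
```

Two Address records created with the same values then compare equal and produce the same hash code, with no hand-written operators.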

Notes:

One of the most important differences between reference and value types is how they deal with comparison operations. When using value objects, the comparison operation must be based on values.
Value objects should be immutable. Otherwise, for example, two publishers may refer to the same address. Therefore, for any publisher who changes the address, this change should also be applied to the other publisher. For this purpose, the properties of the Address model can be changed as follows:
public string City { get; private set; }
public string MainStreet { get; private set; }
public string SubStreet { get; private set; }
public string Alley { get; private set; }

public Address(string city, string mainStreet, string subStreet, string alley)
{
    City = city;
    MainStreet = mainStreet;
    SubStreet = subStreet;
    Alley = alley;
}

In this case, the objects created from the Address class cannot be changed. There are other ways to make it immutable.

Consequences: Advantages

Using this design pattern will increase the readability of the code.
Logics such as validation can be encapsulated.
It provides the possibility of type safety.
Consequences: Disadvantages

This design pattern will increase the number of classes and objects in the program.
Applicability:

This design pattern is useful when comparing multiple properties using the == operation.
This design pattern will be useful when faced with objects that only complete the meaning of other objects. For example, the Address object helps to complete the Publisher object.
When there are a series of parameters or data that are always used together and produce the same meaning. For example, the position of a point on the page (X position and Y position).
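The point example in the last item can be sketched as a small immutable struct (hypothetical, for illustration). For structs, the default Equals already compares field by field:

```csharp
// A small immutable value type; the default struct Equals
// compares the X and Y fields by value.
public readonly struct Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) => (X, Y) = (x, y);
}
```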
Related patterns:

The value object design pattern is very important and useful and can be the basis of many other design patterns. It is also highly useful in domain-driven design.

Money
Name:

Money

Classification:

Base design patterns

Also known as:

---

Intent:

Using this design pattern can facilitate working with monetary values.

Motivation, Structure, Implementation, and Sample code:

Suppose we are designing a program in which the Rial and Dollar currencies will be used. One way to implement this requirement is to store the amount of money and its unit in properties of a class. The problem occurs when we want to work with these monetary amounts, for example, to add or subtract values. A bigger problem occurs when multiplying or dividing these monetary values by a number.

A better solution is to use a class to implement all the complexity and related details. Wherever we need to use monetary values, we can instantiate this class and use that instance. In other words, the Money class will be a value object. To implement the preceding requirement, suppose we have the following enum:

public enum Currency
{
    Rial,
    Dollar
}

In the preceding enum, we have defined different types of monetary units. According to the preceding enum, the Money class can be defined as follows:

public class Money
{
    public int Amount { get; private set; }
    public int Fraction { get; private set; }
    public Currency Currency { get; private set; }

    public Money(int amount, int fraction, Currency currency)
    {
        Currency = currency;
        (int NewAmount, int NewFraction) = NormalizeValues(amount, fraction);
        Amount = NewAmount;
        Fraction = NewFraction;
    }

    private (int NewAmount, int NewFraction) NormalizeValues(int amount, int fraction)
        => NormalizeValues((double)amount, (double)fraction);

    private (int NewAmount, int NewFraction) NormalizeValues(double amount, double fraction)
    {
        if (Currency == Currency.Rial)
            fraction = 0;
        else if (Currency == Currency.Dollar)
        {
            double totalCents = amount * 100 + fraction;
            amount = totalCents / 100;
            fraction = totalCents % 100;
        }
        return ((int)amount, (int)fraction);
    }
}

In the preceding class, the Amount property represents the integer part of the value, and Fraction represents its decimal part. As we know, values in the Rial currency do not have a decimal part, so the NormalizeValues method sets the decimal part to zero when the currency is Rial. The Currency property in this class represents the monetary unit.

Next, we override the Equals method to compare two Money objects.

public override bool Equals(object? other)
    => other is Money otherMoney && Equals(otherMoney);

public bool Equals(Money other)
    => Currency.Equals(other.Currency)
       && Amount == other.Amount && Fraction == other.Fraction;

The condition for two Money objects to be the same is that both objects have the same monetary unit and equal Amount and Fraction values. In the same way, the ==, !=, < and > operators can also be overloaded.

public static bool operator ==(Money a, Money b) => a.Equals(b);
public static bool operator !=(Money a, Money b) => !a.Equals(b);

public static bool operator >(Money a, Money b)
{
    if (a.Currency == b.Currency)
    {
        if (a.Amount > b.Amount)
            return true;
        else if (a.Amount == b.Amount && a.Fraction > b.Fraction)
            return true;
        else
            return false;
    }
    return false;
}

public static bool operator <(Money a, Money b)
{
    if (a.Currency == b.Currency)
    {
        if (a.Amount < b.Amount)
            return true;
        else if (a.Amount == b.Amount && a.Fraction < b.Fraction)
            return true;
        else
            return false;
    }
    return false;
}

public static bool operator >=(Money a, Money b)
{
    if (a.Currency == b.Currency)
    {
        if (a.Amount > b.Amount)
            return true;
        else if (a.Amount == b.Amount)
        {
            if (a.Fraction > b.Fraction || a.Fraction == b.Fraction)
                return true;
        }
        else
            return false;
    }
    return false;
}

public static bool operator <=(Money a, Money b)
{
    if (a.Currency == b.Currency)
    {
        if (a.Amount < b.Amount)
            return true;
        else if (a.Amount == b.Amount)
        {
            if (a.Fraction < b.Fraction || a.Fraction == b.Fraction)
                return true;
        }
        else
            return false;
    }
    return false;
}

As mentioned before, Money is a type of value object, and therefore the GetHashCode method must be overridden as follows:

public override int GetHashCode() =>
    Amount.GetHashCode() ^ Fraction.GetHashCode() ^ Currency.GetHashCode();

Now we need to use a method to increase or decrease the amount of money in a Money object. For this, the following codes can be considered:

public void Add(Money other)
{
    if (Currency == other.Currency)
    {
        int a = Amount + other.Amount;
        int f = Fraction + other.Fraction;
        (int NewAmount, int NewFraction) = NormalizeValues(a, f);
        Amount = NewAmount;
        Fraction = NewFraction;
    }
    else
        throw new Exception("Unequal currencies");
}

public void Subtract(Money other)
{
    if (Currency == other.Currency)
    {
        int a = Amount - other.Amount;
        int f = Fraction - other.Fraction;
        (int NewAmount, int NewFraction) = NormalizeValues(a, f);
        Amount = NewAmount;
        Fraction = NewFraction;
    }
    else
        throw new Exception("Unequal currencies");
}

As it is clear in the implementation of the preceding methods, the condition of adding or subtracting is that the monetary units are the same. Also, similar methods can be considered for multiplication and division. The important thing about the multiplication and division methods is to pay attention to the decimal nature of the number, which we want to multiply by the Amount and Fraction values. The following is the simple implementation of the Multiply method:

public void Multiply(double number)
{
    number = Math.Round(number, 2);
    double a = Amount * number;
    double f = Math.Round(Fraction * number);
    (int NewAmount, int NewFraction) = NormalizeValues(a, f);
    Amount = NewAmount;
    Fraction = NewFraction;
}

For example, in the preceding implementation the input number is rounded to two decimal places, and in the calculation of f the number of cents is rounded because, for example, in the Dollar currency there is no such thing as one and a half cents. Also note that, for ease of learning, the preceding implementations are simple examples with minimal details. Operational tasks may need more details, or it may be necessary to combine this design pattern with other design patterns.

Notes:

When using this design pattern, you should consider the decimal values and how to treat these values.
You can convert monetary units to each other using this design pattern and exchange rates.
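The conversion mentioned in the second note could be sketched as follows. The CurrencyConverter helper and its caller-supplied exchange rate are assumptions for illustration, not part of the book's Money class:

```csharp
using System;

// Hypothetical conversion helper. amount/fraction follow the same
// convention as the Money class: whole units plus a 0-99 fractional part.
public static class CurrencyConverter
{
    public static (int Amount, int Fraction) Convert(
        int amount, int fraction, double rate)
    {
        double total = (amount + fraction / 100.0) * rate;
        int newAmount = (int)total;
        // Round the remainder to whole fractional units (e.g., cents).
        int newFraction = (int)Math.Round((total - newAmount) * 100);
        return (newAmount, newFraction);
    }
}
```

For instance, converting 10 units and 50 hundredths at a rate of 2.0 yields 21 units and 0 hundredths.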
Consequences: Advantages

Increases readability and facilitates code maintenance.
It causes the implementation of meaningful mechanisms for working with monetary items.
Consequences: Disadvantages

Using this design pattern can have a negative effect on performance, although most of the time this effect will be very small.
Applicability:

You can benefit from this design pattern when using several monetary units in the program.
Related patterns:

Not all of the following design patterns are directly related to the money design pattern, but reviewing them is useful when implementing it:

Value object
Special case
Name:

Special case

Classification:

Base design patterns

Also known as:

---

Intent:

Using this design pattern, a series of behaviors can be added to the parent class in special situations.

Motivation, Structure, Implementation, and Sample code:

Suppose a code is written inside a method that can be used to get the author's ID, retrieve their information from the database, and return it. The important thing here is what should be returned if no author for the given ID is found. There are usually three ways to answer this question:

Use of exceptions
Return NULL
Returning a specific object
Using method 1 has a negative effect on performance. Throwing an exception in these cases is not a good decision because, as soon as the exception occurs, .NET interrupts normal execution to process it.

Method 2 seems suitable and is often used. However, with this method, it becomes necessary to write extra code to check whether the returned response is NULL, and this method usually still has to fall back on method 1 to inform the user that the author's ID is wrong.

Method 3 is more suitable for this scenario. With this method, an object of a special type is returned. Among the most important features of this special type is that it inherits from the main type in question.

According to the preceding description, suppose we have the Author model as follows:

public class Author
{
    public virtual int AuthorId { get; set; }
    public virtual string FirstName { get; set; }
    public virtual string LastName { get; set; }
    public override string ToString() => $"{FirstName} {LastName}";
}

Note that in the preceding model, the properties are defined as virtual so that we can override them later. Also, suppose the following method returns the information related to the author after receiving the author's ID:

public class AuthorRepository
{
    private readonly List<Author> authorList = new()
    {
        new Author { AuthorId = 1, FirstName = "Vahid", LastName = "Farahmandian" },
        new Author { AuthorId = 2, FirstName = "Ali", LastName = "Mohammadi" }
    };

    public Author Find(int authorId)
    {
        Author result = authorList.FirstOrDefault(x => x.AuthorId == authorId);
        return result;
    }
}

As specified in the Find method, a NULL value is returned if the author ID does not exist in the current implementation. We need to define a special type with the mentioned conditions to correct this problem. Therefore, we define the AuthorNotFound class as follows:

public class AuthorNotFound : Author
{
    public override int AuthorId { get => -1; set { } }
    public override string FirstName { get => ""; set { } }
    public override string LastName { get => ""; set { } }
    public override string ToString() => "Author Not Found!";
}

As seen in the preceding code, the AuthorNotFound class has inherited from the Author class, so this class is considered a special type of the Author class. Now, with this class, you can rewrite the Find method in the AuthorRepository class as follows:

public Author Find(int authorId)
{
    Author result = authorList.FirstOrDefault(x => x.AuthorId == authorId);
    if (result == null)
        return new AuthorNotFound();
    return result;
}

The preceding code says that if the author ID is unavailable, an object of type AuthorNotFound will be returned. When using this code, you can use it as follows:

Author searchResult = new AuthorRepository().Find(3);
if (searchResult is AuthorNotFound)
{
    Console.WriteLine("Author not found!");
}

After receiving the response in the preceding code, it is checked whether the return type is AuthorNotFound or not.

Notes:

You can often use the flyweight design pattern to implement this design pattern. In this case, the amount of memory used will also be saved.
This design pattern is very similar to the null object design pattern. But a null object can be considered a special case of this design pattern. There are no behaviors in the null object design pattern, or these behaviors do nothing. But in the special case design pattern, behaviors can do certain things. For example, in the preceding scenario, the ToString() method in the AuthorNotFound class returns a meaningful message to the user.
Among the most important applications of this design pattern, we can mention the implementation of infinite positive/negative values or NaN in working with numbers.
Consequences: Advantages

By using this design pattern, complications caused by NULL values can be avoided.
Consequences: Disadvantages

Sometimes returning a NULL value will be a much simpler solution, and using this design pattern will increase the complexity of the code as well.
Using this design pattern when critical events occur can cause the program to face problems. In these cases, using exceptions will be the best option.
Applicability:

In order to return the results of mechanisms such as search, this design pattern can be used.
Related patterns:

Not all of the following design patterns are directly related to the special case design pattern, but reviewing them is useful when implementing it:

Flyweight
Null object
Plugin
Name:

Plugin

Classification:

Base design patterns

Also known as:

---

Intent:

This design pattern allows classes to be linked during configuration rather than at compile time.

Motivation, Structure, Implementation, and Sample code:

Suppose there is a method through which a log can be recorded. The behavior of this method is different for test and operational environments. In the test environment, we want to save the logs in a text file, but in the operational environment, we want to save the logs in the database. To implement this requirement, one method is to identify the environment using condition blocks in the mentioned method and perform the desired work based on the environment. The problem with this method is that if we need to check the environment for various tasks, we will end up with messy and complicated code. In addition, if the settings are changed, the code must be changed, recompiled, and finally deployed in the production environment.

Another method is to use the plugin design pattern. This design pattern helps select and execute the appropriate implementation based on the environment using centralized runtime settings. The Figure 15.4 sequence diagram shows the generality of this design pattern:

Figure 15.4: Plugin design pattern sequence diagram

As you can see in Figure 15.4, the request to receive the plugin is given to the Plugin Factory. Based on the requested type, Plugin Factory searches for plugins in the settings and creates an object of that plugin type if it finds the corresponding plugin.

The first step in implementing the plugin design pattern is to use the separated interface design pattern.

public interface ILog
{
    void Execute();
}

The preceding code shows the interface that any class that wants to do logging must implement:

public class TxtLogger : ILog
{
    public void Execute()
        => File.AppendAllText(
            $"E:\\log\\{Guid.NewGuid()}.txt",
            $"Executed at: {DateTime.Now}");
}

public class SqlServerLogger : ILog
{
    public void Execute()
        => new SqlCommand(
            "INSERT INTO log (Content) " +
            $"VALUES(N'Executed {DateTime.Now}')",
            DB.Connection).ExecuteNonQuery();
}

The TxtLogger and SqlServerLogger classes record logs in a text file and in the database, respectively. Both classes implement the ILog interface.

Now that we have the classes, we need a class to select and return the appropriate class by referring to the settings. In this scenario, we have put the settings in the files test.props.json and prod.props.json as follows:

{
    "logging": [
        {
            "interface": "ILog",
            "implementation": "MyModule.Log.Plugin.TxtLogger",
            "assembly": "MyModule"
        }
    ]
}

As the preceding JSON content shows, when working with the logging module, the implementation of the ILog interface is the TxtLogger class in the MyModule assembly. If the environment changes, it is enough to load the prod.props.json file instead of the test.props.json file.

Now, we need the PluginFactory class so that through this class, the appropriate class can be identified and instantiated based on the environment. For this purpose, consider the following codes:

public class PluginFactory
{
    private static readonly List<ConfigModel> configs;

    static PluginFactory()
    {
        var jsonConfigs =
            JObject.Parse(File.ReadAllText(@$"{Environment.Name}.props.json"));
        configs = JsonConvert.DeserializeObject<List<ConfigModel>>(
            jsonConfigs.SelectToken("logging").ToString());
    }

    public static ILog GetPlugin(Type @interface)
    {
        ConfigModel config = configs.FirstOrDefault(
            x => x.Interface == @interface.Name);
        if (config == null)
            throw new Exception("Invalid interface");
        return (ILog)Activator.CreateInstance(
            config.Assembly, config.Implementation).Unwrap();
    }
}

As seen in the preceding code, in the static constructor of the class, the settings are read from the JSON file. Next, the GetPlugin method selects the desired settings from the list of settings, and then the corresponding object is created and returned.
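The ConfigModel type used by PluginFactory is not shown in this excerpt. A minimal shape that matches the JSON settings file and the usage in GetPlugin could look like the following sketch (the property names are inferred from the code above, not taken from the book):

```csharp
// Hypothetical sketch: the settings model deserialized from test.props.json /
// prod.props.json. Property names mirror the JSON keys ("interface",
// "implementation", "assembly") and the usage in PluginFactory.GetPlugin.
public class ConfigModel
{
    public string Interface { get; set; }       // e.g. "ILog"
    public string Implementation { get; set; }  // fully qualified type name
    public string Assembly { get; set; }        // assembly containing the type
}
```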

Thanks to the possibility of having default implementations in interfaces in the C# language, the GetPlugin method can be used without making any changes to the implementing classes. With this explanation, the ILog interface changes as follows:

public interface ILog
{
    public static ILog Instance = PluginFactory.GetPlugin(typeof(ILog));

    void Execute();
}

In the preceding code, the plugin design pattern is combined with the singleton design pattern to increase the readability of the code. These codes can be used as follows:

ILog.Instance.Execute();

As you can see, when referring to Instance in ILog, the appropriate plugin is selected, and its Execute method is executed. Now, with this design, at the time of execution, you can change the settings and see the results without the need to make changes in the code and build and deploy again.

Notes:

Settings can be saved in different formats and presented through different data sources.
The setting of the environment can be done in different ways, including JSON files available in the project configuration, YAML files available for pipelines in CI/CD processes, and sending parameters through the CLI.
The important point about this design pattern is that linking the interface to its implementation should happen dynamically at runtime.
Usually, different plugins are placed in different library files, and the implementation does not need to be in the same assembly as the interface.
This design pattern is closely related to the Separated Interface design pattern. In fact, you can implement interfaces in other modules or files using a Separated Interface design pattern.
When presenting the assembly containing the intended implementation, attention should be paid to dependencies and dependent assemblies. Using the dotnet publish command for the class library project, all dependencies will be copied to the output. Another way to copy all the dependencies and the rest of the files is to use EnableDynamicLoading and set it to true in the csproj file.
When using this design pattern in .NET, it should be noted that, as of the writing of this book, .NET does not allow introducing new frameworks to the host program. Therefore, everything that is needed must be pre-loaded by the host program.
Consequences: Advantages

By using this design pattern, it is possible to eliminate the scattering of settings throughout the code, which makes the code easier to maintain.
This design pattern improves extensibility, and new plugins can be added to the program.
Consequences: Disadvantages

This design pattern requires all extension points to be presented in the form of interfaces. From this point of view, extensibility will be limited.
Maintainability using this design pattern will be difficult because plugins will need to be compatible with different versions of the provided interface.
The complexity of testing will increase, because a plugin may work correctly on its own but still have problems interacting with the rest of the plugins and the program.
Applicability:

When faced with behaviors requiring different implementations during execution, we can benefit from this design pattern.
Related patterns:

Not all of the following design patterns are directly related to the plugin design pattern, but reviewing them is useful when implementing it:

Separated interface
Singleton
Service stub
Name:

Service stub

Classification:

Base design patterns

Also known as:

Mock object

Intent:

By using this design pattern, it is possible to eliminate dependence on problematic services during testing.

Motivation, Structure, Implementation, and Sample code:

Let us assume that to integrate the identity information of people in the country, it is necessary to communicate with the web service of the National Organization for Civil Registration and, given a national code, receive the person's identity information. The problem is that this query web service is not ready while we are developing our software. In this case, the development team would have to wait for the registration web service to be ready before starting their work.

This waiting would delay development on our side. By using the service stub design pattern, we simulate the processes on the side of the National Organization for Civil Registration in the simplest possible way. In that case, there is no need to stop the development and testing process.

To implement the preceding scenario, the following codes can be considered. Let us assume that it is agreed that the return response from the registry office has the following structure:

public class UserInformation
{
    public string NationalCode { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DoB { get; set; }
}

Also assume that the person query service is supposed to take the national code and return an appropriate response with the preceding structure. With this assumption, the IInquiryService interface, which plays the role of the gateway, can be defined as follows:
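The interface definition itself appears to be missing from this excerpt. Based on how it is used later (the static InquiryService member being assigned a stub, and the Inquiry call taking a national code), it presumably looks something like the following reconstruction; treat it as a sketch rather than the book's exact listing:

```csharp
// Hypothetical reconstruction of the missing interface. The static
// InquiryService field holds whichever implementation (real gateway or stub)
// should receive incoming requests; it can be set at runtime, for example by
// the plugin design pattern. Requires C# 8+ (static members in interfaces).
public interface IInquiryService
{
    public static IInquiryService InquiryService;

    UserInformation Inquiry(string nationalCode);
}
```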

The InquiryService property of the preceding interface can be set using the plugin design pattern. Based on the value of this property, the input request is directed either to the web service of the National Organization for Civil Registration or to the service stub defined in our program. The classes responsible for directing the request to that web service or to the service stub must implement the preceding interface.

To connect to the web service of the National Organization for Civil Registration, the following codes are written:

public class InquiryService : IInquiryService
{
    public UserInformation Inquiry(string nationalCode)
        => throw new NotImplementedException();
}

Obviously, in the Inquiry method, the necessary codes to connect to the National Organization for Civil Registration web service must be written. But currently, we do not have information on how to connect to that service.

The codes related to ServiceStub are as follows:

public class InquiryServiceStub : IInquiryService
{
    public UserInformation Inquiry(string nationalCode)
    {
        return new UserInformation()
        {
            NationalCode = nationalCode,
            FirstName = "Vahid",
            LastName = "Farahmandian",
            DoB = new DateTime(1989, 09, 07)
        };
    }
}

Now, to use this structure, it is enough to act as follows:

IInquiryService.InquiryService = new InquiryServiceStub();
var result = IInquiryService.InquiryService.Inquiry("1234567890");

As shown in the preceding code, the InquiryService property is set with a ServiceStub object so that the input request will be directed to the ServiceStub. In the future, whenever the web service of the National Organization for Civil Registration is ready, an object of the InquiryService class can be returned through the plugin so that the incoming requests are directed to the web service of the National Organization for Civil Registration.

Notes:

As much as possible, service stubs should be simple without any complexity.
The important point in using this design pattern is that the design is based on abstractions and interfaces, and the classes should not be directly dependent on each other.
Using the Microsoft Fakes framework, you can automatically create stubs from the interfaces in an assembly. This feature has some limitations in relation to static methods or sealed classes.
Combining this design pattern with gateway and plugin design patterns can provide a better design.
Consequences: Advantages

It is easier to test remote services.
Development does not depend on external services and modules, and the development speed increases.
Consequences: Disadvantages

If the class dependencies are not expressed through abstractions, using this design pattern will double the problem. Therefore, before using this design pattern, it is suggested to make sure that the dependencies between the involved classes are defined correctly so that the service stub can easily be replaced with the actual implementation.
Applicability:

When an external service or module causes damage to the development process due to its internal problems, this design pattern can be used until the status of that service or module is stabilized so that the development process does not stop.
Related patterns:

Not all of the following design patterns are directly related to the service stub design pattern, but reviewing them is useful when implementing it:

Plugin
Gateway
Record set
Name:

Record set

Classification:

Base design patterns

Also known as:

---

Intent:

Using this design pattern, it is possible to provide an in-memory representation of tabular data.

Motivation, Structure, Implementation, and Sample code:

We must get the data of the authors table from the database, perform business logic validation, and deliver it to the UI. One way to do this is to embed the business logic in the database. The main problem with this method is that the business logic will be spread between the application code and the database. A better approach is to fetch the data related to the authors from the database and put it in a table inside a DataSet. Then you can disconnect from the database and work with this DataSet instead. The DataSet in ADO.NET is essentially a record set; often, we will not need to implement a record set ourselves and can use the existing features.

Notes:

One of the most important requirements of the record set is that it should be exactly like the database structure.
If the record set can be serialized, then the record set can also play the role of a data transfer object.
Due to the possibility of disconnecting from the database, you can use UnitOfWork and Optimistic Offline Lock when you need to do something to change the data.
There are two general ways to implement a record set. Using the implicit method and the explicit method. In the implicit method, to access the data of a specific table or column, we give its name in the form of a string to a method and get the result. In the explicit method, a separate class can be defined for each table, and the data can be received from the record set through the internal communication of the classes.
ADO.NET uses the explicit method, and with the help of XSD content, it identifies the relationships between classes and generates classes based on these relationships.
By combining the record set design pattern with the table module, domain logic can be placed in the table module. The data can be fetched from the database and placed in the record set; then, through the table module, the domain logic is applied to this data before it is provided to the UI. When the data changes, the changes are passed from the UI to the table module, where the business rules and necessary validations are applied; the record set is then delivered back so that the changes can be applied to the database.
Consequences: Advantages

Using this design pattern, you can retrieve data from the database and then disconnect from the database and start working with the data.
Consequences: Disadvantages

In the explicit method, each table's corresponding class must be generated, increasing the code's volume and maintenance difficulty.
In the Implicit method, it will be difficult to understand what tables and columns the Record Set contains.
If a large amount of data is fetched from the database, memory consumption will increase, negatively affecting performance.
Applicability:

This design pattern can be useful when it is necessary to take the data from the database and work with the data in offline mode.
Related patterns:

Not all of the following design patterns are directly related to the record set design pattern, but reviewing them is useful when implementing it:

Data transfer object
Unit of work
Optimistic offline lock
Table module
Conclusion
In this chapter, you got acquainted with the base design patterns and learned how to complete the implementations of other design patterns with their help. You also learned how to use these design patterns to solve problems such as converting monetary units to each other.

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors:

https://discord.bpbonline.com

NET 7 Design Patterns In-Depth 14. Session State Design Patterns

Chapter 14
Session State Design Patterns
Introduction
In order to organize the session state, the design patterns of this category can be divided into the following three main sections:

Client session state: Information about the user session is stored on the client side.
Server session state: Information about the user session is stored on the server side.
Database session state: Information about the session is stored in the database.
Structure
In this chapter, we will discuss the following topics:

Session state design patterns
Client session state
Server session state
Database session state
Objectives
In this chapter, you will learn the differences between stateless and stateful software. You will learn the different session maintenance and management methods, how to save the session on the client or server-side or even the database, and each method's differences, advantages, and disadvantages.

Session state design patterns
When we talk about transactions, we often talk about system and business transactions, and this continues into the discussion of stateless or stateful sessions. But first, it should be determined what is meant by stateful or stateless.

When we look at an object, this object consists of a series of data (status) and behaviors. In enterprise software, stateless means a state where the server does not keep the data between two requests. We will face stateful mode if the server needs to store data between two requests.

There are several considerations for dealing with the stateful method. Sometimes we need to store the data during a series of specific sessions. In this case, this data is the session state. Since the session state is often inside the business transaction, then the attributes proposed for the transaction will also be true for the session state.

The important thing about session states is how they are stored. If we want to store data on the client side, then using the client session state design pattern can be useful. The server session state design pattern will be a good option for storing the data in the server memory. The database session state design pattern will be more suitable if we want to store the data in the database.

If we face many users, we will need methods such as clustering to improve throughput. In this case, paying attention to session migration will often be necessary. With session migration, one request may be processed on server A, and the next request, which is a continuation of the previous one, may be processed on server B. The opposite of this method is session affinity, in which one server is responsible for processing all requests of a specific session.

Client session state
Name:

Client session state

Classification:

Session state design patterns

Also known as:

---

Intent:

Using this design pattern, the information related to the user's session is stored on the client side.

Motivation, Structure, Implementation, and Sample code:

Let us suppose that a requirement has been raised, and it is necessary to implement the possibility of video sharing on the website. To implement this requirement, there are various methods, but what is important in this requirement is the possibility of video sharing. The easiest way to share videos on the web is to share via URL. For example, pay attention to the following URL:

http://jinget.ir/videos?id=1

The preceding address consists of several parts:

Exchange protocol: http in the preceding address
Domain name: jinget.ir in the preceding address
Path: videos in the preceding address
Query parameters: id=1 in the preceding address
With the help of the preceding address, by sending a GET request, you can inform the server that we intend to receive the video with id equal to 1. In this type of connection, the server does not need to keep any information about the request; all the necessary information is stored on the client side. This method is one of the methods the client session state design pattern tries to provide. Suppose the user changes the preceding address and sends the parameter id=2 instead of id=1. Since the server has not saved any information about the user's session, it will process the request and return the video with an Id equal to 2 in response. Therefore, one of the problems of using this method will be its security.

One of the ways to connect the user’s session to the client and the server is to return the session's id to the client in the form of a URL. When the client sends its request to the server by providing the session ID, it helps the server to find the complete information of the previous session related to the given session ID. Often, in order to reduce the probability of estimating and guessing the session ID, this ID is generated as a random string. For example, consider the following URL:

http://jinget.ir/sessionId=ahy32hk

In the preceding URL, the value of ahy32hk is a random value for the sessionId. By providing it to the server, the client can cause the corresponding session to be found on the server and the information processed.

To protect security, suppose we want to send information related to the user's identity as a request. Usually, this information includes a token with long string content. The next problem in using this method is the limitation of the length of the address. Therefore, we need to either go for other methods or combine the preceding method with other methods. Another method is using the hidden field to save the session information on the user's side. Hidden fields in HTML can be defined as follows:
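The HTML snippet itself is missing from this excerpt. A hidden field is declared with an input element whose type is hidden; the field name and value below are illustrative only:

```html
<!-- Hypothetical example: the value would be provided by the server.
     The element exists in the page but is not displayed to the user. -->
<input type="hidden" name="userToken" value="...server-provided token..." />
```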

Hidden fields are uncommon nowadays, but this method allows for storing and sending more information to the server. Following the preceding requirement, we can get the user's identity information from the server and save it as a string in the hidden field. The important feature of a hidden field is that it is placed on the page as an HTML element but not displayed to the user. This does not mean that the user does not have access to its value, and the user can view the content stored in the hidden field by referring to the source of the HTML page.

Users can view this content and even change it, which creates a security threat. For this purpose, encryption methods can be used. The drawback of encryption is that it degrades the application's overall performance, because each request and response must be encrypted and decrypted before and after processing.

The advantage of this method is that you can send any type of request to the server, and it is not limited to GET requests like the URL method. For example, the DTO received from the server can be serialized in XML or JSON format and stored in the hidden field. The important problem in using this method is that today, with the expansion of web service and Web API, the client does not necessarily include HTML codes that can store session information in them.

There is also a third method to solve the problem related to hidden fields, and that is to use cookies. The use of cookies has always been controversial and has its supporters and opponents. One of the most important problems with cookies is that cookies may be disabled in the user's browser. In this case, the program's performance will face problems, so when entering most websites that use cookies, the user is asked to allow the website to use cookies.

For example, in the following code, sessionState is set inside the web.config file. As this code snippet shows, the session state uses an id cookie to identify client sessions. Therefore, in every request and response, this cookie is transferred between the user and the server.
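The configuration snippet is missing from this excerpt. Based on the description (a cookie named id identifying client sessions), the sessionState element presumably resembles the following; the attribute values are an assumption:

```xml
<!-- Hypothetical reconstruction of the web.config fragment described above. -->
<configuration>
  <system.web>
    <sessionState mode="InProc" cookieless="UseCookies" cookieName="id" />
  </system.web>
</configuration>
```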

The important thing about the cookie is that the information included in the cookie is sent to the server with every request and returned to the client in the server's response. Like the URL method, cookies also have a size limit, which can be an important problem. Also, cookies, like hidden fields, have the problem that the user can see or change their content. Cookies provide more important features than hidden fields. For example, cookies can be set to only be read on the server side, and client codes do not have access to cookie values.

Notes:

Regardless of what design pattern is used to maintain the session or its information, this design pattern is required to communicate between the request on the client side with the related processing and the response on the server side.
An object is generated on the server side and stored in memory. A key is needed to identify this object; we call this key the SessionId. The response sent to the client contains this SessionId. In its next request to the server, the client presents the SessionId so that the object related to its session can be identified among the many stored objects.
Regardless of which method is used to store and send session information to the server, the received information must be validated from the beginning whenever a request is sent.
Consequences: Advantages

To have a stateless server, using this design pattern is very useful. The stateless nature of the server makes it possible to implement clustering methods on the server side with the lowest cost.
Consequences: Disadvantages

This design pattern is always influenced by the amount of data we want to store. Usually, for a large volume of data, the location and method of data storage on the client side will be an important problem.
Since data is stored on the client side, security threats are important in this design pattern.
Applicability:

This design pattern can be used to implement the server in stateless mode.
Related patterns:

Some of the following design patterns are not related to the client session state design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Server session state
Database session state
Server session state
Name:

Server session state

Classification:

Session state design patterns

Also known as:

---

Intent:

Using this design pattern, the information related to the user session is stored on the server side.

Motivation, Structure, Implementation, and Sample code:

In the example provided for the client session state design pattern, suppose we want to store information about the session on the server side. In this case, an object is generated on the server side for each session.

By default, this object is stored in memory, and the SessionId is used to refer to that object and find it among the many objects related to different sessions. The SessionId is the key to finding information about the session. This key must be provided to the client in some way so that, in the next request the client sends to the server, by providing the SessionId, the information related to their session can be found and the processing and response generation operations can be done. Figure 14.1 shows the overall process of the server session state design pattern:

Figure 14.1: Server Session State

The problem with the preceding scenario is that with the increase in the number of users or the increase in the number of sessions, the number of objects in the memory will increase, and this will cause a significant amount of server memory to be allocated to store these objects. In addition to increasing the amount of memory consumption on the server side, the possibility of clustering and implementing load-balancing methods will also be difficult.

The session state can be serialized and stored in a data source using the memento design pattern to solve this problem. In this case, choosing the serialization format will be important. As mentioned, data can be serialized and stored in string or binary formats. This will make the server stateless while enabling clustering, load balancing, and so on, on the server side, although this method will affect the overall performance of the program.

The location of the data source will be important in the preceding method. You can keep the data source on the same application server or use a separate server to store this information. Obviously, if this data is stored on the same application server, flexibility in designing and implementing clustering solutions and the like will be reduced.

To implement this design pattern, suppose we want to use an in-memory session provider. We need to install the Microsoft.AspNetCore.Session package to enable the session middleware. Run the following command to install this package:

dotnet add package Microsoft.AspNetCore.Session

Note that you can use different methods to install NuGet packages. For example, you can use package manager CLI as follows:

NuGet\Install-Package Microsoft.AspNetCore.Session

For more information about this NuGet package, please refer to the following link: https://www.nuget.org/packages/Microsoft.AspNetCore.Session

After installing this package, we need to configure the use of the session as follows in Program.cs file:

builder.Services.AddDistributedMemoryCache();

builder.Services.AddSession(options =>
{
    options.Cookie.Name = "SessionInfo";
    options.IdleTimeout = TimeSpan.FromMinutes(60);
});

The preceding code states that an in-memory provider is going to be used. It also specifies that the session ID will be stored in the SessionInfo cookie in the client's browser so that this cookie is sent to the server with each request. By presenting this cookie, the server is able to find the client-related session and load its data.

The IdleTimeout indicates how long the session can be idle before its contents are abandoned. Each session access resets the timeout. Note that this only applies to the content of the session, not to the cookie.

Finally, calling app.UseSession() adds the SessionMiddleware, which automatically enables session state for the application. After configuring the preceding settings, the Program.cs file should look as follows:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllersWithViews();
builder.Services.AddDistributedMemoryCache();
builder.Services.AddSession(options =>
{
    options.Cookie.Name = "SessionInfo";
    options.IdleTimeout = TimeSpan.FromMinutes(60);
});

var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Home/Error");
}

app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.MapControllerRoute(
    name: "default",
    pattern: "{controller=Home}/{action=Index}/{id?}");
app.UseSession();

app.Run();

When the client loads the application, a cookie will be created in the client’s browser, which will be used to find the session data related to the specific client. Figure 14.2 shows this cookie (SessionInfo cookie):

Figure 14.2: Cookie created in the client browser

Now we can set the value to the session object as follows:

public IActionResult Index()
{
    HttpContext.Session.SetString("Name", "Vahid Farahmandian");
    HttpContext.Session.SetString("TimeStamp", DateTime.Now.ToString());
    return View();
}

And get the value from the session object as follows:

public IActionResult Privacy()
{
    ViewBag.data = HttpContext.Session.GetString("Name") +
                   HttpContext.Session.GetString("TimeStamp");
    return View();
}

If we navigate the browser to the Index view, values will be set in the session object, and when we navigate to the Privacy view, we can see the values stored in the session object.

If we host the application in IIS, when we restart the application pool, all the data related to the session will be abandoned because the session information is stored in the server’s memory. The memory will be freed up by restarting the application pool.
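Since the session API stores only strings (via SetString/GetString) and byte arrays, a complex object has to be serialized before it is placed in the session. The following self-contained sketch simulates that idea with a plain dictionary standing in for the session's string storage; the AuthorInfo type, helper names, and keys are our own assumptions, not from the book:

```csharp
using System.Collections.Generic;
using System.Text.Json;

// The session store only holds strings (and byte arrays), so a complex object
// must be serialized before storing. A Dictionary stands in for ISession here.
public record AuthorInfo(string FirstName, string LastName);

public static class SessionJson
{
    // Serialize an object to JSON and store it under the given key.
    public static void SetObject<T>(IDictionary<string, string> session,
                                    string key, T value) =>
        session[key] = JsonSerializer.Serialize(value);

    // Read the JSON back and deserialize it; default(T) if the key is absent.
    public static T? GetObject<T>(IDictionary<string, string> session,
                                  string key) =>
        session.TryGetValue(key, out var json)
            ? JsonSerializer.Deserialize<T>(json)
            : default;
}
```

With the real ISession, the same round-trip would go through HttpContext.Session.SetString and GetString.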

Notes:

To store session information, you can use Serialized LOB design pattern.
Consequences: Advantages

The implementation and use of this design pattern is very simple.
Consequences: Disadvantages

It is difficult to implement solutions to improve efficiency, including clustering.
Applicability:

This design pattern can be used to implement the server in stateful mode.
Related patterns:

Some of the following design patterns are not related to the server session state design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Client session state
Database session state
Memento
Serialized lob
Database session state
Name:

Database session state

Classification:

Session state design patterns

Also known as:

---

Intent:

This design pattern stores information about the session in the database.

Motivation, Structure, Implementation, and Sample code:

The database session state design pattern is similar to the server session state design pattern. The difference is that when the SessionId is sent from the client to the server, the server refers to the database and reads the information related to the SessionId from the database by providing this SessionId to the database. Any changes that occur to this information must be rewritten in the database.

Figure 14.3 shows the overall process of the Database Session State design pattern:

Figure 14.3: Database Session State

When the session is closed, the information related to the session should also be deleted. Usually, SessionId is the primary key in the table where session information is stored, and it can be used to delete a closed session. Sometimes the closing of the session may not be accompanied by a notification. In this case, by considering a timeout for the sessions, the sessions table can be visited at certain intervals, and the sessions whose timeout has been reached can be deleted.
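The timeout-based cleanup just described can be sketched with a small in-memory model; the SessionTable class and its member names are our own assumptions, not part of the pattern itself:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sessions that were not closed explicitly are removed once their idle
// timeout has been reached; each access resets the clock for that session.
public class SessionTable
{
    private readonly Dictionary<string, DateTime> _lastAccess = new();
    private readonly TimeSpan _timeout;

    public SessionTable(TimeSpan timeout) => _timeout = timeout;

    public void Touch(string sessionId, DateTime now) =>
        _lastAccess[sessionId] = now; // each access resets the timeout

    // Periodic cleanup job: delete sessions whose timeout has been reached.
    public int RemoveExpired(DateTime now)
    {
        var expired = _lastAccess
            .Where(kv => now - kv.Value >= _timeout)
            .Select(kv => kv.Key)
            .ToList();
        foreach (var id in expired)
            _lastAccess.Remove(id);
        return expired.Count;
    }

    public bool IsAlive(string sessionId) => _lastAccess.ContainsKey(sessionId);
}
```

In a database session store, the same logic would typically be a scheduled job issuing a DELETE against the sessions table for rows past their expiry time.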

Suppose that we want to implement the mechanism of saving the session in the database using Microsoft SQL Server and ASP.NET Core.

Note that implementing this design pattern in ASP.NET Core differs from ASP.NET. For example, to implement this design pattern in ASP.NET, it was necessary to create several tables and procedures in the database using the aspnet_regsql command or tool, but in ASP.NET Core, these tables and stored procedures are no longer needed, and the design pattern can be implemented with just one table.

There are different ways to create this table, including using the NuGet package called Microsoft.Extensions.Caching.SqlConfig.Tools, or you can create the table manually and directly in the SQL Server.

According to the preceding explanations, the following steps can be taken to implement this design pattern in .NET. To create the table by using the NuGet package, run the following command to install the package:

dotnet add package Microsoft.Extensions.Caching.SqlConfig.Tools

Note that you can use different methods to install NuGet packages. For example, you can use the Package Manager CLI as follows:

NuGet\Install-Package Microsoft.Extensions.Caching.SqlConfig.Tools

For more information about this NuGet package, please refer to the following link: https://www.nuget.org/packages/Microsoft.Extensions.Caching.SqlConfig.Tools

When the package is installed successfully, run the following command to create the corresponding table in the database:
dotnet sql-cache create <connection string> <schema name> <table name>

In the preceding command, <connection string> is the connection string used to connect to the destination database, <schema name> indicates under which schema in the database the desired table should be created, and finally, <table name> is the name of the desired table. For example:

dotnet sql-cache create "Data Source=jinget.ir; Initial Catalog=MyDb; Integrated Security=True;" "dbo" "SessionStore"

As specified in the preceding command, sessions will be stored in the SessionStore table, which is a member of the dbo schema inside the MyDb database located on the SQL Server instance reachable at jinget.ir.

Instead of using the preceding NuGet package (which is deprecated), the table can be created manually in the database; it is enough to run the following query:

USE MyDB
GO

CREATE TABLE [dbo].[SessionStore](
    [Id] [nvarchar](449) NOT NULL,
    [Value] [varbinary](max) NOT NULL,
    [ExpiresAtTime] [datetimeoffset](7) NOT NULL,
    [SlidingExpirationInSeconds] [bigint] NULL,
    [AbsoluteExpiration] [datetimeoffset](7) NULL,
    CONSTRAINT [PK_Index_Id] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
        IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
    ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

CREATE NONCLUSTERED INDEX [NCI_Index_ExpiresAtTime]
ON [dbo].[SessionStore]
(
    [ExpiresAtTime] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
    SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

After creating the SessionStore table, the following settings should be applied in Program.cs:

builder.Services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString = @"
        Data Source=.;
        Initial Catalog=MyDb;
        Integrated Security=True;";
    options.SchemaName = "dbo";
    options.TableName = "SessionStore";
});

builder.Services.AddSession(options =>
{
    options.Cookie.Name = "SessionInfo";
    options.IdleTimeout = TimeSpan.FromMinutes(60);
});

Calling app.UseSession() adds the SessionMiddleware, which automatically enables session state for the application. After configuring the preceding settings, the Program.cs file should look as follows:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllersWithViews();
builder.Services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString = @"
        Data Source=.;
        Initial Catalog=MyDb;
        Integrated Security=True;";
    options.SchemaName = "dbo";
    options.TableName = "SessionStore";
});
builder.Services.AddSession(options =>
{
    options.Cookie.Name = "SessionInfo";
    options.IdleTimeout = TimeSpan.FromMinutes(60);
});

var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Home/Error");
}

app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.MapControllerRoute(
    name: "default",
    pattern: "{controller=Home}/{action=Index}/{id?}");
app.UseSession();

app.Run();

The SQL session state internally uses a memory cache. To enable the SQL Server session store, the Microsoft.Extensions.Caching.SqlServer NuGet package should be installed as a dependency, along with Microsoft.AspNet.Session, to enable the use of the session middleware. To install these packages, the following commands can be used:

dotnet add package Microsoft.Extensions.Caching.SqlServer

dotnet add package Microsoft.AspNet.Session

You can also install these two packages using NuGet CLI as follow:

NuGet\Install-Package Microsoft.Extensions.Caching.SqlServer

NuGet\Install-Package Microsoft.AspNet.Session

For more information about these two NuGet packages, refer to the following links:

https://www.nuget.org/packages/Microsoft.Extensions.Caching.SqlServer/8.0.0-preview.3.23177.8

https://www.nuget.org/packages/Microsoft.AspNet.Session/1.0.0-rc1-final

As specified in the Program.cs code, the session store table is configured using the AddDistributedSqlServerCache method. These configurations could be placed in the appsettings.json file, but they are hard-coded in the given code for simplicity.
Now we can set the value to the session object as follows:

public IActionResult Index()
{
    HttpContext.Session.SetString("Name", "Vahid Farahmandian");
    HttpContext.Session.SetString("TimeStamp", DateTime.Now.ToString());
    return View();
}

Also, the same as the server session state design pattern, when the client loads the application, a cookie will be created in their browser, as shown in Figure 14.2.

As soon as the data is set in the Session object, a record will also be inserted into the SessionStore table, as shown in Figure 14.4:

Figure 14.4: SessionStore table

To get the values from the session object, we can do as follows:

public IActionResult Privacy()
{
    ViewBag.data = HttpContext.Session.GetString("Name") +
                   HttpContext.Session.GetString("TimeStamp");
    return View();
}

If we navigate the browser to the Index view, values will be set in the session object, and when we navigate to the Privacy view, we can see the values stored in the session object.

If we host the application in IIS, when we restart the application pool, all the data related to the session will be retained because the session information is persisted in the SQL Server database, so restarting the application pool will not lose session information.

Notes:

The session information stored in the database is usually final. If we are in the middle of processing an order and need to save its current (non-final) status, this information should not be combined with final information. To distinguish finalized from unfinished information, a separate column or table can be used to determine which record is final and which is temporary.
Consequences: Advantages

By saving the session information in the database, the server becomes stateless. Although this transformation facilitates the implementation of clustering infrastructure, it has a negative effect on performance because, with each request that enters the server, the relevant session information must be fetched from the database.
Consequences: Disadvantages

It can reduce the overall performance of the program.
Deleting information related to old and unused sessions is complicated.
Applicability:

This design pattern can be used when storing session information in the database.
Using this design pattern will be very beneficial in limiting the number of online users.
Related patterns:

Some of the following design patterns are not related to the Database Session State design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Client session state
Server session state
Conclusion
In this chapter, you got acquainted with various storage and session management methods and learned how to use client, server, and database session state methods to manage the session status. Note that, as stated, the choice of each of these design patterns is directly related to the requirement.

In the next chapter, you will learn about base design patterns.

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors:

https://discord.bpbonline.com

NET 7 Design Patterns In-Depth 13. Offline Concurrency Design Patterns

Chapter 13
Offline Concurrency Design Patterns
Introduction
To organize offline concurrency, the design patterns of this category can be divided into the following four main sections:

Optimistic offline lock: By identifying the collision and rolling back the transaction, it prevents the occurrence of a collision between the business transactions simultaneously.
Pessimistic offline lock: By making data available to only one transaction, it is possible to prevent collisions between simultaneous transactions.
Coarse-grained lock: With the help of a lock, a lock can be defined on a set of related objects.
Implicit lock: Frameworks are responsible for managing locks.
Note that in writing this chapter, some images from Martin Fowler's book Patterns of Enterprise Application Architecture are used.

Structure
In this chapter, we will cover the following topics:

Offline concurrency design patterns
Optimistic offline lock
Pessimistic offline lock
Coarse-grained lock
Implicit lock
Objectives
In this chapter, you will deal with different transaction concepts and learn how to manage problems related to concurrency and transaction management problems with the help of different design patterns. In this chapter, you will learn how to prevent unwanted problems caused by concurrency by locking different resources and managing requests and give your software the ability to process requests simultaneously and deliver better performance to users.

Offline concurrency design patterns
One of the most complicated parts of software production is dealing with concurrency-related topics. Whenever several threads or processes have access to the same data, there is a possibility of concurrency-related problems, so one should think about concurrency in production software. There are different solutions at different levels for managing concurrency in enterprise applications. For example, you can use transactions, the internal features of relational databases, and so on. However, this does not mean that concurrency management can simply be delegated to these methods and tools.

One of the most frequent concurrent problems in programs is the lost update problem. During this problem, the update operation of one transaction is overwritten by another transaction, and the changes of the first transaction are lost. The next concurrency problem is inconsistent reading. During this problem, the data is changed between two reading operations; therefore, the data read at the beginning does not match the data read later.

Both problems endanger the accuracy of the data and the ability to trust the data, which will be the root of strange and wrong behavior in the program. If we focus too much on data accuracy, we end up with a solution where different transactions wait for previous transactions to finish their work. Besides increasing data accuracy, this waiting threatens the data's liveness, and it is always necessary to balance the data's accuracy against its liveness.

To manage concurrency, there are different methods. One way is to allow multiple people to read the data, but when saving the changes, accept the changes from someone whose version of the data is the same as the version in the data source. This method is what optimistic offline lock tries to provide. Another method is to lock the data read by one person and not allow another person to read it until the first person finishes his transaction. This method is also what pessimistic offline lock tries to provide. The choice between these two methods is between collision detection and collision prevention.
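The optimistic side of this choice can be illustrated with a minimal in-memory store: every reader receives the current version number, and a save is accepted only if that version still matches the one in the store. All names here are our own; this is a sketch of the idea, not the book's implementation:

```csharp
using System.Collections.Generic;

// Optimistic collision detection: reads are unrestricted, but an update is
// accepted only when the writer's version matches the stored version.
public class VersionedStore
{
    private readonly Dictionary<int, (string Data, int Version)> _rows = new();

    public void Insert(int id, string data) => _rows[id] = (data, 1);

    public (string Data, int Version) Read(int id) => _rows[id];

    // Succeeds only if no one changed the row since we read it.
    public bool TryUpdate(int id, string newData, int readVersion)
    {
        var current = _rows[id];
        if (current.Version != readVersion)
            return false;                 // collision detected; caller rolls back
        _rows[id] = (newData, current.Version + 1);
        return true;
    }
}
```

The second writer's rejection is exactly the "identify the collision and roll back" behavior that optimistic offline lock prescribes.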

Deadlock is an important issue that can occur when using the pessimistic method. There are different methods to identify and manage deadlocks. One of these methods is sacrificing one party and canceling its operation so that its locks are also released. The second method is to set a lifetime for the locks so that if a lock is not released within a certain period, the transaction is automatically canceled and the lock is released, so the deadlock does not occur.
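The second method (a lifetime on each lock) can be sketched as follows; the LeaseLockManager name and its API are our own assumptions:

```csharp
using System;
using System.Collections.Generic;

// Each lock is granted with an expiry time; an expired lock may be taken over
// by another transaction, so a crashed holder cannot cause a permanent deadlock.
public class LeaseLockManager
{
    private readonly Dictionary<string, (string Owner, DateTime ExpiresAt)> _locks = new();
    private readonly TimeSpan _lifetime;

    public LeaseLockManager(TimeSpan lifetime) => _lifetime = lifetime;

    public bool TryAcquire(string resource, string owner, DateTime now)
    {
        if (_locks.TryGetValue(resource, out var l) &&
            l.Owner != owner && now < l.ExpiresAt)
            return false;                 // held by someone else and not yet expired
        _locks[resource] = (owner, now + _lifetime);
        return true;
    }

    public void Release(string resource, string owner)
    {
        if (_locks.TryGetValue(resource, out var l) && l.Owner == owner)
            _locks.Remove(resource);
    }
}
```

A real lock manager would keep this table in shared storage (for example, a database table) so that all application servers see the same locks.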

When talking about concurrency, the topic of transactions is likely to come up as well. A transaction has an important feature: either all the changes are applied to the database or none are. Software transactions have four important characteristics known as Atomicity-Consistency-Isolation-Durability (ACID):

Atomicity: Either the whole work is done, or none of the work parts are done. Obviously, in the process of doing the work, if one of the steps is not done, all the changes must be rolled back. Otherwise, the changes can be committed at the end of all the work. For example, in an application that transfers funds from one account to another, the atomicity property ensures that the corresponding credit is made to the other if a debit is made successfully from one account.
Consistency: All resources before and after a transaction must be consistent and healthy. For example, in an application that transfers funds from one account to another, the consistency property ensures that the total value of funds in both accounts is the same at the start and end of each transaction.
Isolation: The results obtained during a transaction should not be accessible to other transactions until the completion of that transaction. For example, in an application that transfers funds from one account to another, the isolation property ensures that another transaction sees the transferred funds in one account or the other but not in both.
Durability: In case of an error or problem, the results of successful transaction changes should not be lost. For example, in an application that transfers funds from one account to another, the durability property ensures that the changes made to each account will not be reversed.
To design transactions, there are three different methods:

Long transaction: Transactions that span several requests.
Request transaction: Transactions related to a request and the transaction are also determined upon completion of the request.
Late transaction: All reading operations are performed outside the transaction, and only data changes are included.
When using a transaction, knowing what will be locked during the transaction is very important. Locking a table during a transaction is usually dangerous because it will cause the rest of the transactions that need the table to wait until the lock is removed before they can do their work. Also, different isolation levels can be specified when using transactions. Each of these isolation levels has a different strength and behavior. Isolation levels include serializable, repeatable read, read committed, and read uncommitted. SQL Server offers other isolation levels beyond this book's scope.

When managing concurrency in transactions, optimistic offline lock is usually preferred because it is easier to implement and delivers better output in terms of liveness. Despite these advantages, the big drawback of this method is that the user will only notice an error at the end of their work, when saving, which can lead to dissatisfaction. In this case, the pessimistic method can be useful, but compared to the optimistic method, it is more complicated to implement and has worse output in terms of liveness.

Another method is not to apply the lock to each object but instead to a group of objects, which is what coarse-grained lock tries to deliver. An even better method is to use the implicit lock, where the layer supertype or the framework manages the concurrency and applies and releases the lock.

Optimistic offline lock
Name:

Optimistic offline lock

Classification:

Offline concurrency design patterns

Also known as:

---

Intent:

Using this design pattern, it is possible to prevent collisions between simultaneous business transactions by identifying the collision and rolling back the transaction.

Motivation, Structure, Implementation, and Sample code:

Suppose a requirement has been raised that different users should be able to update the authors' data. To implement this mechanism, you can easily implement the update operation. The problem is that two users may want to update the data of the same author at the same time. In this case, most of the update operations of one of the users will be lost, and the so-called lost update will occur. To clarify the scenario, consider the following sequence diagram:

Figure 13.1: Update author sequence diagram

As shown in Figure 13.1 diagram, User1 first gets the data related to Author1 and starts changing this information. Meanwhile, User2 also gets Author1's data, changes it, and stores it in the database. Next, User1 saves their changes in the database. In this case, if User1's changes are written to the database, then User2's will be lost. To prevent this, you can prevent User1's changes from being saved and return an error to them.

The important thing is, how can you find out that Author1's data has changed before saving User1's changes? There are different ways to answer this question, one of which is to use the Version field in the database table.

According to the preceding explanation, the proposed requirement can be implemented as follows:

CREATE TABLE author(
    AuthorId INT PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    Version INT NOT NULL
)

According to the table's structure, the Version column will store the data version. In this way, every time the data of the record changes, the value in Version increases by one. In this case, whenever an UPDATE or DELETE operation is sent to the table, the condition related to the Version is sent along with the other conditions, as follows:

SELECT * FROM author WHERE AuthorId = 1

By executing the preceding query, the data related to the author with Id 1 will be delivered to the user. Suppose the data is as follows:

Author: 1
FirstName: Vahid
LastName: Farahmandian
Version: 1

Now that this data has been delivered to User1, this user is busy making changes. Meanwhile, User2 also requests the same data, so the same data is delivered to them as well. Next, User2 applies their changes and saves them in the database. For this, the following query must be sent to the database:

UPDATE author SET FirstName='Ali', LastName='Rahimi', Version = Version + 1
WHERE AuthorId=1 AND Version = 1

As specified in the WHERE section, the condition related to the Version is sent to the database along with other conditions. After applying these changes, the data in the table will change as follows:

Author: 1
FirstName: Ali
LastName: Rahimi
Version: 2

Then User1 sends their changes to the database. The point here is that the Version value delivered to User1 was equal to 1, so the following query will be delivered to the database:

UPDATE author SET FirstName='Vahid', LastName='Hassani', Version = Version + 1
WHERE AuthorId=1 AND Version = 1

No record matches the given conditions in the database, so the database will reply that no records were changed by the query. When no record has been changed, it means another user has already changed the desired record, and you can inform the user of this conflict by raising an error. With the preceding explanation, the following code can be considered for the Author:

public async Task<Author> Find(int authorId)
{
    // Simplified: connection setup is omitted for brevity.
    var reader = await new SqlCommand(
        $"SELECT * FROM author WHERE AuthorId={authorId}")
        .ExecuteReaderAsync();
    reader.Read();
    return new Author()
    {
        AuthorId = (int)reader["AuthorId"],
        FirstName = (string)reader["FirstName"],
        LastName = (string)reader["LastName"],
        Version = (int)reader["Version"]
    };
}

public async Task<bool> ModifyAuthor(Author author)
{
    var result = await new SqlCommand(
        $"UPDATE author " +
        $"SET FirstName='{author.FirstName}', " +
        $"LastName='{author.LastName}', " +
        $"Version = Version + 1 " +
        $"WHERE AuthorId={author.AuthorId} AND Version={author.Version}")
        .ExecuteNonQueryAsync();
    if (result == 0)
        throw new DBConcurrencyException();
    else
        return true;
}

As seen in the preceding codes, when sending the UPDATE query, the Version condition is also sent along with the other conditions. According to the database's response, the collision event is identified, and the user is informed by sending an exception. The same process can be considered for the DELETE operation and send the Version condition to the database along with the other conditions in DELETE.

Notes:

Besides the version, other methods can be used to implement this design pattern. For example, columns can be used to record who made the change or when it was made. This method has serious flaws: focusing on time can cause errors and problems because the client and server clocks may differ. Another method is to specify all columns in the WHERE clause when modifying the record. The problem with this method is that the queries may be long, or the database may be unable to use the right index to speed up the work.
Normally, this design pattern cannot be used to identify inconsistent reads. To prevent this problem, you can read the data with its Version as well, but it is then necessary to have a proper isolation level in the database (Repeatable Read or stronger), which will cost a lot. A better solution to this problem is using the coarse-grained lock design pattern.
One of the famous applications of this design pattern is in the design of Source Control Management (SCM) systems.
To optimally implement this design pattern, automatic integration strategies can be used during a collision.
Using the layer supertype design pattern to reduce the amount of code in the design of this pattern can be useful.
Using the entity framework, this design pattern will be very simple.
To keep a single copy of a record during a transaction, the identity map design pattern can be useful; otherwise, during a transaction, we may encounter the phenomenon of inconsistent reads.
You can manage transactions better by combining the UnitOfWork design pattern with this one.
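As the notes suggest, Entity Framework makes this pattern almost free: a property marked as a concurrency token is automatically added to the WHERE clause of the generated UPDATE/DELETE statements, and a DbUpdateConcurrencyException is thrown when no row matches. A minimal sketch, assuming the Microsoft.EntityFrameworkCore package; the entity and helper names are our own, and the snippet is not runnable without a configured database provider:

```csharp
// Sketch only: requires the Microsoft.EntityFrameworkCore package and a
// configured provider; entity and helper names are our own assumptions.
using System.ComponentModel.DataAnnotations;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Author
{
    public int AuthorId { get; set; }
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";

    [Timestamp]   // concurrency token: maps to a SQL Server rowversion column
    public byte[] Version { get; set; } = default!;
}

public static class AuthorUpdater
{
    // EF adds "WHERE AuthorId = @id AND Version = @originalVersion" to the
    // UPDATE; zero affected rows means another user changed the record.
    public static async Task<bool> TryModifyAsync(DbContext db, Author author)
    {
        db.Update(author);
        try
        {
            await db.SaveChangesAsync();
            return true;
        }
        catch (DbUpdateConcurrencyException)
        {
            return false;   // collision detected: roll back and inform the user
        }
    }
}
```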
Consequences: Advantages

The implementation of this design pattern is very simple.
There is no need for record locking overhead when using this design pattern.
Consequences: Disadvantages

When the system load is high, and there are many simultaneous transactions, using this design pattern will cause many transactions to be rolled back, and the user experience will suffer.
Applicability:

When the probability of a collision between two different business transactions is low, using this design pattern can be useful; otherwise, using the pessimistic offline lock design pattern will be a better option.
Related patterns:

Some of the following design patterns are not related to the optimistic offline lock design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Coarse-grained lock
Layer supertype
Identity map
Unit of work
Pessimistic offline lock
Pessimistic offline lock
Name:

Pessimistic offline lock

Classification:

Offline concurrency design patterns

Also known as:

---

Intent:

By making data available to only one transaction at a time, this design pattern prevents collisions between simultaneous transactions.

Motivation, Structure, Implementation, and Sample code:

As seen in the optimistic offline lock design pattern, that pattern can be used to manage concurrency and prevent collisions. But the method it uses can identify a collision only at the end of the work, when the changes are sent to the database to be saved. In scenarios with many concurrent transactions, this causes serious problems and increases dissatisfaction, because only after doing all the work does the user find out that the operation could not be saved.

You can use the pessimistic offline lock design pattern to prevent this from happening. With this pattern, once data has been handed to a transaction, its operations can be performed and the changes saved, because a lock is placed on the data when it is retrieved from the database so that no other transaction can retrieve it.

The implementation of the pessimistic offline lock design pattern includes three steps:

Determining what type of lock is needed: To choose the right type of lock, factors such as providing the maximum possible concurrency, appropriately responding to business needs, and minimizing the complexity of the code should be considered. The different types of locks are:
Exclusive write lock: The data is locked for modification, and only one transaction at a time is allowed to modify it.
Exclusive read lock: The data is locked for reading. Naturally, this method imposes stricter restrictions on data access.
Read/write lock: This type will combine the previous two locks. For this type of lock, the following rules apply:
Read and write locks are mutually exclusive: if a transaction holds a read lock on a record, other transactions cannot obtain a write lock on it. The opposite is also true; if a transaction holds a write lock on a record, other transactions cannot obtain a read lock.
Read locks can be granted to multiple transactions at once, which increases the program's concurrency.
Building the lock management module: The task of this module is to manage transaction requests to acquire or release locks. To do so, it must know what is locked at any given moment and who holds each lock. This information can be stored in a database table or an in-memory object; if an in-memory object is used, it must be a singleton. If the web server is clustered, the database table approach is the reasonable choice. When a table is used, managing the concurrency of the table itself becomes very important; for this purpose, you can access the table with the Serializable isolation level, and stored procedures with an appropriate isolation level can also be useful here. The important point is that business transactions must interact only with the lock management module, regardless of where and how the locks are stored, and never access the lock objects or tables directly.
Defining business procedures and their use of locks: when defining these procedures, the following questions must be answered:
What should be locked and when?
To answer this, "when" must be settled first: the lock must be acquired before the data is delivered to the program. Whether the lock is acquired before or after fetching the data is directly related to the transaction's isolation level, but acquiring the lock and then fetching the data improves reliability.
Once "when" is determined, "what" must be answered: what is locked is usually the table's primary key value, or whatever value is used to find the record.
When can the lock be released? The lock must be released whenever the transaction is completed (Commit or Rollback).
What should happen when it is not possible to grant a lock? In this case, the simplest response is to cancel the transaction, since the very purpose of this design pattern is to inform the user at the start of the work when the data cannot be accessed.
If, in addition to the preceding three steps, the pessimistic offline lock is used as a complement to the optimistic offline lock, it is also necessary to determine which records should be locked.
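The second step above, the lock management module, can be sketched as an in-memory singleton for a single (non-clustered) server. This is an illustrative sketch, not the book's implementation; the names and the fail-fast, exclusive-write semantics are assumptions:

```csharp
using System;
using System.Collections.Concurrent;

// Minimal in-memory lock manager sketch (exclusive write lock).
// In a clustered web server these locks would live in a database
// table instead, as the text explains.
public static class InMemoryLockManager
{
    // Maps a lockable id to the owner that currently holds it.
    private static readonly ConcurrentDictionary<int, string> Locks = new();

    public static bool TryAcquire(int lockable, string owner) =>
        // TryAdd is atomic and fails if another owner already holds
        // the lock, so the caller fails fast instead of waiting.
        Locks.TryGetValue(lockable, out var current)
            ? current == owner            // re-entrant for the same owner
            : Locks.TryAdd(lockable, owner);

    public static void Release(int lockable, string owner)
    {
        // Only the current owner may release its own lock.
        if (Locks.TryGetValue(lockable, out var current) && current == owner)
            Locks.TryRemove(lockable, out _);
    }
}
```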

For this design pattern, the following sequence diagram can be considered:

Figure 13.2: Get author info using Pessimistic Offline Lock

As the diagram in Figure 13.2 shows, User1 reads the data of Author1. At this stage, a record is inserted into the lock table as follows:

Owner: User1

Lockable: 1

Next, User1 is busy changing the data of Author1. At the same time, User2 requests access to the data of Author1, but because User1 has locked this record, User2 cannot read it and encounters an error. Then User1 sends his changes to the database for storage and deletes the record in the lock table, which removes the lock.

According to the preceding description, the lock management module can be considered as follows:

public static class LockManager
{
    private static bool HasLock(int lockable, string owner)
    {
        //check if an owner already owns a lock.
        return true;
    }
}

The HasLock method checks, before a record is registered for the owner in the table, whether the owner has already locked the desired record. If the record is already locked by that owner, inserting a new record into the lock table is unnecessary. The key of the record in this example is the value of its primary key, and it is assumed that all primary keys are of INT type:

public static void AcquireLock(int lockable, string owner)
{
    if (!HasLock(lockable, owner))
    {
        try
        {
            //Insert into lock table/object
        }
        catch (SqlException ex)
        {
            throw new DBConcurrencyException($"unable to lock {lockable}");
        }
    }
}

Using this method, a lock is defined on a record and given to the owner. In this method, it is assumed that we put the lock on a column in the tables with INT data type, and their value is unique throughout the database. In actual implementations, this part will probably need to be changed. Also, the meaning of owner in these methods can be HTTP SessionID or anything else according to your needs. The important thing about lockable is that its value is unique within the table or object of the lock. Therefore, if two different owners want to lock a lockable, the database will save one of them, and for the second case, it will return a unique value violation error:

public static void ReleaseLock(int lockable, string owner)
{
    try
    {
        //delete from lock table/object
    }
    catch (SqlException ex)
    {
        throw new Exception($"unable to release lock {lockable}");
    }
}

public static void ReleaseAllLocks(string owner)
{
    try
    {
        //delete all locks for given owner from lock table/object
    }
    catch (SqlException ex)
    {
        throw new Exception($"unable to release locks owned by {owner}");
    }
}

The use of the two methods ReleaseLock and ReleaseAllLocks is to release locks. If the locks are stored in a database table, these methods will be simple CRUD operations on that table. Now, with the lock management module in place, it can be used as follows:

public class AuthorDomain
{
    public bool Modify(Author author)
    {
        LockManager.ReleaseAllLocks("Session 1");
        LockManager.AcquireLock(author.AuthorId, "Session 1");
        // Implementation of transaction requirements
        LockManager.ReleaseLock(author.AuthorId, "Session 1");
        return true;
    }
}

In the preceding method, before doing anything, we first release all locks held by Session 1; we then acquire a lock for the author we want to edit, and at the end of the work we release it. Real implementation requirements will be more complex than this simple example, which is given only to clarify how this design pattern works.
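One refinement worth showing: in real code the release must happen even if the transaction body throws, or the lock is left dangling for other sessions. A hedged sketch of this variant, with the example's LockManager stubbed out so the snippet stands alone:

```csharp
using System;

// Stub of the example's lock manager so this sketch is self-contained.
public static class LockManager
{
    public static void AcquireLock(int lockable, string owner)
        => Console.WriteLine($"lock {lockable} acquired by {owner}");
    public static void ReleaseLock(int lockable, string owner)
        => Console.WriteLine($"lock {lockable} released by {owner}");
}

public class AuthorDomain
{
    public bool Modify(int authorId, string owner)
    {
        LockManager.AcquireLock(authorId, owner);
        try
        {
            // Implementation of transaction requirements (Commit/Rollback).
            return true;
        }
        finally
        {
            // Runs on both success and failure, so the lock
            // can never be left behind by a failed transaction.
            LockManager.ReleaseLock(authorId, owner);
        }
    }
}
```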

Notes:

Choosing the right strategy for locking is a decision that should be made with the help of domain experts. Because this problem is not just a technical problem, it can affect the entire business.
Choosing an inappropriate locking strategy can make the system become a single-user system.
Identifying and managing inactive sessions is an important point that should be addressed. For example, a client may shut down its machine after receiving the lock and in the middle of the operation, leaving an open transaction with locks held on a series of resources. In this case, you can use different mechanisms, such as the web server's timeout mechanism, or an active lifetime can be set for the records in the lock table so that a lock becomes invalid and is released after that time.
Locking everything in the system will cause many problems. Therefore, it is better to use this design pattern only when needed, as a complement to the optimistic offline lock design pattern.
The preceding example assumes that the lock type is fixed by the defined transactions; otherwise, the lock type can also be stored as a column in the lock table.
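As an illustration only (the book does not give the lock table's schema, so all names here are assumptions), a lock table along these lines lets the database itself enforce one owner per lockable through the primary key, exactly as the unique-violation behavior described earlier requires:

```sql
CREATE TABLE locks
(
    LockableId INT PRIMARY KEY,      -- unique: a second owner's INSERT fails
    OwnerId    VARCHAR(50) NOT NULL, -- e.g., an HTTP SessionID
    LockType   VARCHAR(10) NOT NULL, -- 'READ' / 'WRITE', if not fixed per transaction
    AcquiredAt DATETIME    NOT NULL  -- supports timing out stale locks
)
```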
Consequences: Advantages

By using this design pattern, it is possible to prevent the occurrence of inconsistent readings.
Consequences: Disadvantages

Management of locks is a complex operation.
As the number of users or requests increases, the efficiency decreases.
There is a possibility of a deadlock using this design pattern. Therefore, one of the tasks of the lock management module is to return an error instead of waiting if it is not possible to grant a lock to prevent deadlocks as much as possible.
If this design pattern is used and there are long transactions in the system, the system's efficiency will suffer.
Applicability:

When the probability of collision between two different business transactions is high, using this design pattern can be useful.
Related patterns:

Some of the following design patterns are not directly related to the pessimistic offline lock design pattern, but reviewing them is useful when implementing it:

Optimistic offline lock
Singleton
Coarse-grained lock
Name:

Coarse-grained lock

Classification:

Offline concurrency design patterns

Also known as:

---

Intent:

Using this design pattern, it is possible to lock a set of related objects with a single lock.

Motivation, Structure, Implementation, and Sample code:

Programs usually need to change several different objects within a single transaction. With the approach of the pessimistic and optimistic offline lock design patterns, a lock must be defined on every object or record (resource) in order to manage locking. This process can itself be the source of various problems: for example, all resources must be processed in order and locks created on them, which becomes more complicated when faced with a complex graph of objects. On the other hand, with the approach of the pessimistic offline lock design pattern, every node in the graph would need its own record in the lock table, which would leave us with a very large table.

Another approach is to define one lock on a whole set of related resources and manage concurrency that way. This is what the coarse-grained lock design pattern aims to provide. The most concrete example of implementing this method is the aggregate. By using aggregates, a single change point (the root) can be defined for a set of related objects, and all related objects can be changed only through that point. This property of aggregates enables the implementation of the coarse-grained lock design pattern: since the members of an aggregate can be accessed from only one point, that same point can be used to manage the lock (root lock). In this case, one lock can be considered per aggregate; in other words, by applying the lock on the aggregate, the lock is applied to all the members of its subset.

When using a root lock, a mechanism is always needed to move from a member of the group to the root in order to apply the lock on the root. There are different methods for this. The simplest is to navigate from each object up to the root; in more complex graphs this may cause performance problems, which can be managed using the lazy load design pattern. Figure 13.3 shows a view of the root lock method:

Figure 13.3: Root Lock method

Another way to implement the coarse-grained lock design pattern is the shared version mechanism. In this method, every object in a group shares a specific version, and whenever this shared version increases, the lock is applied to the entire group. This method is very similar to the one presented in the optimistic offline lock design pattern. To apply the same idea to the pessimistic offline lock design pattern, each member of a group must share a kind of token. Since the pessimistic method is often used to complement the optimistic method, using the shared version as that token can be useful. Figure 13.4 shows the shared version method:

Figure 13.4: Shared Version method

According to the preceding description, the shared version method can be implemented in the form of a Shared optimistic offline lock as follows:

Suppose we have the version table as follows:

CREATE TABLE version
(
    Id INT PRIMARY KEY,
    Value INT NOT NULL,
    ModifiedBy VARCHAR(50) NOT NULL,
    Modified DATETIME
)

As seen in the preceding table, the Value column stores the shared version's value. Also, to work with the version table, the Version class can be considered as follows:

public class Version
{
    public int Id { get; set; }
    public int Value { get; set; }
    public string ModifiedBy { get; set; }
    public DateTime Modified { get; set; }

    public Version(int id, int value, string modifiedBy, DateTime modified)
    {
        Id = id;
        Value = value;
        ModifiedBy = modifiedBy;
        Modified = modified;
    }

    public static async Task<Version> FindAsync(int id)
    {
        //Try to get version from cache using Identity Map;
        //IdentityMap.GetVersion(id);
        Version version = null;
        if (version == null)
        {
            version = await LoadAsync(id);
        }
        return version;
    }

    private static async Task<Version> LoadAsync(int id)
    {
        Version version = null;
        var result = await new SqlCommand($"" +
            $"SELECT * " +
            $"FROM version " +
            $"WHERE Id = {id}", DB.Connection)
            .ExecuteReaderAsync();
        if (result.Read())
        {
            version = new(
                (int)result["Id"],
                (int)result["Value"],
                (string)result["ModifiedBy"],
                (DateTime)result["Modified"]);
            //put version in cache IdentityMap.Put(version);
        }
        else
        {
            throw new DBConcurrencyException($"version {id} not found!");
        }
        return version;
    }
}

As is clear in the FindAsync method, it first tries to load the requested version from the cache. If the version is unavailable in the cache, the version's information is retrieved from the database using the LoadAsync method; after fetching it, the version is placed in the cache and returned. If no record is found in the database for the provided id, another transaction has changed the version. This is a sign of a possible collision, and for this reason a DBConcurrencyException is thrown.

Naturally, when a new object is created, its corresponding record in the version table should also be created; you can use the INSERT command for this. The point here is that after the version record is created in the table, it must also be placed in the cache:

public async void Insert()
{
    await new SqlCommand($"" +
        $"INSERT INTO " +
        $"version " +
        $"VALUES({Id},{Value},'{ModifiedBy}','{Modified}')",
        DB.Connection).ExecuteNonQueryAsync();
    //put version in cache IdentityMap.Put(version);
}

The Version class also needs a mechanism for increasing its value, which can be done with the UPDATE command. The important thing about this method is that if no record was updated during the update operation, this can be a sign of a collision and should be reported to the user. Before changing the version value, it should be ensured that the previous version is not in use, that is, that the desired records have not already been locked:

public async void Increment()
{
    if (!Locked())
    {
        var effectedRowCount = await new SqlCommand($"" +
            $"UPDATE version " +
            $"SET " +
            $"Value = {Value + 1}," +
            $"ModifiedBy = '{ModifiedBy}'," +
            $"Modified = '{Modified}' " +
            $"WHERE Id = {Id}",
            DB.Connection)
            .ExecuteNonQueryAsync();
        if (effectedRowCount == 0)
        {
            throw new DBConcurrencyException($"version {Id} not found!");
        }
        Value++;
    }
}

Finally, when the aggregate is deleted, the version corresponding to it must also be deleted, for which the DELETE operation can be used. Again, if the database reports that no records were deleted, the possibility of a collision should be reported to the user. The following code shows the delete operation:

public async void Delete()
{
    var effectedRowCount = await new SqlCommand($"" +
        $"DELETE FROM version " +
        $"WHERE Id = {Id}", DB.Connection)
        .ExecuteNonQueryAsync();
    if (effectedRowCount == 0)
    {
        throw new DBConcurrencyException($"version {Id} not found!");
    }
}

Now that the necessary mechanism for version management has been prepared, you can use it as follows:

public abstract class BaseEntity
{
    public Version Version { get; set; }
    protected BaseEntity(Version version) => this.Version = version;
}

BaseEntity class is considered a layer supertype. It is used to set the value of the Version property:

public interface IAggregate { }

public class Author : BaseEntity, IAggregate
{
    public string Name { get; set; }
    public List<Address> Addresses { get; set; } = new List<Address>();

    public Author(Version version, string name) : base(version)
    {
        Name = name;
    }

    public Author AddAuthor(string name) => new Author(Version.Create(), name);

    public Address AddAddress(string street)
    {
        Address address = new Address(Version, street);
        Addresses.Add(address);
        return address;
    }
}

Author class as Aggregate inherits from BaseEntity class. In the AddAuthor method, as soon as an Author object is created, the corresponding version object is also created. The code related to creating the version object in the Version class is as follows:

public static Version Create()
{
    Version version = new(NextId(),
        0, //Initial version number
        GetUser().Name,
        DateTime.Now); //modification datetime
    return version;
}

Next, when we want to add an address for the Author, we use the existing Version property. Also, whenever a request to update or delete an object is received, the Increment method in the Version class must be called:

public abstract class AbstractMapper
{
    public void Insert(BaseEntity entity) => entity.Version.Increment();
    public void Update(BaseEntity entity) => entity.Version.Increment();
    public void Delete(BaseEntity entity) => entity.Version.Increment();
}

public class AuthorMapper : AbstractMapper
{
    public new void Delete(BaseEntity entity)
    {
        Author author = (Author)entity;
        //delete addresses
        //delete author
        base.Delete(entity);
        author.Version.Delete();
    }
}

As seen in the preceding code, to delete the author, first the addresses belonging to him are deleted, and then the author himself. Next, the version is incremented; if the version cannot be updated, a collision may have occurred, and an error is raised. The deletion operation is then completed by deleting the record related to the version.

You can also implement the shared version method as a shared pessimistic offline lock. The implementation will be the same as the optimistic method; the only difference is that we need a mechanism to ensure that the data that has been loaded is the latest version. A simple way to ensure this is to execute the Increment method within the transaction, before the commit. If Increment executes successfully, we have the latest version of the data; otherwise, a DBConcurrencyException tells us that the data is not current, and the transaction is rolled back.
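The commit ordering just described can be sketched as follows. The Version and transaction objects are stubs invented for illustration (SharedVersion stands in for the example's Version class; Stale simulates another transaction winning); only the control flow matters:

```csharp
using System;
using System.Data;

// Sketch of "Increment inside the transaction, before Commit".
public class SharedVersion
{
    public bool Stale { get; set; }   // simulates another transaction winning
    public int Value { get; private set; }

    public void Increment()
    {
        // The real code's UPDATE affects zero rows when the version is gone.
        if (Stale) throw new DBConcurrencyException("version not found!");
        Value++;
    }
}

public static class PessimisticCommit
{
    // Returns true when committed, false when rolled back.
    public static bool TryCommit(SharedVersion version, Action saveChanges)
    {
        try
        {
            version.Increment(); // succeeds only if we hold the latest version
            saveChanges();       // now safe to write
            return true;         // commit
        }
        catch (DBConcurrencyException)
        {
            return false;        // rollback: our copy of the data was stale
        }
    }
}
```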

The mechanism for implementing the root optimistic offline lock method is slightly different, because in this method there is no shared version. To implement it, you can use the unit of work design pattern: before saving the changes to the database, traverse the objects and increment the parent's version with the Increment method each time, as follows:

public class DomainObject : BaseEntity
{
    public DomainObject(Version version) : base(version) { }
    public int Id { get; set; }
    public DomainObject Parent { get; set; }
}

Suppose there is a model as above.

public class UnitOfWork
{
    ...
    public void Commit()
    {
        foreach (var item in modifiedObjects)
        {
            if (item.Parent != null)
                item.Parent.Version.Increment();
        }
        foreach (var item in modifiedObjects)
        {
            //save changes to database
        }
    }
}

Therefore, when saving the changes in the database, the Increment method is first called and then saved in the database.

Notes:

Both shared version and Root Lock methods have their advantages and disadvantages. For example, if the shared version is used, then to retrieve the data, it will always be necessary to join with the version table, which can have a negative effect on the performance. On the other hand, if root lock is used along with the optimistic method, the important challenge will be to ensure that the data is up to date.
The identity map design pattern will be crucial in implementing the shared optimistic offline lock method because all group members must always refer to a common version.
To implement this design pattern, you can use the layer supertype design pattern for simplicity of design and implementation.
Consequences: Advantages

Applying and releasing the lock in this design pattern will be simple and low-cost.
Consequences: Disadvantages

If this design pattern is not designed and used in line with business requirements, it will lock objects that should not be locked.
Applicability

This design pattern can be used when it is necessary to put a lock on an object along with the related objects.
Related patterns:

Some of the following design patterns are not directly related to the coarse-grained lock design pattern, but reviewing them is useful when implementing it:

Pessimistic offline lock
Optimistic offline lock
Lazy load
Layer supertype
Unit of work
Identity map
Implicit lock
Name:

Implicit lock

Classification:

Offline concurrency design patterns

Also known as:

---

Intent:

Using this design pattern, the framework or Layer Supertype is responsible for managing locks.

Motivation, Structure, Implementation, and Sample code:

One of the major problems with offline concurrency management methods is that they are difficult to test. Therefore, a capability should be created once so that programmers can reuse it, instead of getting involved in implementing concurrency management by hand in everyday code. The reason is that if the concurrency management process is implemented incorrectly, it can cause serious damage to data quality, correctness, and program efficiency.

The implicit lock design pattern uses the layer supertype design pattern, or any other suitable pattern, to implement the concurrency management processes in parent classes or framework facilities and make them available to programmers for use.

For example, consider the following code:

public interface IMapper
{
    DomainObject Find(int id);
    void Insert(DomainObject obj);
    void Update(DomainObject obj);
    void Delete(DomainObject obj);
}

public class LockingMapper : IMapper
{
    private readonly IMapper _mapper;

    public LockingMapper(IMapper mapper) => _mapper = mapper;

    public DomainObject Find(int id)
    {
        //Acquire lock
        return _mapper.Find(id);
    }

    public void Delete(DomainObject obj) => _mapper.Delete(obj);
    public void Insert(DomainObject obj) => _mapper.Insert(obj);
    public void Update(DomainObject obj) => _mapper.Update(obj);
}

As seen in the LockingMapper class, when a record is to be fetched, a lock is acquired on it first, and only then is it fetched. The important thing about this design pattern is that business transactions know nothing about the mechanism of acquiring and releasing locks for applying data changes; all of that happens behind the scenes.

Figure 13.5: Lock management process using Implicit Lock design pattern

In Figure 13.5, the transaction related to the editing of customer information delivers the request to retrieve customer information to LockingMapper. Behind the scenes, this mapper communicates with the lock management module and receives the lock. After receiving the lock, it retrieves the data and delivers it to the relevant transaction.
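Putting the pieces together: the business transaction only ever sees IMapper, and the locking behavior is layered on by composition, in the spirit of the decorator note below. This is a self-contained sketch; the stub AuthorMapper and the AcquiredLocks list (standing in for the real lock manager) are invented for illustration:

```csharp
using System;
using System.Collections.Generic;

public class DomainObject { public int Id { get; set; } }

public interface IMapper
{
    DomainObject Find(int id);
    void Update(DomainObject obj);
}

// Plain data-mapper stub; knows nothing about locks.
public class AuthorMapper : IMapper
{
    public DomainObject Find(int id) => new DomainObject { Id = id };
    public void Update(DomainObject obj) { /* persist changes */ }
}

// Decorator that acquires a lock before delegating the fetch.
public class LockingMapper : IMapper
{
    private readonly IMapper _inner;
    public readonly List<int> AcquiredLocks = new();

    public LockingMapper(IMapper inner) => _inner = inner;

    public DomainObject Find(int id)
    {
        AcquiredLocks.Add(id);   // stand-in for LockManager.AcquireLock
        return _inner.Find(id);
    }

    public void Update(DomainObject obj) => _inner.Update(obj);
}
```

A business transaction is simply handed `new LockingMapper(new AuthorMapper())` as its IMapper; its code stays identical whether or not locking is layered in, which is the point of the pattern.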

Notes:

Using this design pattern, programmers should still consider the consequences of using concurrency management methods and locks.
You can use the data mapper design pattern to implement this design pattern.
To design mappers as best as possible in the preceding design pattern, you can use the decorator design pattern.
Consequences: Advantages

It increases the code's quality and prevents errors related to the lack of proper management of concurrency management processes.
Consequences: Disadvantages

By using this design pattern, there is a possibility that programmers will cause the program to encounter various errors by not paying attention to the consequences of concurrency management methods and locks.
Applicability:

This design pattern should often be used to implement concurrency management mechanisms.
Related patterns:

Some of the following design patterns are not directly related to the implicit lock design pattern, but reviewing them is useful when implementing it:

Layer supertype
Data mapper
Decorator
Conclusion
In this chapter, you got acquainted with different design patterns, including pessimistic and optimistic offline locks, coarse-grained locks, and implicit lock design patterns. You also learned how to manage and solve problems caused by concurrency with the help of these design patterns. In this chapter, you learned that you could sometimes lock readers to solve concurrency problems and sometimes manage concurrency problems by simply locking writers.

In the next chapter, you will learn about session state design patterns.




Chapter 12
Distribution Design Patterns
Introduction
To organize the distribution, the design patterns of this category can be divided into the following two main sections:

Remote Facade: By providing a coarse-grained view of fine-grained objects, effectiveness, and efficiency are increased throughout the network.
Data Transfer Object: It can move data between processes and reduce the number of calls to different methods.
Structure
In this chapter, we will cover the following topics:

Distribution design patterns
Remote Facade
Data Transfer Object
Objectives
In this chapter, you will learn the types of design patterns related to distributed software, such as Remote Facade and Data Transfer Object, and how to communicate in distributed software and move data between different parts.

Distribution design patterns
Distributed software is one of the hot topics today, and the need for it keeps growing. The important thing is to know the boundaries and limits of distribution. For example, consider the following separations:

Separation of the client from the server
Separation of the server part of the program from the database
Separating the web server from the application server
In addition to the preceding separations, others can also be considered. In all of them, it is necessary to pay attention to the effect the separation will have on efficiency. When designing software, distribution boundaries should be reduced as much as possible; where they cannot be eliminated, they must be properly designed and managed. For example, it may be necessary to connect the front-end to the back-end code, for which you can use the remote facade design pattern. And when communication between the front end and back end is needed, a way to move the data will be required; for this purpose, a data transfer object can be used.

Remote facade
Name:

Remote facade

Classification:

Distribution design patterns

Also known as:

---

Intent:

By providing a coarse-grained view of fine-grained objects, this design pattern increases effectiveness and efficiency across the network.

Motivation, Structure, Implementation, and Sample code:

Suppose a requirement is raised to provide a web service for collecting and displaying the information of authors and their books; only the author's name needs to be displayed next to the list of his books. There are different ways to collect and display this information. One method is to provide the following web services to the client:

Web service for registering and displaying the author's identity information.
Web service for recording and displaying information about the author's books.
The problem with this structure is that at least two HTTP requests are needed to record this information. Two HTTP requests mean moving data across the network twice, checking authentication and access-control information twice, and so on. As the number of simultaneous requests or round trips across the network grows, this volume of work can significantly degrade efficiency and effectiveness.

Another way is to provide a coarse-grained view to the client: the client delivers all the required data to the server in a single HTTP request, and the rest of the processing takes place on the server. With these explanations, the following code can be considered to implement the preceding requirement.

We have the models related to the author and the book as follows:

public class Author
{
    public int AuthorId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public bool IsActive { get; set; }
    public ICollection<Book> Books { get; set; }
}

public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
    public string Language { get; set; }
    public ICollection<Author> Authors { get; set; }
}

As is clear in the preceding models, each author can have several books, and each book can have several authors. According to the stated requirement, using these models directly will not work well for displaying authors and their books, because many of the properties in these models are not needed for display. We therefore need models that match the client's needs, so we define a series of DTOs using the data transfer object design pattern:

public class AuthorDTO
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public ICollection<BookDTO> Books { get; set; }
}

public class BookDTO
{
    public string Title { get; set; }
    public string Language { get; set; }
}

In the preceding code, the AuthorId and BookId properties are omitted from the DTOs for simplicity. In real implementations, these two properties will probably be needed. Now that the DTOs required by the client are ready, a mechanism for converting the model to DTO and vice versa is needed. For this, you can use a mapper or any other method. Therefore, we have the following conversion code:

public class AuthorAssembler
{
    public AuthorDTO ToDTO(Author author)
    {
        AuthorDTO dto = new()
        {
            FirstName = author.FirstName,
            LastName = author.LastName,
            Books = new List<BookDTO>()
        };
        ConvertBooks(dto, author);
        return dto;
    }

    public void ConvertBooks(AuthorDTO dto, Author model)
    {
        foreach (var book in model.Books)
        {
            dto.Books.Add(new BookDTO
            {
                Title = book.Title,
                Language = book.Language
            });
        }
    }

    public Author ToModel(AuthorDTO dto)
    {
        Author model = new()
        {
            FirstName = dto.FirstName,
            LastName = dto.LastName,
            Books = new List<Book>()
        };
        ConvertBooks(model, dto);
        return model;
    }

    public void ConvertBooks(Author model, AuthorDTO dto)
    {
        foreach (var book in dto.Books)
        {
            model.Books.Add(new Book
            {
                Title = book.Title,
                Language = book.Language
            });
        }
    }
}

As you can see, the preceding AuthorAssembler class has two methods for converting the model to DTO and the DTO to the model. To retrieve the list of authors and to insert a new author, the following methods are also defined in the AuthorAssembler class:

public void CreateAuthor(AuthorDTO dto)
{
    Author author = ToModel(dto);
    author.AuthorId = new Random().Next(1, 100);
    author.Books = new List<Book>(); // CreateBooks re-adds the books with IDs assigned
    CreateBooks(dto.Books, author);
}

private void CreateBooks(ICollection<BookDTO> dtos, Author author)
{
    if (dtos != null)
    {
        if (dtos.Any(x => string.IsNullOrWhiteSpace(x.Title)))
            throw new Exception("Book title cannot be null or empty");
        foreach (var item in dtos)
        {
            var book = new Book()
            {
                Title = item.Title,
                Language = item.Language
            };
            book.BookId = new Random().Next(1, 100);
            author.Books.Add(book);
        }
    }
}

public List<AuthorDTO> GetAuthors()
    => DatabaseGateway.GetAuthors().Select(x => ToDTO(x)).ToList();

Note that the implementations in this class are deliberately kept simple. Now, to receive user requests, the coarse-grained view can be defined as follows:

public interface IAuthorService
{
    void AddAuthor(AuthorDTO dto);
    ICollection<AuthorDTO> GetAuthors();
}

public class AuthorService : IAuthorService
{
    public void AddAuthor(AuthorDTO dto)
        => new AuthorAssembler().CreateAuthor(dto);

    public ICollection<AuthorDTO> GetAuthors()
        => new AuthorAssembler().GetAuthors();
}

As the preceding code shows, this view does nothing except translate coarse-grained methods into fine-grained ones. With AuthorService in place, the repeated network round trips and their negative effects disappear, and the user's needs can be met with minimal back and forth.

Notes:

This design pattern is not needed when all interactions take place inside a single process. But it can be very useful when interactions are divided between several processes (within one machine or across the web).
The generated coarse-grained view must not contain any domain-related logic. The entire application should work correctly, regardless of the classes associated with these views.
If there are models with the same structure on both sides of the communication, DTO is unnecessary, but in reality, this is almost impossible. Usually, this design pattern is designed and used alongside the data transfer object design pattern.
There may be different methods for communicating with different objects in the design of a coarse-grained view. Having one or more coarse-grained views is a decision that can be made at the time of implementation.
Designed views can be stateless or stateful. If these views need to be stateful, you can use the design patterns in the session state category.
Designed views should usually be good places to control access or manage transactions.
An essential characteristic of this design pattern is its use across several processes, or so-called remote use. If we take the remote aspect away from this design pattern, it becomes very similar to the service layer design pattern.
Consequences: Advantages

Reducing the number of requests across the network increases productivity and efficiency.
By implementing this design pattern asynchronously, the efficiency can be increased even more.
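The asynchronous variant mentioned above can be sketched as follows. This is a minimal sketch, not the book's implementation: AuthorServiceAsync and its in-memory list are hypothetical stand-ins for a real backing service, and AuthorDTO is repeated here in trimmed form so the snippet is self-contained.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class AuthorDTO
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Async coarse-grained view: Task-returning methods let the caller await
// the remote call without blocking a thread while the request is in flight.
public interface IAuthorServiceAsync
{
    Task AddAuthorAsync(AuthorDTO dto);
    Task<ICollection<AuthorDTO>> GetAuthorsAsync();
}

public class AuthorServiceAsync : IAuthorServiceAsync
{
    private readonly List<AuthorDTO> store = new(); // stands in for the real data store

    public Task AddAuthorAsync(AuthorDTO dto)
    {
        store.Add(dto);
        return Task.CompletedTask;
    }

    public Task<ICollection<AuthorDTO>> GetAuthorsAsync()
        => Task.FromResult<ICollection<AuthorDTO>>(store);
}
```

In a real remote facade, the awaited work would be the network call itself, so the caller's thread stays free while the request crosses the process boundary.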
Consequences: Disadvantages

It will increase complexity if used in smaller programs or programs that run entirely inside a single process.
Applicability:

This pattern can be very useful for improving effectiveness and efficiency in remote communication, that is, communication outside the current process; for example, connecting the front end to the back end in scenarios where the two are deployed separately.
Related patterns:

Some of the following design patterns are not directly related to the remote facade design pattern, but studying them will be useful for implementing it:

Data transfer object
Session state
Service layer
Data transfer object
Name:

Data transfer object

Classification:

Distribution design patterns

Also known as:

---

Intent:

Using this design pattern, data can be moved between processes, and the number of calls to different methods can be reduced.

Motivation, Structure, Implementation, and Sample code:

Example 1:

Suppose we need to deliver the authors' information and their books to the client to display to the user. There are several methods for this. In one method, the client receives the list of authors once and the list of their books once. As mentioned in the remote facade design pattern, this increases the number of round trips across the network and hurts effectiveness and efficiency.

Another method is to deliver the data to the client as domain models, keeping the round trips to the server to a minimum. The problem with this method is that the structure we deliver to the client may differ from the structure of the domain models.

The data transfer object design pattern helps to deliver the data in the format required by the client while minimizing back and forth to the server.

With the preceding explanations, DTO can be used to implement this scenario. The important point is how the DTO relates to the domain model. Making these two depend on each other would be undesirable because they can have completely different structures. You can get help from a mapper to solve this problem.

Now for the preceding scenario, the following codes can be considered for DTO and domain model:

public class Author
{
    public int AuthorId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public bool IsActive { get; set; }
    public ICollection<Book> Books { get; set; }
}

public class AuthorDTO
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Status { get; set; }
    public ICollection<BookDTO> Books { get; set; }
}

For example, in the preceding code, the Author class has properties such as AuthorId and IsActive. These properties are not included in the DTO definition because the client does not require them. Also, the DTO defines a property called Status, which does not exist in the Author model. The same approach applies to Book:

public class BookDTO
{
    public string Title { get; set; }
    public string Language { get; set; }
}

public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
    public string Language { get; set; }
}

Now, to connect the model and the DTO, you can use a mapper, as follows:

public static class AuthorAssembler
{
    public static Author ToModel(this AuthorDTO dto)
        => new Author
        {
            FirstName = dto.FirstName,
            LastName = dto.LastName,
            IsActive = dto.Status == "Active",
            Books = dto.Books.Select(x => x.ToModel()).ToList()
        };

    public static AuthorDTO ToDTO(this Author model)
        => new AuthorDTO
        {
            FirstName = model.FirstName,
            LastName = model.LastName,
            Status = model.IsActive ? "Active" : "Inactive",
            Books = model.Books.Select(x => x.ToDTO()).ToList()
        };

    private static Book ToModel(this BookDTO dto)
        => new Book
        {
            Title = dto.Title,
            Language = dto.Language
        };

    private static BookDTO ToDTO(this Book model)
        => new BookDTO
        {
            Title = model.Title,
            Language = model.Language
        };
}

The preceding implementation uses C# extension methods. As can be seen, the model is converted to a DTO through the ToDTO method, and the DTO is converted to the model through the ToModel method. With this structure, if we want to return the authors and their books to the client, it is enough to prepare the model, convert the result through the ToDTO method, and return it to the client. The Book-related methods are private in this scenario because we only wanted to deliver the data as authors (each author object has a property for its books).

Example 2:

Suppose that this time we want to receive and save the information of authors and their books from the client. To do this, separate DTOs can be defined, or the existing DTOs can be reused. If there is a big difference between the read and write DTOs, defining two different DTO classes is the right thing to do. Either way, recording the author's data will be very similar to reading it.
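A separate write-side DTO could look like the following sketch. CreateAuthorDTO and CreateAuthorAssembler are hypothetical names, and the Author model is repeated in trimmed form for self-containment; the mapping mirrors the read-side assembler shown earlier.

```csharp
// Hypothetical write-side DTO: it exposes only the fields a client may set.
public class CreateAuthorDTO
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Status { get; set; }
}

// Trimmed copy of the domain model, repeated so the snippet is self-contained.
public class Author
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public bool IsActive { get; set; }
}

public static class CreateAuthorAssembler
{
    // Same mapping idea as the read side, applied to the write payload.
    public static Author ToModel(this CreateAuthorDTO dto) => new Author
    {
        FirstName = dto.FirstName,
        LastName = dto.LastName,
        IsActive = dto.Status == "Active"
    };
}
```

Keeping the write DTO free of server-assigned fields (such as AuthorId) prevents clients from sending values the server should own.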

Notes:

The record feature was introduced in C# 9.0. With this feature, a DTO can also be written as a record, sometimes called a Data Transfer Record (DTR).
A DTO only contains data and does not have any behavior. In this case, the term ViewModel is sometimes used for a DTO. However, a ViewModel does not necessarily contain only data; in the MVVM architecture, it can also contain behaviors, in which case the terms ViewModel and DTO cannot be used interchangeably.
This design pattern can also be used to implement the remote facade design pattern.
DTO usually contains more data than the client needs. In this case, the DTO can be used for various requirements, and in this way, it can prevent back and forth to the server. Whether to use the same DTO for requests and responses or to put different DTOs for them is a decision related to the request and response structure. If the structures are similar, a DTO can also be used.
Mutability is another decision to consider in DTO design. No general rule approves one of these structures and rejects the other.
Record Set can be considered as the DTO defined for the database.
DTOs should always be serializable so they can be transferred across the network, which is why we keep them as simple as possible. Serializing a DTO in .NET requires little work; .NET supports serialization to various formats, such as JSON and XML.
DTOs can be generated automatically by using code generators. To automatically generate DTO, you can also benefit from reflection. The difference between these two methods is the simplicity and cost of implementation.
Mapper can be used to connect the domain model with DTO.
The lazy load design pattern can also use DTO in asynchronous mode.
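The record note above can be illustrated with a short sketch (the DTR type names are invented here): positional records give DTOs an immutable shape and value-based equality in a single line each.

```csharp
// C# 9 positional records as DTOs ("DTRs"): concise, immutable,
// and compared by value rather than by reference.
public record BookDTR(string Title, string Language);
public record AuthorDTR(string FirstName, string LastName, string Status);
```

Two record instances with the same values are equal, which makes them convenient to compare in tests or to use as dictionary keys.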
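As a sketch of the serialization note above, .NET's built-in System.Text.Json can round-trip a plain DTO with no extra configuration (BookDTO is repeated here so the snippet is self-contained):

```csharp
using System;
using System.Text.Json;

// Serialize a simple DTO to JSON and back; a plain property bag
// needs no attributes or converters for this to work.
var dto = new BookDTO { Title = "Design Patterns in .NET", Language = "English" };
string json = JsonSerializer.Serialize(dto);
var roundTrip = JsonSerializer.Deserialize<BookDTO>(json);
Console.WriteLine(json);

public class BookDTO
{
    public string Title { get; set; }
    public string Language { get; set; }
}
```

This is precisely why keeping DTOs simple pays off: the flatter the object, the less serializer configuration is needed.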
Consequences: Advantages

This design pattern creates a loose connection between the domain and presentation layers.
Because of this, the data the client needs is provided with minimal back and forth to the server, and efficiency improves.
Consequences: Disadvantages

In large programs with many entities, the number of DTOs increases proportionally. This increases complexity, the volume of code, and, finally, coding time.
Applicability:

Using this design pattern can be useful to transfer data between the client and the server to minimize the back and forth to the server.
One of the uses of DTO is to transfer data between different software layers.
Related patterns:

Some of the following design patterns are not directly related to the data transfer object design pattern, but studying them will be useful for implementing it:

Remote facade
Lazy load
Conclusion
In this chapter, you became acquainted with distribution design patterns, such as remote facade and data transfer object. You also learned how to design communication in distributed software suitably and efficiently and how to move data across it. In the next chapter, you will learn about offline concurrency design patterns.

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors:

https://discord.bpbonline.com


Chapter 11
Web Presentation Design Patterns
Introduction
To organize the web presentation, the design patterns of this category can be divided into the following seven main sections:

Model view controller: The design of User Interface (UI) interactions is divided into three different roles.
Page controller: An object handles requests for a page or action on a website.
Front controller: A controller is responsible for managing all incoming requests.
Template view: By placing a series of markers in HTML pages, various information can be displayed on the pages at runtime.
Transform view: It can process the existing data and convert it to the user's preferred format.
Two-step view: The data in the model is converted into HTML format in two steps.
Application controller: It can centrally manage the movement between pages and the overall flow of the program.
Structure
In this chapter, we will cover the following topics:

Web presentation design patterns
Model view controller
Page controller
Front controller
Template view
Transform view
Two-step View
Application Controller
Objectives
In this chapter, you will learn how to organize the user interface. You will also learn about design patterns in this field. In this chapter, you will learn how to separate the data, logic, and display parts from each other so that each part can be implemented in a useful and appropriate way.

Web presentation design patterns
One of the most significant changes in enterprise applications in recent years is the spread of web-based user interfaces. These interfaces offer various advantages, including that the client usually does not need to install a special program to use them. Creating web applications usually involves generating server-side code: the request arrives at the web server, which serves a response based on the content of the corresponding web application or website.

The model view controller design pattern can separate the display-related details from the data structure and logic. In this structure, the controller receives the request, and the data is received from the data source by the model. Then, based on the results, the controller returns the appropriate view in the response, and the user sees the desired view.

Sometimes, a central engine is needed in the structure to manage the movement between different pages. In this case, you can use an application controller. If there is different and complex logic for the order between the displays in the program, the application controller can have a significant impact.

You can benefit from three different design patterns for the view section: Transform view, template view, and two-step view. Markers are used in template view and then replaced with dynamic content. You can use transform view to create a view based on the domain model. Both patterns provide the possibility of single-step views. But sometimes, it is necessary to first convert the data into a logical representation, then convert the logical representation into the view desired by the user. In this case, a two-step view will be useful.

The next point to pay attention to in web presentation is the controllers. One method is to have one controller per page of the website. This method, the simplest type, can be implemented through a page controller. But usually, one controller per page ends up complicating the code. Most of what a controller does can be divided into two parts: receiving the HTTP request, and deciding what processing should be done for that request. Therefore, it is better to separate these two parts from each other. In this regard, you can get help from the front controller.

Model view controller
Name:

Model view controller

Classification:

Web presentation design patterns

Also known as:

MVC

Intent:

Using this pattern, the design of user interface interactions is divided into three different roles, which are view, model, and controller:

Model: It is an object that provides some information about the domain. All the data and behaviors related to them are presented through the model. In an object-oriented approach, the model can be considered an object within the domain model.
View: Data can be displayed in the UI through this section. The only task of this section is to display information and manage user interactions. In web applications, this section usually includes HTML codes.
Controller: Through this section, the data is received from the view, placed in the model, and it causes the view to be updated accordingly.
Motivation, Structure, Implementation, and Sample code:

With the preceding explanation, the UI can be considered a combination of view and controller. We plan to design a login form: the user logs in by entering a username and password, and then sees the username and password displayed next to their full name.

According to the preceding requirement, the model class can be considered as follows:

public class LoginModel
{
    public string UserName { get; set; }
    public string Password { get; set; }
    public string FullName { get; set; }
}

As you can see, the Model class has three properties for UserName, Password, and FullName.

The codes related to the controller can also be considered as follows:

public class UserController
{
    private IView view;

    public IView LoginIndex()
    {
        view = new LoginView();
        view.Display();
        return view;
    }

    public IView DashboardIndex(LoginModel model)
    {
        if (model == null)
            return LoginIndex();
        else
        {
            model.FullName = "Vahid Farahmandian";
            view = new DashboardView(model);
            view.Display();
        }
        return view;
    }

    public IView Login(LoginModel model)
    {
        if (model.UserName == "vahid" && model.Password == "123")
            return DashboardIndex(model);
        else
            return LoginIndex();
    }

    public IView Logout() => LoginIndex();
}

In the preceding code, the LoginIndex method is responsible for returning the view related to login; the login form is displayed when this method is called. The DashboardIndex method is similar and returns the view for the user panel. Besides these two view-returning methods, there are two more. The Login method receives the user's input, performs the necessary validation, and decides whether to log the user in. The Logout method receives the user's request and, while signing the user out, returns them to the login page. As the controller code shows, the controller receives the user's input, populates the model object, and establishes the connection between the view and the model.

The codes related to View can also be imagined as follows:

public class LoginView : IView
{
    readonly LoginModel model;
    readonly UserController controller;

    public LoginView()
    {
        model = new LoginModel();
        controller = new UserController();
    }

    public void Display()
    {
        Console.WriteLine("Enter username:");
        model.UserName = Console.ReadLine();
        Console.WriteLine("Enter password:");
        model.Password = Console.ReadLine();
    }

    public IView Login() => controller.Login(model);
}

The preceding code shows the login view. This class has a Display method that renders the form when called. After entering the username and password, the user sends the information to the controller by pressing the login button; the Login method in this class simulates that. The code for the user panel view is similar:

public class DashboardView : IView
{
    readonly LoginModel model;
    readonly UserController controller;

    public DashboardView(LoginModel model)
    {
        this.model = model;
        controller = new UserController();
    }

    public void Display()
        => Console.WriteLine($"Username: {model.UserName}, " +
                             $"Password: {model.Password}, " +
                             $"FullName: {model.FullName}");

    public IView Logout()
    {
        Console.WriteLine("User logged out successfully");
        return controller.Logout();
    }
}

As it is clear in the preceding codes, View receives data from the user and sends it to the Controller or takes it from the Controller and displays it in the output.

Note that in the preceding code, an attempt has been made to show the various components of the MVC design pattern in the form of C#.NET codes with a simple example.

Notes:

Transaction script can be considered a type of model because we have nothing to do with the user interface in writing transaction script.
Using the Microsoft ASP.NET MVC framework, web applications can be created using the MVC design pattern.
This design pattern can be used in non-web applications as well.
This design pattern indicates two types of separation. The first is to separate the View from the model, and the second is to separate the Controller from the View. It is very important to separate the model from the View. One of the reasons for this is that the way of display and the work related to the model are two different concerns and should not be addressed simultaneously. Also, this separation allows a model to be presented to the end user in different ways (Web Page, Web API, and so on) and in different formats (login form, user panel form, and so on). Separating the controller from the view is less important today, with different frameworks for UI. The role of the controller is changing, and most of the controller part is placed between the view and the model. Application controller design patterns can be very helpful in this case to clarify the issue.
Combining this design pattern with the observer design pattern can be very useful. In this case, the view can be updated automatically by applying changes to the model.
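The observer note above can be sketched as follows. The names are hypothetical (ObservableLoginModel is not the LoginModel from the sample code): the model raises an event whenever it changes, and the view subscribes so it refreshes itself automatically.

```csharp
using System;

// Model publishes a change notification; the view observes it.
public class ObservableLoginModel
{
    public event Action Changed;
    private string fullName = "";

    public string FullName
    {
        get => fullName;
        set { fullName = value; Changed?.Invoke(); }
    }
}

public class ObservingDashboardView
{
    public string LastRendered { get; private set; } = "";

    public ObservingDashboardView(ObservableLoginModel model)
        // Re-render whenever the model announces a change.
        => model.Changed += () => LastRendered = $"FullName: {model.FullName}";
}
```

Setting model.FullName now updates the view without the controller explicitly pushing data into it, which is exactly the automatic update the note describes.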
Consequences: Advantages

Due to the separation of the view from the model, it can be tested independently.
Using this design pattern, the complexity can be better managed with the help of three divisions.
It is easier to maintain and develop the code in general.
The principle of Separation of Concern (SoC) is better observed.
Consequences: Disadvantages

The amount of code in the controller can be large and threaten the code's maintenance.
It is more difficult to read and make changes to the code.
Applicability:

Except in small systems, where the model does not contain specific behavior, in the rest of the systems, using this design pattern can be very useful.
Related patterns:

Some of the following design patterns are not directly related to the model view controller design pattern, but studying them will be useful for implementing it:

Transaction script
Application controller
Observer
Page controller
Name:

Page Controller

Classification:

Web presentation design patterns

Also known as:

---

Intent:

Using this design pattern, an object handles the requests of a page or action on a website. In fact, in this design pattern, there is a controller for each page.

Motivation, Structure, Implementation, and Sample code:

Suppose we have a requirement to display the user's first and last name after receiving the user's national code. Using the page controller design pattern, implementing this requirement has two parts: the view part and the business logic part. The logic section combines the controller and the model.

Suppose we want to use ASP.NET Web Forms to implement this requirement. With this technology, the view section corresponds to the Web Forms pages, which are composed of HTML, and the logic section is the code-behind. Now, to implement the preceding requirement, the following markup can be imagined for the view section:

<%@ Page Language="C#" Codebehind="Inquiry.aspx.cs" AutoEventWireup="false" Inherits="Inquiry" %>
National Code: <asp:TextBox id="txtNationalCode" runat="server" />
<asp:Button id="btnInquiry" runat="server" Text="Inquiry" OnClick="btnInquiry_Click" />
<span id="result" runat="server"></span>

As the first line of the preceding markup shows, the CodeBehind attribute specifies which file contains the logic for this view (CodeBehind="Inquiry.aspx.cs"). The markup also specifies which server-side method runs when the btnInquiry button is clicked. The code-behind is as follows:

public class Inquiry : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.TextBox txtNationalCode;
    protected System.Web.UI.WebControls.Button btnInquiry;
    protected System.Web.UI.HtmlControls.HtmlGenericControl result;

    public void btnInquiry_Click(object sender, EventArgs e)
    {
        var info = DatabaseGateway.Find(txtNationalCode.Text);
        if (info == null)
        {
            result.InnerHtml = "Not found";
            return;
        }
        result.InnerHtml = $"First name and Last name: {info.Name} {info.LastName}";
    }
}

According to the preceding code, the user enters the national code in txtNationalCode and clicks the inquiry button. Clicking the button raises its Click event, and in response, the btnInquiry_Click method is called. This method reads the national code value, fetches the information from the database using the DatabaseGateway helper class, and sends the result back to the view.

Notes:

The front controller design pattern and this design pattern describe two different methods for implementing the controller part in the MVC pattern.
The page controller design pattern concept is implemented by default in ASP.NET.
When using this design pattern, duplicate codes will also increase over time and as the number of pages increases. One class can be used as the parent of all classes to reuse codes, and common codes can be transferred to that class. For example, as follows:
public class Inquiry : BaseController { … }
The preceding code has transferred the repeated codes to the BaseController class.
There are usually two categories of code in the code-behind: code that depends on the graphical interface and code that is independent of it. The interface-independent code can usually be moved to the parent class. You can benefit from the template method design pattern to implement this relationship better.
To better manage complexity in larger programs, using the front controller design pattern is preferable.
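The shared-parent idea from the notes above can be sketched as follows; BaseController and its Log helper are hypothetical examples of UI-independent code pulled out of individual page controllers.

```csharp
using System;

// Common, UI-independent behavior lives in the parent controller.
public abstract class BaseController
{
    protected void Log(string message)
        => Console.WriteLine($"[{GetType().Name}] {message}");
}

// Each page controller inherits the shared behavior instead of duplicating it.
public class InquiryController : BaseController
{
    public string Inquire(string nationalCode)
    {
        Log($"Inquiry for {nationalCode}"); // reused from the parent
        return nationalCode == "001" ? "Vahid Farahmandian" : "Not found";
    }
}
```

As more pages are added, each new controller gets logging (and any other shared helper) for free, at the cost of a deeper inheritance hierarchy.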
Consequences: Advantages

Since this design pattern is implemented by default in ASP.NET, you can use the features in the framework with minimal effort.
Since each controller has only one page, the codes in the controller include a smaller range of the program and increase simplicity.
By using parent controllers, the possibility of code reuse increases.
By using helper classes, you can extend what the controller can do. For example, a helper class can handle communication with the database. Along with inheritance, this is another way to increase code reusability.
Consequences: Disadvantages

Testing with this design pattern is difficult, especially with ASP.NET Web Forms, where controllers must inherit from the System.Web.UI.Page class, which is not easy to instantiate. In this case, the only practical way to test is to send an HTTP request and check the response, which is slow and prone to errors.
As the number of pages increases, or as the program becomes more complex and requires more dynamic page configuration, this design pattern stops being suitable for larger and wider programs.
The use of inheritance to increase reusability, along with all its benefits, makes the code more complicated. Over time, the inheritance hierarchy also becomes more complicated, and the maintenance of the code decreases greatly.
Applicability:

It can be useful for implementing small systems or systems with simple business logic.
Related patterns:

Some of the following design patterns are not directly related to the page controller design pattern, but studying them will be useful for implementing it:

Front controller
Model view controller
Template method
Front controller
Name:

Front Controller

Classification:

Web presentation design patterns

Also known as:

---

Intent:

Using this design pattern, a controller is responsible for handling all incoming requests. This controller is sometimes called a handler.

Motivation, Structure, Implementation, and Sample code:

Suppose there is a requirement that, before responding to any request, the request is first logged, and then it is checked whether the user is authenticated in the system. If authenticated, the request is processed; otherwise, an error message is displayed to the user.

Let us also assume that we have two different controllers to manage requests related to books and authors, called BookController and AuthorController:

public class BookController
{
    public void Get()
    {
        Console.WriteLine("Design Patterns in .NET");
        Console.WriteLine("C# Programming");
    }
}

public class AuthorController
{
    public void Get()
    {
        Console.WriteLine("Vahid Farahmandian");
        Console.WriteLine("Ali Rahimi");
        Console.WriteLine("Reza Karimi");
    }
}

To implement the preceding requirement, a simple way is to log the request and check authentication at the beginning of each controller method. This approach has several important drawbacks, including repetitive code and difficulty applying changes.

But another solution is that a controller receives all the requests, registers the log, performs the user authentication process, and finally sends the request to the appropriate controller.

For this, we first define a Dispatcher whose duty is to direct each incoming request to the appropriate controller:

public class Dispatcher
{
    BookController bookController;
    AuthorController authorController;

    public Dispatcher()
    {
        bookController = new BookController();
        authorController = new AuthorController();
    }

    public void Dispatch(string request)
    {
        if (request.Contains("/book/"))
            bookController.Get();
        else
            authorController.Get();
    }
}

As it is clear in the preceding code, the Dispatch method receives the request and, according to that, directs the request to one of the BookController or AuthorController controllers.

Now that the Dispatcher has been implemented, the front controller is needed to receive the requests and perform log registration and authentication operations. For this, we define the front controller as follows:

public class MainHandler
{
    private Dispatcher dispatcher;

    public MainHandler() => dispatcher = new Dispatcher();

    private bool IsAuthenticated() => true;

    private void SetLog(string request)
        => Console.WriteLine($"Request received: {request}");

    public void ReceiveRequest(string request)
    {
        SetLog(request);
        if (IsAuthenticated())
            dispatcher.Dispatch(request);
        else
            throw new Exception("Unauthenticated user error");
    }
}

As seen in the preceding code, all requests are first entered into the MainHandler. Within this class and the ReceiveRequest method, we perform tasks related to log registration and authentication verification. If everything is correct, the request will be delivered to the Dispatch method in the Dispatcher class, and this method will deliver the request to the appropriate Controller. The preceding example is a very simple example of this design pattern.

Notes:

Combining this design pattern with the decorator design pattern can be very useful.
If the only reason for choosing this design pattern is to reduce the amount of code, then probably using the page controller design pattern and benefiting from inheritance to manage the duplicate code will be a better solution.
The dynamic approach can also be used in the design of the Dispatcher class. In this method, the target classes can be dynamically identified during execution, and the request can be directed to them.
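The dynamic approach mentioned in the last note can be sketched with reflection. This is only an illustration: the naming convention (resource name plus a "Controller" suffix) and the Get method name are assumptions of the sketch, not part of the pattern itself:

```csharp
using System;
using System.Reflection;

// Hypothetical controllers; the dispatcher discovers them by name at runtime,
// so adding a new controller requires no change to the dispatcher code.
public class BookController { public string Get() => "books"; }
public class AuthorController { public string Get() => "authors"; }

public class DynamicDispatcher
{
    public string Dispatch(string request)
    {
        // "/book/12" -> resource "book" -> type name "BookController"
        string resource = request.Trim('/').Split('/')[0];
        string typeName = char.ToUpper(resource[0]) + resource.Substring(1) + "Controller";

        // Locate the controller type in the current assembly at runtime.
        Type? controllerType = Array.Find(
            Assembly.GetExecutingAssembly().GetTypes(),
            t => t.Name == typeName);
        if (controllerType == null)
            throw new InvalidOperationException($"No controller found for '{resource}'");

        object controller = Activator.CreateInstance(controllerType)!;
        return (string)controllerType.GetMethod("Get")!.Invoke(controller, null)!;
    }
}
```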
Consequences: Advantages

Using this design pattern reduces the volume of repetitive code and makes changes easy to apply.
Consequences: Disadvantages

Compared to the page controller design pattern, it will be more complicated and, therefore, unsuitable for simpler scenarios.
Applicability:

If it is necessary to perform an operational request such as authentication control, access control, and so on, then using this design pattern can be useful.
Related patterns:

Some of the following design patterns are not related to the Front Controller design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Decorator
Page controller
Template view
Name:

Template View

Classification:

Web presentation design patterns

Also known as:

---

Intent:

Using this design pattern, placing a series of markers in HTML pages, various information can be displayed on the pages at runtime.

Motivation, Structure, Implementation, and Sample code:

Suppose there is a requirement, and we want to display the list of authors on the page as a table. We also want to display the author's name with the most books on the table. When the content we want to present is static, there is no problem designing the view section. The complexity starts when the content of the view section itself becomes dynamic, and the content of this section needs to be the result of a series of calculations or processes.

There are different methods for this. One suitable approach is to design the view with a series of markers, replace these markers with real data in the controller section at runtime, and use helper classes for each view. To implement this requirement, we consider the following model:

public class Author
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int BooksCount { get; set; }
}

In the preceding model, the BooksCount property stores the number of books by each author. To get the list of authors and the number of their books, we get help from the following Helper class:

public class AuthorHelper
{
    public List<Author> GetAuthors()
    {
        return new List<Author>()
        {
            new Author
            {
                FirstName = "Vahid",
                LastName = "Farahmandian",
                BooksCount = 2
            },
            new Author
            {
                FirstName = "Ali",
                LastName = "Rahimi",
                BooksCount = 1
            },
            new Author
            {
                FirstName = "Hassan",
                LastName = "Abbasi",
                BooksCount = 3
            }
        };
    }
}

In the GetAuthors method, you can connect to the database and get the list of authors from the database. ASP.NET web form has been used to design the View in this scenario so that the role and effect of the indicators can be easily displayed. With these explanations, View can be considered as follows:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="AuthorsList.aspx.cs" Inherits="WebApplication1.AuthorsList" %>
Best Author:
<asp:Label ID="firstname" runat="server" Text="<%# BestAuthor.FirstName %>"></asp:Label>
<asp:Label ID="lastname" runat="server" Text="<%# BestAuthor.LastName %>"></asp:Label>

As shown in the preceding HTML code, .NET inline expressions have been used as markers to set the firstname and lastname labels. The code corresponding to the preceding View, which sets the BestAuthor property, is defined as follows in the code-behind:

protected Author BestAuthor { get; set; }

protected void Page_Load(object sender, EventArgs e)
{
    var helper = new AuthorHelper();
    Authors = helper.GetAuthors();
    BestAuthor = Authors.OrderByDescending(x => x.BooksCount).FirstOrDefault();
    firstname.DataBind();
    lastname.DataBind();
}

According to the preceding code, when the page is loaded, the author with the most books is identified, and their information is placed in the BestAuthor property. Then, the markers in the View are replaced with appropriate values, and the page is rendered. According to the preceding code, the final HTML sent to the user's browser will be as follows:

Best Author:
<span id="firstname">Hassan</span>
<span id="lastname">Abbasi</span>

As you can see, the markers in the View are replaced with appropriate values. Different ways can be used to display the list of authors. ASP.NET Web Forms provides a component called DataGrid for the tabular display of data, which can be used in the View as follows:

<asp:DataGrid ID="authorsGrid" runat="server" AutoGenerateColumns="false">
    <Columns>
        <asp:BoundColumn HeaderText="First Name" DataField="FirstName" />
        <asp:BoundColumn HeaderText="Last Name" DataField="LastName" />
        <asp:BoundColumn HeaderText="Books Count" DataField="BooksCount" />
    </Columns>
</asp:DataGrid>

The preceding code shows that the DataGrid will have three columns displaying the author's information. To assign values to this DataGrid, you can do the following in the code-behind:

var helper = new AuthorHelper();
Authors = helper.GetAuthors();
authorsGrid.DataSource = Authors;
authorsGrid.DataBind();

With the preceding description, the code for the View section will be as follows:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="AuthorsList.aspx.cs" Inherits="WebApplication1.AuthorsList" %>
Best Author:
<asp:Label ID="firstname" runat="server" Text="<%# BestAuthor.FirstName %>"></asp:Label>
<asp:Label ID="lastname" runat="server" Text="<%# BestAuthor.LastName %>"></asp:Label>
<asp:DataGrid ID="authorsGrid" runat="server" AutoGenerateColumns="false">
    <Columns>
        <asp:BoundColumn HeaderText="First Name" DataField="FirstName" />
        <asp:BoundColumn HeaderText="Last Name" DataField="LastName" />
        <asp:BoundColumn HeaderText="Books Count" DataField="BooksCount" />
    </Columns>
</asp:DataGrid>

Also, the final code in Code Behind will be as follows:

public partial class AuthorsList : System.Web.UI.Page
{
    protected List<Author> Authors { get; set; }
    protected Author BestAuthor { get; set; }

    protected void Page_Load(object sender, EventArgs e)
    {
        var helper = new AuthorHelper();
        Authors = helper.GetAuthors();
        authorsGrid.DataSource = Authors;
        authorsGrid.DataBind();
        BestAuthor = Authors
            .OrderByDescending(x => x.BooksCount)
            .FirstOrDefault();
        firstname.DataBind();
        lastname.DataBind();
    }
}

Notes:

The main idea behind this design pattern is to design HTML pages with a series of markers to provide dynamic content. When these pages are requested, the markers are placed on the server side with appropriate values, and the response is returned to the client.
Codes that contain programming logic and are placed in view are called scriptlets.
You can use Helper classes to implement the desired logic and prevent placing scriptlets on the page. Implementing this logic is sometimes complicated; for example, displaying male authors in yellow and female authors in blue in a table. Returning an HTML tag instead of a value from the server-side code is one way to prevent scriptlets in the View, but this makes the code difficult to maintain.
To display a data list in View, there are different methods. One of these ways is to write a loop and form a line for each record. Another way is to use the templates provided by the technology used. For example, DataGrid can be useful for this purpose.
To implement the View section in MVC, choose between template view, transform view, and two-step view design patterns.
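Stripped of the Web Forms specifics, the essence of the pattern — markers in a static template replaced with runtime values — can be sketched in a few lines of plain C#. The {{…}} marker syntax below is just an illustrative convention invented for the sketch:

```csharp
using System;
using System.Collections.Generic;

public static class TemplateView
{
    // Replaces every {{marker}} in the static template with its runtime value.
    public static string Render(string template, IDictionary<string, string> values)
    {
        foreach (var pair in values)
            template = template.Replace("{{" + pair.Key + "}}", pair.Value);
        return template;
    }
}
```

For example, rendering "<p>Best Author: {{firstname}} {{lastname}}</p>" with firstname = Hassan and lastname = Abbasi yields "<p>Best Author: Hassan Abbasi</p>".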
Consequences: Advantages

If there are not many scriptlets on the page, the pages can be graphically designed in a simple way according to their structure. This makes it easy for graphic designers to design pages.
Consequences: Disadvantages

If many scriptlet codes are placed in View, it will be difficult for non-programmers, such as graphic designers, to understand View.
Testing using this design pattern is difficult.
Applicability:

This design pattern can be used to implement the View section in MVC.
Related patterns:

Some of the following design patterns are not related to the Template View design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Transform view
Two-step view
Model view controller
Transform view
Name:

Transform view

Classification:

Web presentation design patterns

Also known as:

---

Intent:

Using this design pattern, the existing data can be processed and converted into the user's preferred format. The preferred format is usually HTML, but sometimes it may be necessary to convert the data to XML or JSON, or any other format.

Motivation, Structure, Implementation, and Sample code:

Suppose we need to get the list of authors from the database, convert it into HTML format, and display it in the output. Due to the problems of the template view design pattern, we want to use the transform view design pattern for this.

For this, we first need to get the list of authors from the database or any other data source. Therefore, we have the following codes for this:

public class Author
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int BooksCount { get; set; }
}

public class DatabaseGateway
{
    public List<Author> GetAuthors()
    {
        return new List<Author>()
        {
            new Author
            {
                FirstName = "Vahid",
                LastName = "Farahmandian",
                BooksCount = 2
            },
            new Author
            {
                FirstName = "Ali",
                LastName = "Rahimi",
                BooksCount = 1
            },
            new Author
            {
                FirstName = "Hassan",
                LastName = "Abbasi",
                BooksCount = 3
            }
        };
    }
}

According to the preceding codes, we have received the list of authors. Now, before we deliver this list to View, we need to process it and convert it to the HTML format the user desires. There are different ways to do this. One of the ways is to convert this list into XML format and then convert it to desired HTML format with the help of XSLT. Another simple way is to process the list and generate the HTML tags using a loop. For the sake of simplicity, we will consider the second way here. For this purpose, the desired method can be written as follows:

public string Transform(List<Author> authors)
{
    StringBuilder sb = new();
    sb.AppendLine("<html>");
    sb.AppendLine("<body>");
    sb.AppendLine("<table>");
    sb.AppendLine("<tr>");
    foreach (var item in typeof(Author).GetProperties())
    {
        sb.AppendLine($"<th>{item.Name}</th>");
    }
    sb.AppendLine("</tr>");
    foreach (var item in authors)
    {
        sb.AppendLine("<tr>");
        sb.Append($"<td>{item.FirstName}</td>");
        sb.Append($"<td>{item.LastName}</td>");
        sb.Append($"<td>{item.BooksCount}</td>");
        sb.AppendLine("</tr>");
    }
    sb.AppendLine("</table>");
    sb.AppendLine("</body>");
    sb.Append("</html>");
    return sb.ToString();
}

As seen in the preceding method, the data is entered into this method as input, and inside this method, it is converted into the user's desired format. After the output is ready, it can be presented and displayed directly to the user. This method is called Transformer.
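The XML-plus-XSLT route mentioned above starts by turning the author list into XML. Below is a minimal sketch of that first half using XmlSerializer; the resulting XML could then be fed to an XSLT stylesheet (for example via XslCompiledTransform) to produce the HTML. The class shape mirrors the Author model above:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class Author
{
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
    public int BooksCount { get; set; }
}

public static class XmlExporter
{
    // Serializes the author list to an XML string; an XSLT stylesheet
    // could then transform this XML into the desired HTML.
    public static string ToXml(List<Author> authors)
    {
        var serializer = new XmlSerializer(typeof(List<Author>));
        using var writer = new StringWriter();
        serializer.Serialize(writer, authors);
        return writer.ToString();
    }
}
```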

Notes:

Using generics, reflection, or nested loops and recursive methods, you can implement the Transformer method so that it can convert various types of objects to HTML.
If the Transformer is defined generically, its code can be reused.
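As the notes suggest, generics and reflection allow one Transformer to serve any model type. A minimal sketch of this idea (the Author class is repeated here only to make the example self-contained):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using System.Text;

public class Author
{
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
}

public static class HtmlTransformer
{
    // Generic transform view: builds an HTML table for any list of objects,
    // deriving the header row from the property names via reflection.
    public static string Transform<T>(List<T> items)
    {
        PropertyInfo[] props = typeof(T).GetProperties();
        var sb = new StringBuilder("<table><tr>");
        foreach (var p in props)
            sb.Append($"<th>{p.Name}</th>");
        sb.Append("</tr>");
        foreach (var item in items)
        {
            sb.Append("<tr>");
            foreach (var p in props)
                sb.Append($"<td>{p.GetValue(item)}</td>");
            sb.Append("</tr>");
        }
        return sb.Append("</table>").ToString();
    }
}
```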
Consequences: Advantages

Using this design pattern makes it possible to test better than the Template View design pattern.
Consequences: Disadvantages

Producing complex views with this design pattern is difficult and will increase complexity.
Applicability:

This design pattern can be used to implement the View section in MVC.
Related patterns:

Some of the following design patterns are not related to the Transform View design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Template view
Model view controller
Two-step view
Name:

Two Step View

Classification:

Web presentation design patterns

Also known as:

---

Intent:

Using this design pattern, the data in the model are converted into HTML format in two steps. In the first stage, this data structure is converted into a logical structure; in the second stage, this logical structure is converted into HTML.

Motivation, Structure, Implementation, and Sample code:

Suppose we are displaying the list of authors to the user in the form of a table. Besides the list of authors, we also have lists of publishers and books. A requirement has been raised for the table rows to have alternating background colors. To apply this change, we would need to visit and modify every table. But there is another solution: using the two-step view design pattern.

If we can convert existing models into an intermediate logical structure, then we can convert this logical structure into HTML codes using a converter. For this, we first examine the first stage of this design pattern: the production of a logical structure.

There are different methods to generate a logical structure, but we use XML and XSLT methods for easier and better understanding. Suppose we have the authors' data in XML format in the following form:

<authors>
    <author>
        <firstname>Vahid</firstname>
        <lastname>Farahmandian</lastname>
        <booksCount>2</booksCount>
    </author>
    <author>
        <firstname>Ali</firstname>
        <lastname>Rahimi</lastname>
        <booksCount>1</booksCount>
    </author>
    <author>
        <firstname>Hassan</firstname>
        <lastname>Abbasi</lastname>
        <booksCount>3</booksCount>
    </author>
</authors>

If you are unfamiliar with XML, it is recommended to get familiar with it through the link: https://www.w3schools.com/xml/xml_whatis.asp.

However, to clarify the example, the preceding XML structure and the following JSON structure are equivalent. Of course, we will proceed with the XML structure in the following explanations:

{
    "authors": {
        "author": [
            { "firstname": "Vahid", "lastname": "Farahmandian", "booksCount": 2 },
            { "firstname": "Ali", "lastname": "Rahimi", "booksCount": 1 },
            { "firstname": "Hassan", "lastname": "Abbasi", "booksCount": 3 }
        ]
    }
}

To generate the logical structure, we convert the preceding XML into the expected logical format using an XSLT converter, which can be defined as follows:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="authors">
        <table><xsl:apply-templates /></table>
    </xsl:template>
    <xsl:template match="author">
        <row><xsl:apply-templates /></row>
    </xsl:template>
    <xsl:template match="firstname | lastname | booksCount">
        <cell><xsl:value-of select="." /></cell>
    </xsl:template>
</xsl:stylesheet>

As seen in the preceding XSLT structure, the model previously expressed in XML is converted into the expected logical format. For example, the stylesheet says that whenever the authors element is reached, the equivalent table structure should be emitted. To apply this XSLT to the said XML, you can use the following code:

public void FirstStepTransformer()
{
    var myXslTrans = new XslCompiledTransform();
    myXslTrans.Load(@"firststep-style.xslt");
    myXslTrans.Transform(@"input.xml", @"logical.xml");
}

The result of executing this code will be the following output:

<table>
    <row>
        <cell>Vahid</cell>
        <cell>Farahmandian</cell>
        <cell>2</cell>
    </row>
    <row>
        <cell>Ali</cell>
        <cell>Rahimi</cell>
        <cell>1</cell>
    </row>
    <row>
        <cell>Hassan</cell>
        <cell>Abbasi</cell>
        <cell>3</cell>
    </row>
</table>

As seen in the preceding output, the format defined in XSLT is applied to the input XML and the output is generated. This output is the first step in the two-step view design pattern. To produce the output of the second step, which is often the output in HTML format, similar steps but with a different XSLT format must be followed. Therefore, we can define the XSLT required to generate the desired HTML as follows:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="table">
        <table><xsl:apply-templates /></table>
    </xsl:template>
    <xsl:template match="row">
        <tr>
            <xsl:attribute name="bgcolor">
                <xsl:choose>
                    <xsl:when test="position() mod 2 = 1">grey</xsl:when>
                    <xsl:otherwise>white</xsl:otherwise>
                </xsl:choose>
            </xsl:attribute>
            <xsl:apply-templates />
        </tr>
    </xsl:template>
    <xsl:template match="cell">
        <td><xsl:value-of select="." /></td>
    </xsl:template>
</xsl:stylesheet>

As seen in the preceding code, the output of the first step is converted to HTML using this format. Using this XSLT structure, HTML output will be generated as follows:

<table>
    <tr bgcolor="grey"><td>Vahid</td><td>Farahmandian</td><td>2</td></tr>
    <tr bgcolor="white"><td>Ali</td><td>Rahimi</td><td>1</td></tr>
    <tr bgcolor="grey"><td>Hassan</td><td>Abbasi</td><td>3</td></tr>
</table>

As seen in the preceding HTML output, the table rows alternate between two background colors. In two steps, a series of data was converted to HTML and displayed to the user.

Notes:

The first step in this design pattern does not contain any HTML code. In other words, no code related to appearance and display (such as background colors) is placed in the output of the first stage.
In the second-stage converter, you can add page headers, footers, and other sections.
For websites where each page has its own design, it is usually better to use the template view or transform view patterns.
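The two steps can also be sketched without XSLT, in plain C#. The LogicalTable type and the projector delegate below are illustrative names invented for this sketch; the point is that the second step is the single place that decides look and feel (here, the alternating row colors):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Step 1 output: a presentation-free logical screen (no colors, no HTML).
public class LogicalTable
{
    public List<string[]> Rows { get; } = new();
}

public static class TwoStepView
{
    // First step: project any domain list into the logical structure.
    public static LogicalTable ToLogical<T>(List<T> items, Func<T, string[]> projector)
    {
        var table = new LogicalTable();
        foreach (var item in items)
            table.Rows.Add(projector(item));
        return table;
    }

    // Second step: the one place where look-and-feel is decided site-wide.
    public static string ToHtml(LogicalTable table)
    {
        var sb = new StringBuilder("<table>");
        for (int i = 0; i < table.Rows.Count; i++)
        {
            string color = i % 2 == 0 ? "grey" : "white";   // alternating rows
            sb.Append($"<tr bgcolor=\"{color}\">");
            foreach (var cell in table.Rows[i])
                sb.Append($"<td>{cell}</td>");
            sb.Append("</tr>");
        }
        return sb.Append("</table>").ToString();
    }
}
```

Changing the row colors for every table on the site now means editing only ToHtml, which is exactly the advantage claimed for this pattern.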
Consequences: Advantages

It is easy to apply changes, especially graphic changes, on the entire website.
Consequences: Disadvantages

Codes are difficult to read for non-programmers (including graphic designers).
For websites with complex graphic designs for each page, it is usually not appropriate to use this design pattern.
Applicability:

When we want to change the entire design of the pages, using this design pattern will be useful because it will only be necessary to change the XSLTs related to the HTML generation.
You can use this design pattern to implement the View section in MVC.
Related patterns:

Some of the following design patterns are not related to the Two Step View design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Template view
Transform view
Model view controller
Application controller
Name:

Application controller

Classification:

Web presentation design patterns

Also known as:

---

Intent:

By using this design pattern, you can centrally manage the movement between pages and the overall flow of the program.

Motivation, Structure, Implementation, and Sample code:

Suppose a requirement is raised to implement the car rental process. The process has been simplified here to keep the presentation clear. For this requirement, we have the following state diagram:

Figure 11.1: Car rental process

According to the Figure 11.1 diagram, the following rules are valid for a car:

When the vehicle is in On Lease status, and the return command is received, the status of the vehicle is changed to In Inventory. In this case, the rented car has been returned and added to the list of available cars. In this case, a page should be displayed to the user, and the information related to the return should be entered.
When the vehicle is in On Lease or In Inventory status, and a damage order is received, the status of the vehicle will be changed to In Repair. In this case, depending on whether the car is rented or in the parking lot, different pages are displayed to the user to enter the information related to the repair.
Receiving wrong commands in different situations will lead to displaying the error page.
There are different ways to implement this scenario. All the preceding rules could be placed in their appropriate controllers, but this method would gradually increase complexity. A better solution is to receive all incoming requests with the help of the front controller design pattern, process them according to the incoming request, and return the corresponding view. The application controller design pattern has two main tasks:

Identifying what processing should happen.
Identifying which view should be displayed.
According to the preceding description, the following enum can be defined for different situations:

public enum State : byte
{
    OnLease = 1,
    InInventory = 2,
    InRepair = 3
}

According to the preceding explanation, the command design pattern can be used to implement task number 1, which is process detection. Therefore, for this section, we have the following:

public interface IDomainCommand
{
    void run(NameValueCollection @params);
}

public class ReturnDetailCommand : IDomainCommand
{
    public void run(NameValueCollection @params)
        => Console.WriteLine("return detail data recorded.");
}

public class IllegalActionCommand : IDomainCommand
{
    public void run(NameValueCollection @params)
        => Console.WriteLine("Illegal action requested.");
}

public class LeaseDamageCommand : IDomainCommand
{
    public void run(NameValueCollection @params)
        => Console.WriteLine("Lease damage data recorded.");
}

public class InventoryDamageCommand : IDomainCommand
{
    public void run(NameValueCollection @params)
        => Console.WriteLine("Inventory damage data recorded.");
}

According to the preceding code, we have defined four commands for four processes. ReturnDetailCommand is responsible for processing the vehicle return. IllegalActionCommand is responsible for processing wrong requests. LeaseDamageCommand is in charge of processing requests related to the repair of the leased car, and finally, InventoryDamageCommand is in charge of processing requests related to the repair of the car in the parking lot.

Now that the commands are in place, we need to represent each transition of the provided state diagram through a structure. For this, we have considered the following:

public struct ResponseStore
{
    public string Command { get; set; }
    public State State { get; set; }
    public Response Response { get; set; }
}

An object of the ResponseStore struct determines which command has been issued (Command), which state we are in (State), and which response should be given (Response). The response should be able to return both the request-processing class and the View. With these explanations, the Response class can be defined as follows:

public class Response
{
    private Type domainCommand;
    private string view;

    public Response(Type domainCommand, string view)
    {
        this.domainCommand = domainCommand;
        this.view = view;
    }

    public IDomainCommand GetDomainCommand()
        => (IDomainCommand)Activator.CreateInstance(domainCommand);

    public string GetView() => view;
}

Using the Response class, you can return an object of a class implementing the IDomainCommand interface; the GetDomainCommand method is used for this. For the sake of simplicity, exception management has not been considered in implementing this method. You can also return the View related to each command using the GetView method of this class.
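The reflection-based instantiation inside Response can be exercised in isolation. In the self-contained sketch below, the command signature is simplified (Run() returns a string instead of printing to the console) purely so the result can be observed:

```csharp
using System;

public interface IDomainCommand { string Run(); }

public class ReturnDetailCommand : IDomainCommand
{
    public string Run() => "return detail data recorded.";
}

public class Response
{
    private readonly Type domainCommand;
    private readonly string view;

    public Response(Type domainCommand, string view)
    {
        this.domainCommand = domainCommand;
        this.view = view;
    }

    // The command type is stored and only instantiated (via reflection)
    // when a matching request actually arrives.
    public IDomainCommand GetDomainCommand()
        => (IDomainCommand)Activator.CreateInstance(domainCommand)!;

    public string GetView() => view;
}
```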

Now that the command and response-related structures are prepared, the IApplicationController interface and the CarLeasingApplicationController class can be implemented as follows.

public interface IApplicationController
{
    IDomainCommand GetCommand(string command, NameValueCollection @params);
    string GetView(string command, NameValueCollection @params);
}

The preceding interface specifies that each ApplicationController should be able to recognize the command and view associated with the input request. For example, the CarLeasingApplicationController class, which implements this interface, works as follows:

public class CarLeasingApplicationController : IApplicationController
{
    private readonly List<ResponseStore> events = new();

    public CarLeasingApplicationController()
    {
        AddResponse("return", State.OnLease,
            typeof(ReturnDetailCommand), "return");
        AddResponse("return", State.InInventory,
            typeof(IllegalActionCommand), "illegalAction");
        AddResponse("damage", State.OnLease,
            typeof(LeaseDamageCommand), "leaseDamage");
        AddResponse("damage", State.InInventory,
            typeof(InventoryDamageCommand), "inventoryDamage");
    }
}

As you can see, first, we implement the mentioned rules in the class constructor. For example, consider the following rule:

AddResponse("return", State.OnLease, typeof(ReturnDetailCommand), "return");

According to this rule, if the return command is issued and we are in the OnLease state, the ReturnDetailCommand is tasked with processing the request, and the return page must be displayed to the user.

private Response GetResponse(string command, State state)
    => events.FirstOrDefault(x => x.Command == command && x.State == state).Response;

private State GetCarState(NameValueCollection @params)
    => (State)Convert.ToByte(@params["state"]);

public IDomainCommand GetCommand(string command, NameValueCollection @params)
    => GetResponse(command, GetCarState(@params)).GetDomainCommand();

In the preceding code snippet, the GetCommand method receives the issued command along with the request parameters and identifies the appropriate class for processing the request.

public string GetView(string command, NameValueCollection @params)
    => GetResponse(command, GetCarState(@params)).GetView();

In the preceding piece of code, the GetView method receives the issued command along with the request parameters and returns the appropriate View.

public void AddResponse(string command, State state, Type domainCommand, string view)
{
    Response response = new(domainCommand, view);
    // ResponseStore is a struct, so an existing rule must be replaced
    // in the list rather than mutated through a copy.
    int index = events.FindIndex(x => x.Command == command && x.State == state);
    if (index < 0)
        events.Add(new ResponseStore()
        {
            Command = command,
            Response = response,
            State = state
        });
    else
        events[index] = new ResponseStore()
        {
            Command = command,
            Response = response,
            State = state
        };
}

And finally, using the preceding method, you can save the rules in a set. So far, the application controller has been deployed. To use this structure, as mentioned in the beginning, you can benefit from the front controller design pattern. In this case, the front controller can be considered as follows:

public class FrontController
{
    public void ReceiveRequest(Uri requestUrl)
    {
        IApplicationController controller =
            GetApplicationController(requestUrl.AbsoluteUri);
        NameValueCollection requestParams =
            HttpUtility.ParseQueryString(requestUrl.Query);
        IDomainCommand command = controller
            .GetCommand(requestUrl.Fragment.TrimStart('#'), requestParams);
        command.run(requestParams);
        string view = controller.GetView(
            requestUrl.Fragment.TrimStart('#'), requestParams);
        Console.WriteLine($"navigating to view: {view}");
    }

    private IApplicationController GetApplicationController(string requestUrl)
    {
        if (requestUrl.Contains("/leasing/") || requestUrl.Contains("/leasing?"))
            return new CarLeasingApplicationController();
        else
            return null;
    }
}

According to the preceding code, if the request address contains the leasing value, the ApplicationController related to CarLeasing is returned. Then, based on the input request and its parameters, the values related to the command and view are identified. For example, if a request is entered with the following address:

http://abc.com/leasing?model=bmw&state=2&date=20220101#damage

Since the address contains the leasing value, the application controller related to CarLeasing will be selected. Then, according to the defined rules, since the damage command has been issued and we are in state 2, that is, InInventory, InventoryDamageCommand should process the request, and the View named inventoryDamage should be returned.
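The routing decision for this URL can be traced with a condensed, self-contained version of the rule table. The Rule record and the manual query-string parsing are simplifications made for this sketch; the command and view strings mirror those registered in the constructor above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum State : byte { OnLease = 1, InInventory = 2, InRepair = 3 }

// One transition of the state diagram: (command, state) -> view.
public record Rule(string Command, State State, string View);

public class CarLeasingApplicationController
{
    private readonly List<Rule> rules = new()
    {
        new Rule("return", State.OnLease, "return"),
        new Rule("return", State.InInventory, "illegalAction"),
        new Rule("damage", State.OnLease, "leaseDamage"),
        new Rule("damage", State.InInventory, "inventoryDamage"),
    };

    public string GetView(string command, State state)
        => rules.First(r => r.Command == command && r.State == state).View;
}

public static class Router
{
    // Extracts the command (URL fragment) and the state (query parameter),
    // then asks the application controller which view to display.
    public static string Resolve(Uri url)
    {
        string command = url.Fragment.TrimStart('#');
        string stateValue = url.Query.TrimStart('?').Split('&')
            .Select(p => p.Split('='))
            .First(kv => kv[0] == "state")[1];
        var state = (State)Convert.ToByte(stateValue);
        return new CarLeasingApplicationController().GetView(command, state);
    }
}
```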

Notes:

You can use the command design pattern to implement the two tasks proposed for this design pattern. You can also benefit from other methods, such as reflection instead of command.
You can use the front controller design pattern to implement the entry point to this design pattern.
If there is no connection between the application controller and the UI, the testing capability of the application controller will be improved.
The application controller can be considered an intermediate layer between the display and domain layers.
To implement more complex programs, several application controllers can be used. For smaller programs, only one application controller will be enough.
Consequences: Advantages

In complex programs, code repetition is avoided by placing the logic related to the program flow in one place, which improves maintainability.
Consequences: Disadvantages

If the flow of the program is simple, using this design pattern has no special advantage and will increase the complexity.
Applicability:

This design pattern can be useful for implementing tasks such as wizards, in which different flows are formed based on specific rules and states.
Related patterns:

Some of the following design patterns are not related to the Application Controller design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Front controller
Command
Reflection
Conclusion
In this chapter, you got acquainted with the types of design patterns related to web displays and learned how to design and implement the view layer in a suitable way using the design patterns of this category.

In the next chapter, you will get acquainted with distribution design patterns.

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors:

https://discord.bpbonline.com


Chapter 10
Object-Relational Metadata Mapping Design Patterns
Introduction
To organize the object-relational metadata mapping, the design patterns of this category can be divided into the following three main sections:

Metadata mapping: Tries to store information about the object in metadata.
Query object: An object is responsible for creating queries.
Repository: Tries to implement CRUD processes optimally by placing a layer between the data access layer and the rest of the system and providing it to the user.
Structure
In this chapter, we will discuss the following topics:

Object-relational metadata mapping design patterns
Metadata mapping
Query object
Repository
Objectives
In this chapter, you will learn how to store information about objects and make centralized database queries. Next, using the repository design pattern, you will learn how to separate the data source from the rest of the system by placing a layer between the data access layer and the rest of the program. These design patterns are suitable for enterprise applications with extensive and sometimes complex business logic and where it is important to separate business logic concerns from data access and storage concerns.

Object-relational metadata mapping design patterns
When producing software, we need to implement the mapping between tables and classes. Writing this mapping by hand fills the codebase with a significant amount of repetitive code and increases production time. To avoid this, you can use the metadata mapping design pattern, which extracts the relationships from metadata using code generators or reflection techniques, so duplicate code does not have to be written.

When the necessary infrastructure for creating queries is provided using metadata mapping, queries can be created and presented using query objects. In this case, the programmer no longer needs to know SQL and how to make queries. Now, if all the necessary queries to be sent to the database are presented through the query object, then with the help of a repository, the database can be hidden from the rest of the program.

Metadata mapping
Name:

Metadata mapping

Classification:

Object-relational metadata mapping design patterns

Also known as:

---

Intent:

By using this design pattern, information about the object is stored in the form of metadata.

Motivation, Structure, Implementation, and Sample code:

In the examples reviewed so far, a series of repetitive codes have often been written: the code mapping objects to tables and properties to columns has been repeated constantly. The metadata mapping design pattern helps avoid writing duplicate code by extracting and storing the metadata related to the object. For example, consider the following class:

public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}

Suppose this class is mapped to the people table in the database. The properties of this class are also mapped exactly to the columns of the people table. With this assumption, to be able to write a query to get the list of all people, the following T-SQL code should be written:

SELECT Id, FirstName, LastName, Age FROM people

Now if we assume that we also have the following class:

public class Car
{
    public int Id { get; set; }
    public string Name { get; set; }
}

If we assume that this class is also mapped to the cars table in the database, then we will have the following query to get the list of all cars:

SELECT Id, Name FROM cars

After preparing the queries and executing them in the database, when the results are returned, each column's data should be mapped into its corresponding property in the target object to be presented to the user.

What happens in the preceding code is that the same pattern must be repeated constantly. Suppose, instead, that we have the metadata — in other words, the required information about the classes and tables — and know how the tables and classes are mapped. Then we can use this metadata to create and execute various queries while avoiding duplicate code.

To prepare metadata, we first need to get table and class information. The most important information we need is which class is mapped to which table and which column is mapped to each class property. To do this, we define the DataMap class as follows:

public class DataMap

{

public Type DomainClass { get; set; }

public string TableName { get; set; }

public ICollection<ColumnMap> ColumnMaps { get; set; }

public DataMap(Type domainClass, string tableName)

{

DomainClass = domainClass;

TableName = tableName;

ColumnMaps = new List<ColumnMap>();

}

public string? GetKeyColumn()

=> ColumnMaps.FirstOrDefault(x => x.IsKey)?.ColumnName;

public string GetColumns()

{

StringBuilder sb = new();

if (ColumnMaps.Any())

sb.Append(ColumnMaps.First().ColumnName);

foreach (var column in ColumnMaps.Skip(1))

{

sb.Append($",{column.ColumnName}");

}

return sb.ToString();

}

}

In the preceding class, the DomainClass property is used to store the class information, TableName is used to store the table name, and ColumnMaps is used to store the column mapping information. Also, in this class, the GetKeyColumn method returns the name of the column that is the primary key. Please note that if the primary key is composite, the code of this method must be changed. The GetColumns method also returns a string containing the names of the columns, and this string separates column names with commas.

Now, we need to check how to implement the ColumnMap class. This class maps class properties and table columns to each other:

public class ColumnMap

{

public string ColumnName { get; }

public string PropertyName { get; }

public bool IsKey { get; }

public PropertyInfo Property { get; private set; }

public DataMap DataMap { get; }

public ColumnMap(

string columnName, string propertyName,

DataMap dataMap,bool isKey = false)

{

DataMap = dataMap;

ColumnName = columnName;

PropertyName = propertyName;

IsKey = isKey;

Property = DataMap.DomainClass.GetProperty(PropertyName);

}

public void SetValue(object obj, object columnValue)

=> Property.SetValue(obj, columnValue);

public object GetValue(object obj) => Property.GetValue(obj);

}

In this class, the ColumnName property holds the column name, PropertyName holds the property name, IsKey determines whether the column is the primary key, Property stores the PropertyInfo used to read and write the value, and DataMap specifies the related class map. The SetValue method puts the value returned from the database into the property, and the GetValue method reads the property value to send it to the database.
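To see how these two classes fit together, the following is a minimal, self-contained sketch. It is simplified from the chapter's DataMap and ColumnMap (the PropertyInfo field and the back-reference to DataMap are omitted) and builds the metadata for a trimmed-down Person class:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Person
{
    public int PersonId { get; set; }
    public string FirstName { get; set; } = "";
}

// Simplified versions of the chapter's ColumnMap and DataMap classes.
public class ColumnMap
{
    public string ColumnName { get; }
    public string PropertyName { get; }
    public bool IsKey { get; }
    public ColumnMap(string columnName, string propertyName, bool isKey = false)
        => (ColumnName, PropertyName, IsKey) = (columnName, propertyName, isKey);
}

public class DataMap
{
    public Type DomainClass { get; }
    public string TableName { get; }
    public List<ColumnMap> ColumnMaps { get; } = new();
    public DataMap(Type domainClass, string tableName)
        => (DomainClass, TableName) = (domainClass, tableName);

    // Name of the primary-key column, if one was registered.
    public string? GetKeyColumn() => ColumnMaps.FirstOrDefault(x => x.IsKey)?.ColumnName;

    // Comma-separated column list for the SELECT clause.
    public string GetColumns() => string.Join(",", ColumnMaps.Select(x => x.ColumnName));
}

public static class Program
{
    public static void Main()
    {
        var map = new DataMap(typeof(Person), "people");
        map.ColumnMaps.Add(new("personId", nameof(Person.PersonId), isKey: true));
        map.ColumnMaps.Add(new("firstName", nameof(Person.FirstName)));
        Console.WriteLine(map.GetColumns());   // personId,firstName
        Console.WriteLine(map.GetKeyColumn()); // personId
    }
}
```

Note the use of nameof: if the property is renamed with a refactoring tool, the metadata registration follows automatically.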

So far, we have reviewed the code that prepares and obtains the metadata. In the following, we will see how this metadata can be used. For example, suppose we want to implement a search operation: creating a query, sending it to the database, receiving the values, and placing them in the desired object. For this, consider the following code:

public abstract class Mapper<TKey>

{

public DataMap DataMap { get; protected set; }

public object Find(TKey key)

{

string query =

$"SELECT {DataMap.GetColumns()} " +

$"FROM {DataMap.TableName} " +

$"WHERE {DataMap.GetKeyColumn()} = {key}";

var reader = new SqlCommand(query, DB.Connection).ExecuteReader();

reader.Read();

var result = Load(reader);

return result;

}

public object Load(IDataReader reader)

{

var obj = Activator.CreateInstance(DataMap.DomainClass);

LoadProperties(reader, obj);

return obj;

}

private void LoadProperties(IDataReader reader, object obj)

{

foreach (var item in DataMap.ColumnMaps)

{

item.SetValue(obj, reader[item.ColumnName]);

}

}

}

The preceding code defines the Mapper class as an abstract class. This class is responsible for creating queries based on metadata, sending them to the database, receiving the results, and mapping them to the class's properties according to the metadata. The DataMap property in this class specifies which metadata we will work with, and this property is later set by classes that inherit Mapper. The Find method in this class is designed to search for records in the table based on the primary key. In this method, according to the methods available in the DataMap class, the information of the table and columns is read, and a query is made. After creating the query, the query is sent to the database, and in return, the results are sent to the Load method.

The Load method first creates an object of the desired class and then uses the LoadProperties method to set the value of each of the properties of this object based on the columns returned from the database.
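The reflection step at the heart of Load and LoadProperties can be sketched in isolation. The following self-contained example simulates a database row as a dictionary (a stand-in for IDataReader, which needs a real connection) and copies each value into the mapped property via PropertyInfo.SetValue:

```csharp
using System;
using System.Collections.Generic;

public class Person
{
    public int PersonId { get; set; }
    public string FirstName { get; set; } = "";
}

public static class Program
{
    // Simulated Load: property name -> value returned from the database.
    public static object Load(Type domainClass, IReadOnlyDictionary<string, object> row)
    {
        // Equivalent of Activator.CreateInstance in the Load method above.
        var obj = Activator.CreateInstance(domainClass)!;
        foreach (var (propertyName, value) in row)
            // Equivalent of ColumnMap.SetValue: reflection writes the value.
            domainClass.GetProperty(propertyName)!.SetValue(obj, value);
        return obj;
    }

    public static void Main()
    {
        var row = new Dictionary<string, object>
        {
            [nameof(Person.PersonId)] = 7,
            [nameof(Person.FirstName)] = "Vahid"
        };
        var person = (Person)Load(typeof(Person), row);
        Console.WriteLine($"{person.PersonId} {person.FirstName}"); // 7 Vahid
    }
}
```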

According to the preceding codes, the search operation can be performed based on the metadata of the classes. Finally, to perform the search in the Person class, we create the PersonMapper class. This class is responsible for setting Person metadata:

public class PersonMapper : Mapper<int>

{

public PersonMapper() => LoadDataMap();

protected void LoadDataMap()

{

DataMap = new DataMap(typeof(Person), "people");

DataMap.ColumnMaps.Add(

new("personId", nameof(Person.PersonId), DataMap, true));

DataMap.ColumnMaps.Add(

new("firstName", nameof(Person.FirstName), DataMap));

DataMap.ColumnMaps.Add(

new("lastName", nameof(Person.LastName), DataMap));

DataMap.ColumnMaps.Add(

new("age", nameof(Person.Age), DataMap));

}

public Person Get(int personId) => (Person)Find(personId);

}

In the preceding code, the LoadDataMap method is called in the PersonMapper constructor. Since this information does not change at runtime, the method could instead be executed only once, during the initial loading of the program. Finally, if we want to communicate with the cars table, it is enough to implement a CarMapper class along the same lines as PersonMapper.

To get the metadata of the tables, it is enough to serialize the DataMap property to the desired format. The following is a part of the Person class metadata in JSON format:

{

"DomainClass": "Person",

"TableName": "people",

"ColumnMaps": [{

"ColumnName": "personId",

"PropertyName": "PersonId",

"IsKey": true

}, {

"ColumnName": "firstName",

"PropertyName": "FirstName",

"IsKey": false

}, {

"ColumnName": "lastName",

"PropertyName": "LastName",

"IsKey": false

}, {

"ColumnName": "age",

"PropertyName": "Age",

"IsKey": false

}]

}

Notes:

Metadata can be produced in two different ways at two different times:

The first method is using code generators. With this method, the metadata is given as input to a code generator, which produces the Mapper classes to be placed next to the rest of the source code. These classes can even be generated during the build process, before compilation starts. Since the code generator recreates them on every build, it is unnecessary to include them in a source control system such as Git.
The second method is reflection (as in the example provided). This method gives the code good flexibility but reduces performance. This performance cost should be weighed against the flexibility gained.
The generated metadata can often be saved in XML or JSON format; it can even be stored in database tables. When the metadata is stored in an external data source, the LoadDataMap method seen in PersonMapper, instead of hard-coding the settings, simply loads the existing metadata into the DataMap.
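As a sketch of this external-storage idea, the following example round-trips a simplified metadata description through JSON with System.Text.Json. The DTO types are hypothetical: the real DataMap holds a Type, which is replaced here by the class name so it serializes cleanly:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical DTOs mirroring the JSON metadata shown in the text.
public record ColumnMapDto(string ColumnName, string PropertyName, bool IsKey);
public record DataMapDto(string DomainClass, string TableName, List<ColumnMapDto> ColumnMaps);

public static class Program
{
    public static void Main()
    {
        var map = new DataMapDto("Person", "people", new()
        {
            new("personId", "PersonId", true),
            new("firstName", "FirstName", false)
        });

        // Serialize for storage in a file, a table, or any external source.
        string json = JsonSerializer.Serialize(map);

        // Round-trip: LoadDataMap could consume this JSON instead of
        // hard-coding the mappings.
        var restored = JsonSerializer.Deserialize<DataMapDto>(json)!;
        Console.WriteLine(restored.TableName); // people
    }
}
```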

Consequences: Advantages

Using this design pattern significantly reduces the work and codes required for database mapping.
To add a new model or table, it is enough to write the relevant Mapper and provide the mapping logic (LoadDataMap) in it.
Consequences: Disadvantages

Using this design pattern makes the code refactoring process difficult, because after the metadata is generated, if a property name in the class or a column name in the table changes, the program will face a serious problem.
Applicability:

When faced with many models, each of which must be mapped to a table in the database, using this design pattern can be useful.
Related patterns:

Some of the following design patterns are not related to metadata mapping design patterns, but to implement this design pattern, checking the following design patterns will be useful:

Query Object
Query object
Name:

Query object

Classification:

Object-relational metadata mapping design patterns

Also known as:

---

Intent:

By using this design pattern, an object takes the task of creating queries.

Motivation, Structure, Implementation, and Sample code:

Suppose we are implementing a software requirement. The team responsible for generating and implementing this requirement does not have information about how to write queries in T-SQL language. In this case, implementing the requirement will be a bit difficult. The members of this team may not be able to write good queries, which may seriously damage the program's overall efficiency.

In this situation, the generation of queries can be entrusted to a class, and the users of this class, instead of dealing with the table and its columns, work with the classes and properties they have produced as models. As an example, consider the Person example that we discussed in the metadata mapping design pattern section. The class related to the Person model can be considered as follows:

public class Person

{

public int PersonId { get; set; }

public string FirstName { get; set; }

public string LastName { get; set; }

public int Age { get; set; }

}

Suppose we want to generate the following query using the query object design pattern:

SELECT PersonId, FirstName, LastName, Age FROM Person WHERE Age > 33

In the metadata mapping design pattern, we learned how metadata can be created and used, and how the properties of a class are mapped to table columns. The remaining question is how to generate the condition section, the WHERE clause. A condition in the WHERE clause is often in the Property Operator Value format (for example, Age > 33).

With these explanations, to generate the WHERE section, we define the Criteria class as follows:

public class Criteria

{

public string @Operator { get; set; }

public string Property { get; set; }

public object Value { get; set; }

public Criteria(string @operator, string property, object value)

{

Operator = @operator;

Property = property;

Value = value;

}

}

As shown in the preceding code, the @Operator property is considered for the operator (such as >, <, =, and so on), the Property is considered for the property name, and the Value property is for the value. According to the preceding sample query, this class should be able to make the condition related to Age > 33. For this purpose, we define the GreaterThan method as follows within the Criteria class:

public static Criteria GreaterThan(string property, int value)

=> new Criteria(">", property, value);

As is clear in the preceding code, the GreaterThan method returns a Criteria object after receiving the property name and value. Next, we need to convert these created Criteria into a query understandable by the T-SQL language. For this purpose, we define the GenerateTSQL method as follows in the Criteria class:

public string GenerateTSQL(PersonMapper mapper)

{

var columnMap = mapper.DataMap.ColumnMaps

.FirstOrDefault(x => x.PropertyName == Property);

return $"{columnMap.ColumnName} {Operator} {Value}";

}

Upon receiving a PersonMapper type object, the preceding method reads the relevant metadata (this class has already been implemented in the metadata mapping design pattern) and returns the column related to the provided property and the condition statement from the metadata in T-SQL language.
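The following self-contained sketch condenses the Criteria idea: the metadata lookup is reduced to a property-to-column dictionary so the example runs without the full PersonMapper (an assumption made only for this illustration):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the chapter's Criteria class.
public class Criteria
{
    public string Operator { get; }
    public string Property { get; }
    public object Value { get; }

    public Criteria(string @operator, string property, object value)
        => (Operator, Property, Value) = (@operator, property, value);

    // Factory for the "greater than" condition, as in the text.
    public static Criteria GreaterThan(string property, int value)
        => new(">", property, value);

    // The mapper's metadata is replaced by a simple property -> column lookup.
    public string GenerateTSQL(IDictionary<string, string> columnByProperty)
        => $"{columnByProperty[Property]} {Operator} {Value}";
}

public static class Program
{
    public static void Main()
    {
        var columns = new Dictionary<string, string> { ["Age"] = "age" };
        var criteria = Criteria.GreaterThan("Age", 33);
        Console.WriteLine(criteria.GenerateTSQL(columns)); // age > 33
    }
}
```

The caller only ever mentions the property name "Age"; the column name "age" comes from the metadata, which is exactly the decoupling the pattern is after.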

The objects related to the condition section have been created, and the corresponding T-SQL code has been generated. We should be able to assign it to our query. Therefore, we define the QueryObject class as follows:

public class QueryObject

{

// The mapper supplies the metadata needed to render each criterion to T-SQL.

private readonly PersonMapper mapper = new();

public ICollection<Criteria> Criterias { get; set; } = new List<Criteria>();

}

The QueryObject class has a property called Criterias, which stores all the conditions of a query. Since a WHERE clause can contain more than one condition, this property is defined as an ICollection<Criteria>. Also, the QueryObject class has a method called GenerateWhereClause, which combines the various conditions. The implementation of this method is as follows:

public string GenerateWhereClause()

{

StringBuilder sb = new();

foreach (var item in Criterias)

{

if (sb.Length > 0)

{

sb.Append(" AND ");

}

sb.Append(item.GenerateTSQL(mapper));

}

return sb.ToString();

}

In the preceding code, for simplicity, the different conditions are joined using the AND operator.
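The same joining logic can also be expressed with string.Join, which avoids the manual StringBuilder bookkeeping. A small sketch, with hypothetical already-rendered condition strings:

```csharp
using System;
using System.Collections.Generic;

public static class Program
{
    public static void Main()
    {
        // Conditions as they would come back from Criteria.GenerateTSQL;
        // the values here are made up for the illustration.
        var conditions = new List<string> { "age > 33", "firstName = 'Vahid'" };

        // string.Join inserts the separator only between elements, so no
        // special-casing of the first condition is needed.
        string whereClause = string.Join(" AND ", conditions);
        Console.WriteLine(whereClause); // age > 33 AND firstName = 'Vahid'
    }
}
```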

According to the preceding codes, the WHERE section is made. To use and run a query, you can define the search method in the Mapper class as follows:

public IDataReader FindByWhere(string where)

{

string query =

$"SELECT {DataMap.GetColumns()} " +

$"FROM {DataMap.TableName} " +

$"WHERE {where}";

return new SqlCommand(query, DB.Connection).ExecuteReader();

}

Next, using the Execute method in the QueryObject class, you can issue a query execution request:

public IDataReader Execute() => mapper.FindByWhere(GenerateWhereClause());

And also, we can use it as follows:

QueryObject qb = new QueryObject();

qb.Criterias.Add(Criteria.GreaterThan(nameof(Person.Age), 33));

var result = qb.Execute();

As you can see, the programming team can create and execute T-SQL queries using classes and their properties without needing T-SQL knowledge.

Notes:

This design pattern aligns with the interpreter design pattern from the GoF design patterns.
Instead of writing the QueryObject class with all the capabilities, it is recommended to create its capabilities based on the need over time.
The metadata mapping or UnitOfWork design pattern can be very beneficial in using this design pattern.
Today, Object-Relational Mappers (ORMs) do exactly what can be done with query objects, so if you use ORM in your program, you will not need to use this design pattern.
Using this design pattern, the database structure is encapsulated.
Consequences: Advantages

When the database structure is different from the structure of the models, using this design pattern can be useful.
Combining this design pattern with the identity map design pattern can increase efficiency. For example, suppose that the list of all people has already been read from the database, and the result is placed in the identity map. Now, a new query has been created to apply a condition on the list of people. The query can be answered without sending it to the database and only by referring to the data in the identity map.
Consequences: Disadvantages

Usually, producing queries that answer the needs of a program, with this design pattern, can be a time-consuming or complex task, and often using existing tools can be a better option.
Applicability:

When the database structure is different from the structure of the models, using this design pattern can be useful.
If the production team lacks the knowledge to write queries, this design pattern can be useful by producing an interpreter.
Related patterns:

Some of the following design patterns are not related to query object design patterns, but to implement this design pattern, checking the following design patterns will be useful:

Interpreter
Metadata mapping
UnitOfWork
Identity map
Repository
Name:

Repository

Classification:

Object-relational metadata mapping design patterns

Also known as:

---

Intent:

This design pattern tries to implement CRUD1 processes optimally and provide them to the user by placing a layer between the data access layer and the rest of the system.

Motivation, Structure, Implementation, and Sample code:

Suppose it is requested to provide a function through which a new user can be defined in the system, edit and delete existing users, find a specific user, and display the list of users. There are different ways to implement this scenario. One of these ways is to design a class to make appropriate queries. The problem with this type of design is that as new entities are added, it will be necessary to write separate classes for each one. Naturally, we will face many duplicate codes in this method.

Another method is to define the desired data operations in an interface, which each entity implements. In this implementation, each entity defines its own appropriate queries. This is the approach the repository design pattern provides. Based on the preceding explanations, the following class diagram can be imagined:

Figure 10.1: Repository design pattern UML diagram

As shown in Figure 10.1, the IRepository interface defines the necessary methods for working with data. The UserRepository class implements this interface for the User entity and creates appropriate queries for each operation. With these explanations, the following code can be considered for the repository design pattern:

public class UserDbSet

{

public static List<User> Users = new()

{

new User() { Id = 1, Name = "Vahid" },

new User() { Id = 2, Name = "Ali" },

new User() { Id = 3, Name = "Reza" },

new User() { Id = 4, Name = "Maryam" },

new User() { Id = 5, Name = "Hassan" }

};

}

public class User

{

public int Id { get; set; }

public string Name { get; set; }

public override string ToString() => $"Id: {Id}, Name: {Name}";

}

public interface IRepository

{

User Find(int id);

List<User> GetAll();

void Add(User user);

void Update(User user);

void Delete(int id);

}

public class UserRepository : IRepository

{

public void Add(User user)

{

if (UserDbSet.Users.All(x => x.Id != user.Id))

UserDbSet.Users.Add(user);

}

public void Delete(int id) => UserDbSet.Users.Remove(Find(id));

public User Find(int id) => UserDbSet.Users.FirstOrDefault(x => x.Id == id);

public List<User> GetAll() => UserDbSet.Users;

public void Update(User user)

{

var originalUser = Find(user.Id);

if (originalUser != null)

{

originalUser.Name = user.Name;

}

}

}

Now, to use this Repository, you can proceed as follows:

IRepository repository = new UserRepository();

repository.Add(new User { Id = 6, Name = "Narges" });

repository.Update(new User { Id = 3, Name = "Alireza" });

repository.Delete(4);

Console.WriteLine(repository.Find(1));

foreach (User user in repository.GetAll())

{

Console.WriteLine(user);

}

As can be seen, to work with data the client does not need to engage with or write queries; it submits its requests to the data source only through the Repository.

If we look at the IRepository interface, we notice that it receives a User object in methods such as Add. This creates a problem: if we need to define a Repository for many entities, we will need a separate interface for each one. To solve this problem, the repository design pattern can be implemented using generics in C#. For this purpose, the preceding code can be rewritten as follows:

public interface IRepository<TEntity, TKey>

{

TEntity Find(TKey id);

List<TEntity> GetAll();

void Add(TEntity entity);

void Update(TEntity entity);

void Delete(TKey id);

}

Instead of being tied to User, the IRepository interface receives the required types through generic type parameters. In the preceding code, the interface has two generic types: TEntity is the entity we want to add to or edit in the data source, and TKey is the data type of the entity's key field, with whose help we find or delete the entity in the data source. To have a safer interface, you can also define generic constraints on these type parameters to prevent possible runtime errors. According to the above interface, the UserRepository class will be rewritten as follows:

public class UserRepository : IRepository<User, int>

{

// Class implementation…

}

With the preceding change applied to the IRepository definition, defining an interface for each entity is unnecessary.

The next point about this design pattern is that the implementation of the IRepository methods may be identical across all entities, yet a Repository would still have to be defined for each entity. To prevent this, the entire Repository implementation can be made generic: a general class implements the IRepository interface and provides the method implementations, and any entity that needs a repository uses this class. If an entity needs its own Repository, or needs to change the default implementation, it can do so using inheritance.

With these explanations, the Repository implementation can be changed as follows:

public class Repository<TEntity, TKey> : IRepository<TEntity, TKey> where TEntity : class

{

private readonly DbSet<TEntity> _set;

public Repository(DbContext dbContext)

{

_set = dbContext.Set<TEntity>();

}

//…

}

As seen in the preceding code, the Repository class, like the IRepository interface, is defined generically. The important point in its implementation is that the Set method provided by Entity Framework, given the entity type, returns the object corresponding to the table (the _set object). Once this object is available, implementing the various data operations is easy. This way of implementing the repository design pattern is called a generic repository. With a generic repository, there is often no need to define an interface or class per entity, and the amount of code is significantly reduced.
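The generic repository idea can be sketched without Entity Framework. The following self-contained example keeps entities in a dictionary and assumes a hypothetical IEntity<TKey> interface through which each entity exposes its key (an assumption not made in the chapter, used here so the sketch compiles on its own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical contract: every entity can report its own key.
public interface IEntity<TKey> { TKey Id { get; } }

// One generic class serves every entity type; no per-entity repository needed.
public class InMemoryRepository<TEntity, TKey>
    where TEntity : IEntity<TKey>
    where TKey : notnull
{
    private readonly Dictionary<TKey, TEntity> _set = new();

    public void Add(TEntity entity) => _set.TryAdd(entity.Id, entity);
    public TEntity? Find(TKey id) => _set.TryGetValue(id, out var e) ? e : default;
    public List<TEntity> GetAll() => _set.Values.ToList();
    public void Delete(TKey id) => _set.Remove(id);
}

public class User : IEntity<int>
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public static class Program
{
    public static void Main()
    {
        var repo = new InMemoryRepository<User, int>();
        repo.Add(new User { Id = 1, Name = "Vahid" });
        repo.Add(new User { Id = 2, Name = "Ali" });
        repo.Delete(2);
        Console.WriteLine(repo.Find(1)?.Name);  // Vahid
        Console.WriteLine(repo.GetAll().Count); // 1
    }
}
```

The same class works unchanged for a CarRepository or any other entity; only the type arguments differ.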

Notes:

Today, Entity Framework implements the repository design pattern internally. The question arises: if this design pattern is already implemented in Entity Framework, do we need to re-implement it in our code? The answer can be challenging, but it seems that for small systems re-implementing the repository design pattern may not be justified, while for larger systems, placing an additional layer between the client and Entity Framework can be very useful. Some of the advantages of this approach are:
If the client and Entity Framework are directly connected, LINQ queries leak into the business code, increasing the coupling between the two layers. With a repository layer between them, all queries are placed inside this intermediate layer, and loose coupling between the business layer and the data access layer is achieved.
When we need to write unit tests, in the presence of the Repository we can simply connect the tests to a mock repository and write and run them easily.
This design pattern is very similar to the query object design pattern, and combining query objects with metadata mapping produces the required queries.
In using the repository design pattern, it is not necessary to have relational databases.
Usually, this design pattern is used next to the UnitOfWork design pattern.
Consequences: Advantages

The code related to the business layer is separated from the data access layer. As a result, if the data source needs to change, applying this change requires minimal modifications to the code.
Using this approach, codes can be tested more easily.
Repetitive queries will be reduced during the program.
Separation of Concerns (SoC) is observed by separating the business logic from data access.
Consequences: Disadvantages

Considering that an additional abstraction layer is created, this design pattern can increase complexity for small programs.
Applicability:

This design pattern can be useful when there is a need to separate the business logic layer from the data access layer.
Using this design pattern can be very useful for large and complex programs.
Related patterns:

Some of the following design patterns are not related to the repository design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Query object
Metadata mapping
UnitOfWork
Conclusion
In this chapter, you learned how to extract class metadata using different methods, such as reflection. You also learned how to centralize the creation of T-SQL queries (or queries for any other database) with the help of the query object design pattern. Finally, you learned how to create a layer between the data access layer and the rest of the system and use it to communicate with the data source to perform various CRUD operations.

In the next chapter, you will learn about Web Presentation design patterns.

1 Create-Read-Update-Delete


NET 7 Design Patterns In-Depth 9. Object-Relational Structures Design Patterns

Chapter 9
Object-Relational Structures Design Patterns
Introduction
To organize the object-relational structures, the design patterns of this category can be divided into the following ten main sections:

Identity field: Tries to maintain the data identity between the object and the database record by storing the value of the database identity column(s) inside the object.
Foreign key mapping: Tries to map the foreign key relationship between tables in the database as a relationship between objects.
Association table mapping: Tries to map the many-to-many relationship between tables in the database as relationships between objects.
Dependent mapping: Attempts to make a class responsible for managing child class communication with the database.
Embedded value: It can map an object to several table columns related to another object.
Serialized LOB: It can convert a graph of objects into a LOB (Large Object) and store it in a database field.
Single table inheritance: Maps all the classes participating in an inheritance structure to a single table.
Class table inheritance: Tries to map each class to one table in the database in an inheritance structure.
Concrete table inheritance: Tries to map each non-abstract class to one table in the database.
Inheritance mappers: Tries to organize the various Mappers to manage the inheritance structure.
Structure
In this chapter, we will cover the following topics:

Object-relational structures design patterns
Identity field
Foreign key mapping
Association table mapping
Dependent mapping
Embedded value
Serialized LOB
Single table inheritance
Class table inheritance
Concrete table inheritance
Inheritance mappers
Objectives
In this chapter, you will learn how to map different relationships between database tables (such as one-to-many and many-to-many relationships) to classes, and how to map an inheritance structure to tables in a database. You will also learn how to work with dependent data and map it to the data in the database tables.

Object-relational structures design patterns
There are important points to pay attention to when mapping relationships. First, to associate an object with a record in a table, it is often necessary to insert a primary key value into the object. For this, using the identity field design pattern will be useful. Now, one table may be related to another through a foreign key relationship, and the foreign key mapping design pattern can be used to implement this relationship between objects.

The connections between objects may form very long chains or cycles. Therefore, when loading one object, it may be necessary to fetch the data of a dependent object, and to repeat this process continuously; as mentioned earlier, the lazy load design pattern can be useful here. But consider a situation where there is a one-to-many relationship between A and B, and B's data is only ever used through A; outside of A we have nothing to do with it. In this case, A can be considered the owner of B, and the dependent mapping design pattern applies. This type of relationship becomes more complicated when the one-to-many relationship becomes many-to-many; in that case, you can use the association table mapping design pattern.

As mentioned earlier, the identity field design pattern can be used for communication between objects. But some data should not be mapped to its own tables, and value objects are among such data. The embedded value design pattern is a better way to store value objects. Sometimes the amount of data to be stored this way can be very large; to solve this problem, you can use the serialized LOB design pattern.

The preceding points were related to composition-type relationships. Sometimes there may be an inheritance relationship between classes. This type of relationship does not exist in relational databases, so there are three options for implementing this type of relationship: one table for the entire inheritance structure (Single table inheritance design pattern), one table for each non-abstract class (Concrete table inheritance design pattern), and finally one table for each class (Class table inheritance design pattern).

Identity field
Name:

Identity field

Classification:

Object-relational structures design patterns

Also known as:

---

Intent:

This design pattern tries to maintain the data identity between the object and the database record by storing the value of the database identity column(s) inside the object.

Motivation, Structure, Implementation, and Sample code:

Suppose a requirement is raised to retrieve the data in the books table or to define a new book. Assume the following structure for the books table in the database:

BookId | Title                    | PublishDate
-------|--------------------------|------------
1      | Design Patterns in .NET  | 2021-10-01
2      | Programming in C#.NET    | 2021-11-01

Table 9.1: Book table in the database

When using the identity field design pattern, we store the identity value of the books table in the object of the Book class. This identifier must have one important feature: its value must be unique, because we must be able to uniquely identify each record and each object through it. The best candidate for the identifier is often the primary key value, since the primary key always has a unique value. With this assumption, the code for the Book class is as follows:

public class Book

{

public int BookId { get; set; }

public string Title { get; set; }

public DateOnly PublishDate { get; set; }

}

As it is clear from the code related to the Book class, the BookId property will be used to store the value of the BookId column of the database. Based on this structure, the following objects can be considered:

Book book1 = new()

{

BookId = 1,

Title = "Design Patterns in .NET",

PublishDate = new DateOnly(2021, 10, 01)

};

In the preceding code, the value of the BookId property of the book1 object is 1, so this object refers to the record with BookId = 1 in the books table.

To insert a new book, you can do the following:

public bool Create(Book book)

{

book.BookId = GetNewBookId();

return Save(book);

}

private int GetNewBookId()

{

/*

* In this section, the desired method to generate the

* new value of the key must be implemented.

*/

}

According to the type of key (simple or composite) and the selected method to generate the new value of the key, the GetNewBookId method should be implemented.
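One possible, deliberately naive, implementation of GetNewBookId for a simple integer key is to take the maximum existing key and add one; for concurrent writers, a database sequence or a GUID would be safer. A sketch, with the existing keys held in an in-memory list for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Program
{
    // Stand-in for the keys already present in the books table.
    static readonly List<int> ExistingBookIds = new() { 1, 2 };

    // Naive "max + 1" key generation; not safe under concurrent inserts.
    public static int GetNewBookId()
        => (ExistingBookIds.Count == 0 ? 0 : ExistingBookIds.Max()) + 1;

    public static void Main()
    {
        Console.WriteLine(GetNewBookId()); // 3
    }
}
```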

Notes:

Key selection is very important in this design pattern. There are two main options for choosing a key. Using a meaningful key and using a meaningless key:
Meaningful key: It is also called natural key; its value is always unique and has a meaning for the user. For example, the ISBN is a unique value for a book and has a meaning for the end user.
Meaningless key: It is also called a generated key. It is a key whose value is unique but has no meaning for the user. For example, the primary key column, which is automatically and sometimes randomly set, is often a meaningless key for the user.
Choosing between these two types of keys is important. Although the meaningful key has a unique value, it has an important shortcoming: a key value is expected to be immutable as well as unique, and immutability does not necessarily hold for a meaningful key, because a mistake may occur when entering its value and the user may want to edit it. In that case, the immutability condition is violated. Moreover, until the changes are sent to the database and the database has verified the uniqueness of the value, the object may already be in use; this can affect reliability. Still, for small scenarios or special situations, a meaningful key can be useful.

Keys can be simple or composite. For example, in the database design, the bookId column may have the role of the primary key; in this case, the key is simple. In another case, for example in a sales information table, the storeId and bookId columns together may form the key; in this case, it is a composite key. Managing uniqueness and immutability constraints for composite keys is more delicate, and defining a key class can help. In any case, two types of operations should be considered for a key: comparing values and generating the next value. The following code shows how to implement the key class:
public override bool Equals(object? obj)
{
    if (obj is not CompoundKey)
        return false;
    CompoundKey other = (CompoundKey)obj;
    if (keys.Length != other.keys.Length)
        return false;
    for (int i = 0; i < keys.Length; i++)
        if (!keys[i].Equals(other.keys[i]))
            return false;
    return true;
}

The preceding method shows the value comparison process for the composite key. Inside the for loop, the members of the composite key are compared one by one. Depending on the requirement, this comparison logic may vary. The composite key itself can be defined as follows:

public class CompoundKey
{
    private object[] keys;
    public CompoundKey(object[] keys) => this.keys = keys;
}

Different constructors with different input types can be defined to create composite keys with specified members or values, and the preceding code can be improved.
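As one such improvement, C# expects GetHashCode to be overridden whenever Equals is overridden, so that equal keys hash equally (necessary if the key is used in a dictionary-backed identity map). The following completed sketch adds that override; the HashCode-based implementation is one reasonable choice, not the only one:

```csharp
using System;

public class CompoundKey
{
    private object[] keys;
    public CompoundKey(object[] keys) => this.keys = keys;

    public override bool Equals(object? obj)
    {
        if (obj is not CompoundKey other || keys.Length != other.keys.Length)
            return false;
        for (int i = 0; i < keys.Length; i++)
            if (!keys[i].Equals(other.keys[i]))
                return false;
        return true;
    }

    // Combine member hash codes so that equal composite keys
    // produce equal hash codes.
    public override int GetHashCode()
    {
        var hash = new HashCode();
        foreach (var key in keys)
            hash.Add(key);
        return hash.ToHashCode();
    }
}
```

With this in place, a key such as new CompoundKey(new object[] { 1, 5 }) compares equal to another key built from the same members.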

Keys can be unique at two levels. The first level requires that the key value be unique within a table; the second level requires that the key value be unique across the entire database. Using the first level is far more common, while the advantage of the second level is that a single identity map can serve the whole database.
Depending on the data type selected for the key, its maximum value may eventually be reached. In this case, key values used by deleted records can be reused to generate the next value, provided that the deletion was physical. 64-bit data types or composite keys reduce the chance of reaching the maximum allowed value. A common mistake is to use data types such as GUID: although their values are always unique, they occupy a lot of space, so the necessary checks must be done before using them.
When creating a new object and storing its value in the database, generating a new value for the key will be necessary. Different ways to generate a new value for the key include setting the key column in the database to auto-generate, using the GUID for the key, or implementing your mechanism:
Using the auto-generate method is simpler. However, it may cause problems in the record-to-object mapping process, because until the record is inserted into the database and the same data is retrieved back, the value assigned to the key is unknown. An alternative to auto-generate in the Microsoft SQL Server database is to use a sequence, which plays the role of a database-level counter.
Using the GUID method, as mentioned earlier, may cause performance problems.
To implement a custom mechanism, methods such as using the MAX function to find the last value of the key may be used. This may work for small tables with a low rate of concurrent changes. It is not a good method for larger tables or tables with a high rate of concurrent changes: besides its negative effect on performance, an inappropriate transaction isolation level may cause multiple objects to receive the same key value. Another way to implement a custom mechanism is to use a key table, which stores the next value for the key and is read whenever a new object must be created. It is better to access the key table through a separate transaction so that other transactions that need a key do not wait for the lock to be released. The drawback of this approach is that if the insert transaction is canceled for any reason, the generated key value is lost.
An alternative method for using the identity field design pattern is the identity map design pattern.
Consequences: Advantages

Using this design pattern, a shared identifier links a database record with the object created in memory, so the object and its corresponding record can be identified through that identifier. Without this design pattern, the primary key value identifies the uniqueness of a record in the database, and .NET objects have their own identity, but there is no relationship between these two values.
Consequences: Disadvantages

If the data type of the key is a string, generating a new key or comparing the values of the keys can be time-consuming or heavy operations.
If the new key value is duplicated for any reason, the duplication cannot be detected until the value is sent to the database. Obviously, performance suffers in this case.
Applicability:

When there is a need to map records in the database to objects, using this design pattern can be useful. This mapping usually uses domain models or row data gateway design patterns.
Related patterns:

Some of the following design patterns are not related to identity field design patterns, but to implement this design pattern, checking the following design patterns will be useful:

Identity map
Domain model
Row data gateway
Foreign key mapping
Name:

Foreign key mapping

Classification:

Object-relational structures design patterns

Also known as:

---

Intent:

Using this design pattern, the foreign key relationship between tables in the database is mapped as a relationship between objects.

Motivation, Structure, Implementation, and Sample code:

Continuing the example presented in the identity field design pattern, we assume we need additional information for each book, located in a separate table next to the book object. The relationship between the books and the bookDetail is one-to-one. One-to-one relationship means that each book has only one record in the bookDetail table, and each record in the bookDetail table corresponds to one book. In other words, the following relationship can be imagined between the books and bookDetail tables:

Figure 9.1: Relationship between books and bookDetail tables

As it is clear from Figure 9.1, ER diagram, the books table has a primary key column called bookId, and the bookDetail table also has a foreign key column called bookId. In the bookDetail table, the bookId column also plays the role of the primary key. To implement this relationship, you can do as follows:

public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
    public DateOnly PublishDate { get; set; }
    private BookDetail detail;
    public BookDetail Detail
    {
        get => detail;
        set
        {
            detail = value;
            if (detail.Book == null)
                detail.Book = this; // 'this' refers to the current instance of the class
        }
    }
}
public class BookDetail
{
    public int BookId { get; set; }
    public byte[] File { get; set; }
    public Book Book { get; set; }
}

As it is clear from the preceding code, both the Book and BookDetail classes are designed using the identity field design pattern. The BookId property in these two classes is mapped to the bookId column in the books and bookDetail tables. Also, the Book class has a property called Detail that refers to the BookDetail class, and the BookDetail class has a property called Book that refers to the Book class. This relationship maps the database's foreign key to a relationship between objects. In the preceding code, the one-to-one relationship is implemented bidirectionally, which means that the corresponding BookDetail object can be accessed through a Book object and vice versa. Removing either the Book or the Detail property is enough to make this relationship unidirectional.

In this one-to-one implementation, inserting, editing, and deleting will be simple operations. To insert or edit a record in the supplementary information table of the book, create the BookDetail object and send it to the database. The only important point in constructing this object is to comply with the principles related to the foreign key constraint.
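To make the foreign key constraint concrete, the sketch below simulates the two tables with in-memory lists (FakeDb and its members are illustrative, not part of the book's code); it shows why the Book row must be inserted before its BookDetail row:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Book { public int BookId; public string Title = ""; }
public class BookDetail { public int BookId; public byte[] File = Array.Empty<byte>(); }

// In-memory stand-in for the books and bookDetail tables, used here
// only to illustrate the parent-first insert order the FK imposes.
public class FakeDb
{
    public List<Book> Books = new();
    public List<BookDetail> BookDetails = new();

    public void InsertBook(Book b) => Books.Add(b);

    public void InsertDetail(BookDetail d)
    {
        // Simulated FK constraint: the referenced book must already exist.
        if (!Books.Any(b => b.BookId == d.BookId))
            throw new InvalidOperationException("FK violation: insert the Book first.");
        BookDetails.Add(d);
    }
}
```

Inserting the Book first and then its BookDetail succeeds; inserting a detail whose bookId has no matching book fails, just as the real constraint would reject it.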

Next, suppose that we need to implement the comments that the readers have given to the book. For this purpose, the comments table is designed in the database. There is a one-to-many relationship between the books and comments tables, which means that there can be several comments for one book, and on the other hand, each comment is only specific to one book. With these explanations, the ER diagram can be imagined as follows:

Figure 9.2: Relationship between books and comments tables

As shown in the Figure 9.2 ER diagram, the comments table has a column called bookId, which plays the role of a foreign key and is connected to the bookId column in the books table. Also, as determined by the relationship type, several records may be in the comments table for each bookId. With these explanations, the following code can be considered to implement this relationship:

public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
    public DateOnly PublishDate { get; set; }
    private ICollection<Comment> comments = new List<Comment>();
    public ICollection<Comment> Comments
    {
        get => comments;
        set
        {
            comments = value;
            foreach (var item in comments.Where(x => x.Book == null))
                item.Book = this; // 'this' refers to the current instance of the class
        }
    }
    private BookDetail detail;
    public BookDetail Detail
    {
        get => detail;
        set
        {
            detail = value;
            if (detail.Book == null)
                detail.Book = this;
        }
    }
}
public class Comment
{
    public int CommentId { get; set; }
    public string Text { get; set; }
    private Book book;
    public Book Book
    {
        get => book;
        set
        {
            book = value;
            if (!book.Comments.Contains(this))
            {
                book.Comments.Add(this);
            }
        }
    }
}

As it is clear in the preceding code, the Book class has an ICollection property called Comments, through which you can see all the comments related to that book. Also, the Comment class has a property called Book, which can connect the comment to its corresponding book. This method will also represent the implementation of one-to-many relationships. Remember that the preceding implementation is bidirectional, and it is possible to implement this relationship unidirectionally, as discussed in BookDetail.

In implementing the one-to-many relationship, there are three different ways to implement the editing process:

Delete and insert: This method is the easiest way to implement the editing operation. All the comments related to the book are removed from the comments table, and then all the comments in the Book object are inserted again. This method has various problems: even comments that have not changed are deleted and re-inserted, and, more importantly, this method can only be used if the comments are not referenced anywhere else.
Reconciliation: Can be done in two ways:
Reconciliation with the data in the database: In this method, the comments related to the book should be read from the database and compared with the current comments. Any comment in the database but not in the list of current comments should be deleted. Any comment in the current list and not among the comments read from the database should be inserted.
Reconciliation with previously fetched data: In this method, storing the book comments fetched in the beginning will be necessary. This method will work better than the previous method because there is no need to refer to the database, but the data reliability will be low. The comparison method in this method will be the same as the previous method.
Using a bidirectional relationship instead of a unidirectional relationship: This method, also called the Back Pointer, allows us to access the related book through each comment instead of accessing the list of comments through the book. In this case, we will have a one-to-one relationship with the book for each comment, and as the one-to-one relationship explained earlier, the editing process can be implemented.
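A minimal sketch of the first reconciliation strategy, assuming comments are matched by CommentId (the record type and method names here are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

public record CommentRow(int CommentId, string Text);

public static class CommentReconciler
{
    // Returns (toInsert, toDelete): comments present only in memory
    // must be inserted; comments present only in the database must
    // be deleted.
    public static (List<CommentRow> toInsert, List<CommentRow> toDelete)
        Reconcile(IEnumerable<CommentRow> inDatabase, IEnumerable<CommentRow> current)
    {
        var dbIds = inDatabase.Select(c => c.CommentId).ToHashSet();
        var curIds = current.Select(c => c.CommentId).ToHashSet();
        var toInsert = current.Where(c => !dbIds.Contains(c.CommentId)).ToList();
        var toDelete = inDatabase.Where(c => !curIds.Contains(c.CommentId)).ToList();
        return (toInsert, toDelete);
    }
}
```

The second reconciliation variant uses the same comparison but takes the previously fetched list instead of re-reading the database.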
For the delete operation, the process will be almost similar. Deleting a comment often means that the comment is completely deleted, which naturally should also be deleted from the table. Or, it means that the comment has been transferred from one book to another, in which case, the comment record will also be edited when processing that book. In some cases, the comment may not be related to a specific book. In such cases, the value of the bookId column in the comments table will be equal to null (This method is only possible if the corresponding foreign key constraint allows storing null in the foreign key column table).

Notes:

If the relation is of immutable type, then the editing operation will be meaningless. For example, comments sent for a book cannot be edited or transferred to another book.
Using the UnitOfWork design pattern to implement editing, inserting, or deleting scenarios can be very helpful.
In implementing this design pattern, the occurrence of a cycle in the object graph is very likely. You can benefit from the lazy load or identity map design patterns to prevent stack overflow and properly manage the cycle.
In the one-to-many relationship, you can use join to write a query to receive the list of book comments instead of sending several queries to the database. By sending one query to the database, all the information is required to be received. In this method, the number of remote calls or references to the database will be reduced, and the efficiency will be significantly improved.
Consequences: Advantages

Using this design pattern, the first normal form can be observed.
Consequences: Disadvantages

Implementing a many-to-many relationship using this design pattern is impossible because, according to the first normal form, the values of several foreign keys cannot be stored in one field.
Applicability:

To implement relationships between tables, this design pattern can be used. The association table mapping design pattern should be used to implement many-to-many relationships.
Related patterns:

Some of the following design patterns are not related to foreign key mapping design patterns, but to implement this design pattern, checking the following design patterns will be useful:

Identity field
Unit of work
Lazy load
Identity map
Association table mapping
Association table mapping
Name:

Association table mapping

Classification:

Object-relational structures design patterns

Also known as:

---

Intent:

Using this design pattern, the many-to-many relationship between tables in the database is mapped as relationships between objects.

Motivation, Structure, Implementation, and Sample code:

In the continuation of the example presented for the foreign key mapping design pattern, let us assume that we need to have the information of the authors of the book along with the book object. The book information is in the tblBooks table, and the author information is in the authors table. Each author may have written several books, and each book may have several authors. With these explanations, we can conclude that the relationship between the book and the author is many-to-many, and the foreign key mapping design pattern cannot implement it. The foreign key mapping pattern is useful when one side of the relationship is single-valued (such as one-to-one or one-to-many relationships); here, we are facing a relationship where both sides are multi-valued.

In the database world, there is a famous solution to solve this problem, and that is to use an intermediate table. The task of this intermediate table is to convert the many-to-many relationship into two one-to-many relationships. This intermediate table is sometimes called a link table. The name of this intermediate table in the given example is bookAuthors. The relationship between the tblBooks and the authors can be imagined as follows:

Figure 9.3: Relationship between tblBooks and authors tables

As shown in the Figure 9.3 ER diagram, the tblBooks table has a one-to-many relationship with the bookAuthors table, and the authors table has the same one-to-many relationship with the bookAuthors table. The primary key of this link table is a composite key defined from the combination of the bookId and authorId columns. Now that the many-to-many relationship problem has been solved in the database, you can use the association table mapping design pattern to solve this problem at the model level in the .NET code:

public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
    public DateOnly PublishDate { get; set; }
    public ICollection<Author> Authors { get; set; } = new List<Author>();
}
public class Author
{
    public int AuthorId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public ICollection<Book> Books { get; set; } = new List<Book>();
}

As it is clear in the preceding code, the Book class is connected to the Author class through the Authors property, and the Author class is also connected to the Book class through the Books property. The important thing about this design pattern is that, as seen in the preceding code, the link table designed in the database has no related objects.

Suppose we need to get the list of authors of a specific book. There are two ways to read the information in this structure:

Refer to the link table and get different authorIds for the desired book and then refer to the authors table and get the information of each author. If the data is not in the memory and it is necessary to refer to the database for each step, we will face a performance problem, and otherwise, if the data is in the memory, this process can be simple.
Using Join in the database and receiving information at once. In this case, going back and forth to the database will be reduced, which will reduce the number of remote calls and improve efficiency; on the other hand, complexity will be created to map the results on the models.
To perform the editing operation: since the records of the link table are often not referenced by any other table, editing can easily be implemented as a delete followed by an insert (contrary to what we saw in the foreign key mapping design pattern). If the records of the link table are referenced by other tables, then the same approach described in the foreign key mapping design pattern should be used.
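The delete-and-insert edit of the link table can be sketched as follows, with the table simulated by an in-memory list (the record and method names are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

// A row of the bookAuthors link table (illustrative).
public record BookAuthorLink(int BookId, int AuthorId);

public static class BookAuthorMapper
{
    // Edit as delete-and-insert: remove every link of the book,
    // then re-insert one link per current author.
    public static void UpdateLinks(List<BookAuthorLink> linkTable,
                                   int bookId, IEnumerable<int> authorIds)
    {
        linkTable.RemoveAll(l => l.BookId == bookId);
        linkTable.AddRange(authorIds.Select(a => new BookAuthorLink(bookId, a)));
    }
}
```

Links of other books are untouched; only the edited book's rows are replaced wholesale, which is exactly why this simple strategy is safe when no other table references the link rows.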

Notes:

Sometimes the link table, apart from maintaining the values of the foreign keys, also carries additional information. In this case, the link table will have its class. For example, suppose we need to know how much each author contributed to writing a book, and the contribution value is the data that should be stored in the bookAuthors table.
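Such a link class could look like this, where ContributionPercent is the hypothetical extra payload mentioned above:

```csharp
public class BookAuthor
{
    // The two foreign keys that together form the link's composite key.
    public int BookId { get; set; }
    public int AuthorId { get; set; }

    // Extra data carried by the link row itself, e.g., the author's
    // share in writing the book (illustrative).
    public decimal ContributionPercent { get; set; }
}
```

With this class in place, Book and Author would refer to collections of BookAuthor rather than directly to each other.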
Consequences: Advantages

Using this design pattern, you can connect two different tables without adding a new column to the table and simply by creating an intermediate table.
Consequences: Disadvantages

This design pattern can also be used to implement other types of relationships, but that is not recommended due to the added complexity.
Applicability:

To implement many-to-many relationships, this design pattern can be used.
Related patterns:

Some of the following design patterns are not related to the association table mapping design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Foreign key mapping
Dependent mapping
Name:

Dependent mapping

Classification:

Object-relational structures design patterns

Also known as:

---

Intent:

Using this design pattern, a class becomes responsible for managing the communication of the child class with the database.

Motivation, Structure, Implementation, and Sample code:

Following the examples presented in the previous sections, suppose we need to have the contact information and address of the authors. The relationship between the contact information table or contactInfo and the authors table is one-to-many. In this way, each author can have several contact information. For example, one author can provide a phone number, mobile phone number, and work address in the form of contact information, and another author can provide a home address and work address.

To implement this scenario, it is very important to pay attention to one point: the identity and nature of the contact information are meaningful only through the author. In other words, the contact information cannot be accessed and worked with by itself, and all transactions must go through the author. This structure is exactly what the dependent mapping design pattern seeks to provide.

In this design pattern, we face a series of dependent elements and an owner for that dependent element. All transactions necessary for fetching, inserting, editing, and deleting the dependent element will happen only through its owner. Now, with the explanations provided, consider the following codes to implement the requirements:

public class Author
{
    public int AuthorId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public ICollection<ContactInfo> ContactInfo { get; set; }
    public void UpdateAuthor(Author author)
    {
        this.FirstName = author.FirstName;
        this.LastName = author.LastName;
        UpdateContactInfo(author);
    }
    // Editing contact information includes deleting and re-inserting it
    public void UpdateContactInfo(Author author)
    {
        RemoveAllContactInfo();
        foreach (var item in author.ContactInfo)
        {
            AddContactInfo(item);
        }
    }
    /*
     * All communication of contact information with the database
     * happens through the owner
     */
    public void AddContactInfo(ContactInfo contactInfo)
        => ContactInfo.Add(contactInfo);
    public void RemoveContactInfo(ContactInfo contactInfo)
        => ContactInfo.Remove(contactInfo);
    public void RemoveAllContactInfo() => ContactInfo.Clear();
}
public class ContactInfo
{
    public ContactType ContactType { get; set; }
    public string Value { get; set; }
}
public enum ContactType : short
{
    HomePhone,
    MobilePhone,
    HomeAddress,
    WorkAddress
}

The preceding code shows that the Author class has a property called ContactInfo that maintains the set of the author's contact information. This class provides the necessary methods to insert and delete the author's contact information (depending on the requirement, different implementations may be provided for these methods; the sample is kept simple here). The ContactInfo class, on the other hand, only has ContactType for the type of contact information (mobile phone number, home number, residential address, or work address) and Value to store the value of the contact information. As the code shows, any communication with the ContactInfo data in the database must go through the Author class. Therefore, the Author class plays the role of the owner, and the ContactInfo class plays the role of the dependent element. In the UpdateContactInfo method of the Author class, assuming the dependent element is immutable, the editing operation is implemented as delete-and-reinsert.
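The owner-only access rule can be shown in a short usage sketch (the classes are reduced here to the members needed for the demonstration):

```csharp
using System.Collections.Generic;

public enum ContactType : short { HomePhone, MobilePhone, HomeAddress, WorkAddress }

public class ContactInfo
{
    public ContactType ContactType { get; set; }
    public string Value { get; set; } = "";
}

// The dependent ContactInfo elements are never touched directly:
// every change goes through the owning Author.
public class Author
{
    public ICollection<ContactInfo> ContactInfo { get; } = new List<ContactInfo>();
    public void AddContactInfo(ContactInfo c) => ContactInfo.Add(c);
    public void RemoveAllContactInfo() => ContactInfo.Clear();
}
```

A caller adds or clears contact information only through the Author instance; there is no repository or finder for ContactInfo on its own.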

Notes:

Each dependent element must have only one owner.
When using active record or row data gateway design patterns, the class related to the dependent element will not contain the necessary code to communicate with the database. As mentioned before, these communications should be established only through the owner, and these codes will be placed in the owner's mapper class when using the data mapper design pattern. Finally, there will be no dependent element when using the table data gateway design pattern.
Usually, when reading the owner's information from the database, related elements are also retrieved. If reading the information on dependent elements is costly, or the information related to dependent elements is of little use, the lazy load design pattern can be used to improve efficiency.
The dependent element does not have an identity field, so it is not possible to place it in the identity map; so, like other models, the method that returns a specific record by receiving the key value is meaningless for this element. All these works will be in the scope of the owner's duties.
A dependent element can be the owner of another dependent element. In this case, the main owner has the same duty towards the second dependent element as the first. Therefore, all the communications of the second dependent element with the database must happen through the owner of the first dependent element.
The foreign key of any table should not be inside the dependent element table unless the owner of both tables is the same.
Using UML, the relationship between the owner and the dependent element can be displayed through the Composition relationship.
The use of this design pattern requires two necessary conditions:
A dependent element must have only one owner.
No reference should be from any object other than the owner to the dependent element.
As much as possible, the communication graph using this design pattern should not be too long. The longer and larger this graph is, the more complicated the operation of managing the dependent element through the owner will be.
If the UnitofWork design pattern is used, it is not recommended to use this design pattern. Because UnitofWork does not recognize dependent elements, which can damage the principles of the dependent mapper design pattern.
Consequences: Advantages

If we consider the dependent element immutable, then the change process in the dependent element and how to manage it will be very simple. Editing can be simulated by removing all dependent elements and inserting them again.
Consequences: Disadvantages

Using this design pattern, it will be difficult to track changes. For example, whenever the dependent element changes, this change should be notified to the owner so that the owner can prepare and send the changes to be applied to the database.
Applicability:

This design pattern can be used when an object is available through another object. Often, the type of this relationship is one-to-many.
Using this design pattern will be useful when there is a one-to-many relationship between two models, but their connection is implemented unidirectionally.
Related patterns:

Some of the following design patterns are not related to dependent mapping design patterns, but to implement this design pattern, checking the following design patterns will be useful:

Active record
Row data gateway
Data mapper
Table data gateway
Lazy load
Identity map
UnitofWork
Embedded value
Name:

Embedded value

Classification:

Object-relational structures design patterns

Also known as:

Aggregate mapping or composer

Intent:

Using this design pattern, one object can be mapped to several table columns related to another object.

Motivation, Structure, Implementation, and Sample code:

Following the previous examples, suppose we need to store the information of the publisher of the book. The book publisher has information such as name and address. The address of the book publisher has been designed and implemented as a value object in which the address consists of country, province, city, and street sections.

To implement this structure, one way is to consider two different classes for the publisher and their address, which have a one-to-one relationship with each other, and then map each of these classes to two different tables in the database and connect through the foreign key. If we do the design in this way, then the following structure can be considered for them:

Figure 9.4: Mapping Publisher and Address to separate tables

In Figure 9.4, two classes, Publisher and Address, are on the left side, each of which is mapped separately to the Publisher and PublisherAddress tables. The problem with this design is that we must use Join whenever we need to fetch and display the publisher's address next to their name. PublisherId also needs to be repeated in the Publisher and PublisherAddress classes, which leads to unnecessary duplicate data. The following code shows the sample query required to retrieve the name and address of the publisher with publisherId=1 using SQL:

SELECT p.Name, a.Country, a.Province, a.City, a.Street
FROM Publisher AS p
INNER JOIN PublisherAddress AS a ON p.publisherId = a.publisherId
WHERE a.publisherId = 1

The preceding design and implementation are functionally correct; the problem is performance. The embedded value design pattern helps solve this performance problem by mapping one object to several columns of a table that belongs to another object. In fact, by using this design pattern, the preceding model changes as follows:

Figure 9.5: Mapping Publisher and Address to one table

In Figure 9.5, two classes, Publisher and Address, are mapped to one table. Now, if we want to rewrite the previously presented query for this model, we will reach the following SQL code:

SELECT Name, Country, Province, City, Street
FROM Publisher
WHERE PublisherId = 1

The preceding query will perform better than the previous query due to no need to use JOIN.

With these explanations, the proposed requirements can be implemented as follows:

public class Publisher
{
    public int PublisherId { get; set; }
    public string Name { get; set; }
    public Address AddressInfo { get; set; }
    public async Task<Publisher> FindAsync(int publisherId)
    {
        var reader = await new SqlCommand(
            "SELECT * " +
            "FROM Publisher " +
            $"WHERE PublisherId = {publisherId}", DB.Connection).ExecuteReaderAsync();
        reader.Read();
        Publisher result = new()
        {
            PublisherId = (int)reader["PublisherId"],
            Name = (string)reader["Name"],
            AddressInfo = new Address
            {
                Country = (string)reader["Country"],
                Province = (string)reader["Province"],
                City = (string)reader["City"],
                Street = (string)reader["Street"]
            }
        };
        return result;
    }
}
public class Address
{
    public string Country { get; set; }
    public string Province { get; set; }
    public string City { get; set; }
    public string Street { get; set; }
}

As it is clear in the preceding code, the Address class does not have any method or code to communicate with the database, and all the communication of this class with the database happens through its owner, that is, the Publisher class. In other words, AddressInfo in the Publisher class is a value object. As it is evident in the Address class, this value object has no identity, and its identity finds its meaning through the owner. This is precisely why the Address class is not mapped to a separate table but only to a series of special columns of its owner object.
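Conversely, when saving, the owner flattens the value object into its own columns. The sketch below shows that column mapping as a dictionary rather than real data access (ToRow is an illustrative helper, not part of the book's code), mirroring the single-table layout of Figure 9.5:

```csharp
using System.Collections.Generic;

public class Address
{
    public string Country = ""; public string Province = "";
    public string City = ""; public string Street = "";
}

public class Publisher
{
    public int PublisherId; public string Name = "";
    public Address AddressInfo = new();

    // Flattens the embedded Address into the Publisher row's columns;
    // the value object contributes columns, not a table of its own.
    public Dictionary<string, object> ToRow() => new()
    {
        ["PublisherId"] = PublisherId,
        ["Name"] = Name,
        ["Country"] = AddressInfo.Country,
        ["Province"] = AddressInfo.Province,
        ["City"] = AddressInfo.City,
        ["Street"] = AddressInfo.Street
    };
}
```

A real implementation would feed these column/value pairs into a parameterized INSERT or UPDATE against the single Publisher table.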

Notes:

Understanding when two objects can be stored in the embedded value format is a point to consider. The key is to pay attention to the data retrieval and storage processes: is the dependent element ever needed outside the owner's domain? If it is not, and its data is fetched whenever the owner's data is fetched, then the two should probably be stored in the same table (probably, because this decision is subject to other conditions as well).
This design pattern is usually used for one-to-one relationships.
This design pattern is a special mode of the dependent mapping design pattern.
Consequences: Advantages

Using this design pattern, you can easily work with value objects.
Consequences: Disadvantages

This design pattern is only suitable for simple dependent elements. If the dependent elements are complex, this design pattern will not be suitable, and the serialized LOB design pattern should be used.
Applicability:

When we face a table with many columns but, in terms of modeling, need to divide it into different models and place methods specific to each class, this design pattern can be useful.
Related patterns:

Some of the following design patterns are not related to embedded value design patterns, but to implement this design pattern, checking the following design patterns will be useful:

Dependent mapping
Serialized LOB
Serialized LOB
Name:

Serialized LOB

Classification:

Object-relational structures design patterns

Also known as:

---

Intent:

Using this design pattern, a graph of objects can be converted into a Large Object (LOB) and stored in a database field.

Motivation, Structure, Implementation, and Sample code:

In the continuation of the previous example for the embedded value design pattern, suppose a requirement is raised and, in its format, it is necessary to select and save the publisher's field of activity from thematic categories. The subject category has a tree structure as follows:

General
Novel
Science fiction
Motivational
Academic
Computer field
Software
Hardware
Industries
Law
International law
English language
Children
Teenagers
Adults
For example, for publisher X, we should be able to create and store the following tree:

Academic
Computer field
Software
English language
Children
Teenagers
Adults
There are different ways to implement this scenario, but one of the simplest ways is to convert each publication tree into a LOB and store the LOB result in a table field. In other words, the following code can be imagined to implement the above requirement:

public class Publisher
{
    public int PublisherId { get; set; }
    public string Name { get; set; }
    public ICollection<Category> Categories { get; set; }
}

public class Category
{
    public string Title { get; set; }
    public ICollection<Category> SubSet { get; set; }
}

As you can see, the Publisher class has a property called Categories, where the LOB result is supposed to be stored. The structure of the Publisher table in the database is as follows:

CREATE TABLE Publisher(
    PublisherId INT PRIMARY KEY IDENTITY(1,1),
    Name NVARCHAR(200) NOT NULL,
    Categories NVARCHAR(MAX)
)

As the table structure shows, the Categories column in the Publisher table has a string data type, so the Categories property in the Publisher class must somehow be converted into a string. There is no requirement for this column to be a string, though; it could also be binary. With this explanation, there are different methods to generate a LOB, which are:

Binary LOB (BLOB): The object is converted into binary content and stored. The advantage of this method is the simplicity of production and storage and its small volume. The drawback of this method is that the result is not human-readable (of course, this may be an advantage in terms of security).
Character LOB (CLOB): The object is converted into a string and stored. The advantage of this method is that the result is human-readable, and it is also possible to run queries on them in the database without the need to convert them into objects again. The drawback of this method is that the content is bulky compared to the BLOB method.
After choosing a method to convert the object to LOB, the mechanisms for converting an object to LOB and LOB to an object must be implemented. We assume we want to use the CLOB method in the above example. Therefore, the following methods can be considered to insert or convert LOB into the expected object:

public async Task<bool> AddAsync(Publisher publisher)
{
    return (await new SqlCommand($"" +
        $"INSERT INTO Publisher " +
        $"(Name, Categories) VALUES " +
        $"(N'{publisher.Name}', " +
        $"N'{JsonConvert.SerializeObject(publisher.Categories)}')",
        DB.Connection).ExecuteNonQueryAsync()) > 0;
}

public ICollection<Category> GetCategories(string serializedCategories)
    => JsonConvert.DeserializeObject<ICollection<Category>>(serializedCategories);

As shown in the AddAsync method, when inserting publisher information, the information related to Categories is converted into a string with a JSON structure and then saved. The GetCategories method converts the CLOB stored in the table back into the expected object. The Newtonsoft Json.NET1 library was used for these conversions.

Notes:

Sometimes it may be necessary to place the LOB in another table instead of the main table. This dependent table usually has two columns, one for ID and another for LOB.
In this design pattern, you should always be sure that the LOB content will be available only through its owner.
Suppose there is a separate database or data warehouse for reporting. In that case, the content of the LOB can be converted into a suitable table in the destination data source so that we do not face performance problems in reporting.
By using compression methods, it is possible to reduce the space consumption of CLOB. In this case, the effect of the most important drawback of this method is slightly reduced.
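For instance, a CLOB can be compressed before storage with .NET's built-in GZipStream. The following is only a sketch of this idea; the class name is illustrative, and the Categories column would then need a binary type such as VARBINARY(MAX) instead of NVARCHAR(MAX):

```csharp
using System.IO;
using System.IO.Compression;
using System.Text;

public static class ClobCompressor
{
    // Compresses the JSON string before it is written to the database
    public static byte[] Compress(string clob)
    {
        using var output = new MemoryStream();
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            byte[] bytes = Encoding.UTF8.GetBytes(clob);
            gzip.Write(bytes, 0, bytes.Length);
        }
        return output.ToArray();
    }

    // Restores the original string after reading it back from the database
    public static string Decompress(byte[] compressed)
    {
        using var input = new MemoryStream(compressed);
        using var gzip = new GZipStream(input, CompressionMode.Decompress);
        using var reader = new StreamReader(gzip, Encoding.UTF8);
        return reader.ReadToEnd();
    }
}
```

The trade-off is that the stored content is no longer human-readable or queryable on the database side, which removes the main advantage of the CLOB method.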
Consequences: Advantages

Converting the less used part of the object to LOB makes it possible to save the used space, improving efficiency.
Consequences: Disadvantages

If data outside the LOB holds a reference to data inside the LOB, this design pattern can cause problems for the program, including performance problems.
Some database management systems do not allow queries in XML or JSON formats; therefore, it is difficult to work with this data type on the database side. (Microsoft SQL Server allows writing queries on both XML and JSON).
Applicability:

When we are faced with an object where some of its data are not used much, then by using this design pattern, we can convert that part of the object into a LOB and store it in a field.
Related patterns:

Some of the following design patterns are not related to serialized LOB design patterns, but to implement this design pattern, checking the following design patterns will be useful:

Embedded Value
Single table inheritance
Name:

Single table Inheritance

Classification:

Object-relational structures design patterns

Also known as:

Table per hierarchy or Discriminated mapping

Intent:

Using this design pattern, you can map the classes that participated in the inheritance structure to one table.

Motivation, Structure, Implementation, and Sample code:

Following the example in the previous sections, suppose that authors collaborate with publishers in two formats: some authors work hourly, and others work with the publisher for a fixed monthly salary. Now a requirement has been raised, and we want to store the authors' financial information as well. The design that can be considered for this requirement is shown in the following class diagram:

[Figure 9.6]
Figure 9.6: Relation between the Author and his/her collaboration format

According to the Figure 9.6 class diagram, authors are of two types: authors who are paid monthly, whose payment information is in the MonthlyPaidAuthor class, and authors who are paid hourly, whose working-hours information is in the HourlyPaidAuthor class. The Figure 9.6 class diagram can be implemented as follows:

public abstract class Author
{
    public int AuthorId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class HourlyPaidAuthor : Author
{
    public int HourlyPaid { get; set; }
    public int HoursWorked { get; set; }
}

public class MonthlyPaidAuthor : Author
{
    public int Salary { get; set; }
}

As the preceding code shows, the HourlyPaidAuthor and MonthlyPaidAuthor classes inherit from the Author class. Now, this inheritance structure can be mapped to a table in the database so that the authors' data can be retrieved and worked with without using JOIN. Therefore, for the whole preceding structure, the following table can be considered:

CREATE TABLE Author(
    AuthorId INT PRIMARY KEY IDENTITY(1,1),
    FirstName NVARCHAR(200) NOT NULL,
    LastName NVARCHAR(200) NOT NULL,
    HourlyPaid INT,
    HoursWorked INT,
    Salary INT
)

As shown in the preceding T-SQL code, all the properties in the inheritance structure are presented as a table in the database. The important point is that there should be a way to distinguish between the records related to HourlyPaidAuthor and MonthlyPaidAuthor classes. To create this feature, we need to use a separate column. This column contains information about the corresponding class, such as the class name or a unique code for each class. This column is called Discriminator. Therefore, with these explanations, the structure of the table changes as follows:

CREATE TABLE Author(
    AuthorId INT PRIMARY KEY IDENTITY(1,1),
    FirstName NVARCHAR(200) NOT NULL,
    LastName NVARCHAR(200) NOT NULL,
    HourlyPaid INT,
    HoursWorked INT,
    Salary INT,
    Type VARCHAR(100) NOT NULL
)

In the preceding table, the Type column plays the role of Discriminator. When the information of authors with monthly income is placed in this table, then the values of HourlyPaid and HoursWorked columns will be empty. The same rule is also true for records related to hourly paid authors, and for them, the value of the Salary column will be empty.

The following code shows how to retrieve records for each class:

public abstract class Author
{
    private readonly string discriminator;
    protected Author(string discriminator) => this.discriminator = discriminator;

    public int AuthorId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    protected async Task<SqlDataReader> GetAllAsync()
        => await new SqlCommand($"" +
            $"SELECT * " +
            $"FROM Author " +
            $"WHERE Type = N'{discriminator}'",
            DB.Connection).ExecuteReaderAsync();
}

public class HourlyPaidAuthor : Author
{
    public HourlyPaidAuthor() : base(nameof(HourlyPaidAuthor)) { }

    public int HourlyPaid { get; set; }
    public int HoursWorked { get; set; }

    public async Task<List<HourlyPaidAuthor>> ReadAllAsync()
    {
        var result = new List<HourlyPaidAuthor>();
        var reader = await base.GetAllAsync();
        while (reader.Read())
            result.Add(new HourlyPaidAuthor
            {
                AuthorId = (int)reader["AuthorId"],
                FirstName = (string)reader["FirstName"],
                LastName = (string)reader["LastName"],
                HourlyPaid = (int)reader["HourlyPaid"],
                HoursWorked = (int)reader["HoursWorked"]
            });
        return result;
    }
}

public class MonthlyPaidAuthor : Author
{
    public MonthlyPaidAuthor() : base(nameof(MonthlyPaidAuthor)) { }

    public int Salary { get; set; }

    public async Task<List<MonthlyPaidAuthor>> ReadAllAsync()
    {
        var result = new List<MonthlyPaidAuthor>();
        var reader = await base.GetAllAsync();
        while (reader.Read())
            result.Add(new MonthlyPaidAuthor
            {
                AuthorId = (int)reader["AuthorId"],
                FirstName = (string)reader["FirstName"],
                LastName = (string)reader["LastName"],
                Salary = (int)reader["Salary"]
            });
        return result;
    }
}

As specified in the GetAllAsync method in the Author class, the discriminator filter is applied when sending a request to retrieve records. To implement this section, you may take help from other design patterns, but here we have tried to display the code in the simplest possible way. Also, in the preceding code, the Author class is defined abstractly.

Notes:

Discriminator content can be either a class name or a unique code. If the class name is used, its advantage is the ease of use, and its drawback is the large space that the string content occupies in the database. Also, if the code is used, the advantage is that it takes up less space, and its drawback is the need for a converter to find out which class should be instantiated for each code.
When inserting or retrieving records, records related to a specific class can be accessed by providing the Discriminator value.
To simplify the implementation, the inheritance mapper design pattern can be used.
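If a unique code is used as the discriminator, the converter mentioned in the first note can be as simple as a lookup table. The following is only a sketch; the codes "HP" and "MP" are illustrative and not part of the schema above:

```csharp
// Hypothetical converter from short discriminator codes to concrete authors
public static class AuthorFactory
{
    private static readonly Dictionary<string, Func<Author>> CodeToAuthor = new()
    {
        ["HP"] = () => new HourlyPaidAuthor(),
        ["MP"] = () => new MonthlyPaidAuthor()
    };

    // Instantiates the class registered for the given discriminator code
    public static Author Create(string discriminatorCode)
        => CodeToAuthor[discriminatorCode]();
}
```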
Consequences: Advantages

Using this design pattern, we only deal with one table in the database, simplifying the work in the database.
Since different data are in the same table, there is no need to use Join to retrieve and read records.
It is possible to refactor the code without changing the structure of the database. You can move various features in the inheritance tree without changing the database structure.
Consequences: Disadvantages

All the fields in this type of structure are not necessarily related to each other, and this can increase the complexity when using the table.
Using this design pattern, some columns in the table will have empty content, which can negatively impact performance and used space. Of course, many database management systems today provide different methods for compressing empty spaces.
In this case, we will face a very large and bulky table. Increasing the volume and size of the table means the presence of different indexes and the placement of multiple locks on the table, which can have a negative effect on performance.
Applicability:

When we face an inheritance structure and want to store the data related to the entire structure in a table, we can use this design pattern.
Related patterns:

Some of the following design patterns are not related to the Single Table Inheritance design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Inheritance mapper
Class table inheritance
Concrete table inheritance
Class table inheritance
Name:

Class table inheritance

Classification:

Object-relational structures design patterns

Also known as:

Root-leaf mapping

Intent:

Using this design pattern, in an inheritance structure, each class is mapped to a separate table in the database.

Motivation, Structure, Implementation, and Sample code:

Suppose that in the example provided in the single table inheritance section, we want to put the information of each class in its own table: the authors' general information in the authors table, the financial information of hourly-paid authors in the hourlyPaidAuthor table, and that of monthly-paid authors in the monthlyPaidAuthor table.

With these explanations, the codes presented in the single table inheritance design pattern can be rewritten as follows:

public class Author
{
    public int AuthorId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public async Task<SqlDataReader> GetAllAsync() => await new SqlCommand($"" +
        $"SELECT * " +
        $"FROM authors", DB.Connection).ExecuteReaderAsync();
}

public class HourlyPaidAuthor : Author
{
    public int HourlyPaid { get; set; }
    public int HoursWorked { get; set; }

    public new async Task<SqlDataReader> GetAllAsync() => await new SqlCommand($"" +
        $"SELECT * " +
        $"FROM authors AS a " +
        $"INNER JOIN hourlyPaidAuthor AS h " +
        $"ON a.AuthorId = h.AuthorId", DB.Connection).ExecuteReaderAsync();
}

public class MonthlyPaidAuthor : Author
{
    public int Salary { get; set; }

    public new async Task<SqlDataReader> GetAllAsync()
        => await new SqlCommand($"" +
            $"SELECT * " +
            $"FROM authors AS a " +
            $"INNER JOIN monthlyPaidAuthor AS m " +
            $"ON a.AuthorId = m.AuthorId", DB.Connection).ExecuteReaderAsync();
}

As it is clear in the preceding code, the Author class has the GetAllAsync method. By using this method, the author's general information is fetched from the authors table. If we need to retrieve the information of hourly authors, we can use the GetAllAsync method available in the HourlyPaidAuthor class. In this class, Join is required and used to retrieve the author's information.

Notes:

There is no requirement to map the entire inheritance structure with one method, and you can benefit from different mapping patterns when dealing with an inheritance structure.
The important point for this design pattern will be how to read data from the database. For this, you can send a specific query for each table, which will cause performance problems. Another method would be to use Join to read the data. In using Join, the Outer Join method may also be used depending on the conditions.
Another point that should be paid attention to is how different records in different tables are related. There are different ways for this relationship. One of these ways is to use the common value in the primary key field. Another method is that each class has its primary key value and has the foreign key of the parent table.
To simplify the implementation, the Inheritance Mapper design pattern can be used.
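For example, the shared-key approach from the note above could be sketched in T-SQL as follows (a sketch based on the table names used in this section):

```sql
-- The child table reuses the parent's key: AuthorId is both the
-- primary key and the foreign key pointing at authors.AuthorId
CREATE TABLE hourlyPaidAuthor(
    AuthorId INT PRIMARY KEY REFERENCES authors(AuthorId),
    HourlyPaid INT,
    HoursWorked INT
)
```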
Consequences: Advantages

By using this design pattern, there will be no waste of space in the database.
The connection between the domain model and the tables is clear.
Consequences: Disadvantages

To retrieve the data, it is necessary to read it using Join from several tables or send several queries.
Any refactoring that changes the location of properties in the inheritance structure will change the table structure.
Tables that are close to the root of the inheritance tree can cause problems due to frequent references.
Applicability:

When we face an inheritance structure, and we want to store the data related to each class in the inheritance structure in its table, we can use this design pattern.
Related patterns:

Some of the following design patterns are not related to the class table inheritance design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Single table inheritance
Inheritance mapper
Concrete table inheritance
Concrete table inheritance
Name:

Concrete table inheritance

Classification:

Object-relational structures design patterns

Also known as:

Leaf level inheritance

Intent:

Using this design pattern, each non-abstract class is mapped to a table in the database.

Motivation, Structure, Implementation, and Sample code:

In the example provided for the class table inheritance design pattern, suppose a requirement is raised such that we have two types of authors, the first paid hourly and the second paid monthly, and we want to put the data of each type in its own table. Regardless of type, every author shares a set of common characteristics with other authors, including first and last name. Because of this, the following structure can usually be considered:

[Figure 9.7]
Figure 9.7: Relation between the Author and his/her collaboration format

As the Figure 9.7 class diagram shows, the Author class is abstract, and the two concrete classes MonthlyPaidAuthor and HourlyPaidAuthor inherit from it. Using the concrete table inheritance design pattern, each concrete class is mapped to a table in the database. This means that in the database we will have two tables, monthlyPaidAuthor and hourlyPaidAuthor, each of which contains its own columns as well as its parent's columns. Therefore, we will have the following tables in the database:

CREATE TABLE monthlyPaidAuthor(
    AuthorId INT PRIMARY KEY IDENTITY(1,1),
    FirstName NVARCHAR(200) NOT NULL,
    LastName NVARCHAR(200) NOT NULL,
    Salary INT
)

CREATE TABLE hourlyPaidAuthor(
    AuthorId INT PRIMARY KEY IDENTITY(1,1),
    FirstName NVARCHAR(200) NOT NULL,
    LastName NVARCHAR(200) NOT NULL,
    HourlyPaid INT,
    HoursWorked INT
)

As can be seen, all the properties related to the parent class are repeated in each of the tables. With the preceding explanation, the following code can be imagined:

public abstract class Author
{
    public int AuthorId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public abstract Task<SqlDataReader> GetAllAsync();
}

public class HourlyPaidAuthor : Author
{
    public int HourlyPaid { get; set; }
    public int HoursWorked { get; set; }

    public override async Task<SqlDataReader> GetAllAsync()
        => await new SqlCommand($"" +
            $"SELECT * " +
            $"FROM hourlyPaidAuthor", DB.Connection).ExecuteReaderAsync();
}

public class MonthlyPaidAuthor : Author
{
    public int Salary { get; set; }

    public override async Task<SqlDataReader> GetAllAsync()
        => await new SqlCommand($"" +
            $"SELECT * " +
            $"FROM monthlyPaidAuthor", DB.Connection).ExecuteReaderAsync();
}

As it is clear in the preceding code, the Author class is defined abstractly and will not be mapped to any table in the database. The HourlyPaidAuthor class is inherited from the Author class to be mapped to the hourlyPaidAuthor table in the database. Also, this class has the implementation of the GetAllAsync method, and in this way, it reads the records related to its table. The implementation of insert, edit and delete methods will be the same.

Notes:

There is no requirement to map the entire inheritance structure with one method, and you can benefit from different mapping patterns when dealing with an inheritance structure.
To simplify the implementation, you can use the inheritance mapper design pattern.
Using this design pattern, you should consider the primary key's value. Usually, the primary key column is in an abstract class. If the code for generating the primary key is also in that class, duplicate values may be generated for the primary key.
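One way to avoid duplicate primary key values across the two tables is to draw both keys from a single shared sequence. A T-SQL sketch (the sequence name is an assumption; the second table would be defined the same way):

```sql
-- Both tables take their keys from the same sequence, so
-- monthlyPaidAuthor and hourlyPaidAuthor never issue the same AuthorId
CREATE SEQUENCE AuthorIdSeq AS INT START WITH 1 INCREMENT BY 1;

CREATE TABLE monthlyPaidAuthor(
    AuthorId INT PRIMARY KEY DEFAULT (NEXT VALUE FOR AuthorIdSeq),
    FirstName NVARCHAR(200) NOT NULL,
    LastName NVARCHAR(200) NOT NULL,
    Salary INT
)
```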
Consequences: Advantages

Each table contains all its related data, and there is no need for additional columns.
There is no need to use Join to read the table data, since all the parent class's properties are repeated in the child classes and, consequently, in their tables.
Consequences: Disadvantages

Primary key value management can be difficult.
In the case of refactoring the codes and changing the location of the properties, it will be necessary to change the structure of the tables as well. This change is less than required in the class table inheritance design pattern and more than in the single table inheritance design pattern.
If the parent class is changed, all tables related to child classes must be changed.
Applicability:

When we face an inheritance structure and want to store the data related to each concrete class in its own table, we can use this design pattern.
Related patterns:

Some of the following design patterns are not related to concrete table inheritance design patterns, but to implement this design pattern, checking the following design patterns will be useful:

Inheritance mapper
Class table inheritance
Single table inheritance
Inheritance mappers
Name:

Inheritance mappers

Classification:

Object-relational structures design patterns

Also known as:

---

Intent:

Using this design pattern, the different mappers involved in an inheritance structure can be organized.

Motivation, Structure, Implementation, and Sample code:

In the examples presented for the different methods of mapping an inheritance structure to tables, it is often necessary to use a structure that prevents duplicate code. There is also a need for a class that is responsible for loading and storing the records related to a domain class. All of this can be done with the help of the inheritance mappers design pattern. Following the example provided for authors, suppose we want to implement fetch, insert, delete, and edit operations for each type of author. Consider the example of the concrete table inheritance design pattern. According to that example, we have an abstract class called Author and two concrete classes called HourlyPaidAuthor and MonthlyPaidAuthor.

Usually, we have a mapper for each model. The following class diagram can be considered for mappers:

[Figure 9.8]
Figure 9.8: Relation between AuthorMapper and different collaboration format mappers

As seen in the Figure 9.8 class diagram, the AuthorMapper class is abstract. We must use this model to implement fetching, inserting, editing, and deleting operations.

For retrieval, fetching records from the database results in the corresponding method returning one or a set of concrete objects. Given the nature of this operation, the fetching logic must be placed in each of the concrete mappers and cannot be placed in the parent's abstract mapper. Therefore, the following code for HourlyPaidAuthorMapper can be imagined:

public abstract class AuthorMapper { }

public class HourlyPaidAuthorMapper : AuthorMapper
{
    protected HourlyPaidAuthor Load(IDataReader reader)
    {
        return new HourlyPaidAuthor()
        {
            AuthorId = (int)reader["AuthorId"],
            FirstName = (string)reader["FirstName"],
            LastName = (string)reader["LastName"],
            HourlyPaid = (int)reader["HourlyPaid"],
            HoursWorked = (int)reader["HoursWorked"]
        };
    }

    public async Task<List<HourlyPaidAuthor>> GetAllAsync()
    {
        var result = new List<HourlyPaidAuthor>();
        var reader = await new SqlCommand($"" +
            $"SELECT * " +
            $"FROM hourlyPaidAuthor", DB.Connection).ExecuteReaderAsync();
        while (reader.Read())
            result.Add(Load(reader));
        return result;
    }
}

The implementation of these methods for the MonthlyPaidAuthorMapper class is the same. The mechanism for inserting or editing can be different, and you can get help from the AuthorMapper class, for example, as follows:

public abstract class AuthorMapper
{
    private readonly string tableName;
    public AuthorMapper(string tableName) => this.tableName = tableName;

    protected string GetUpdateStatementById(Author author)
    {
        return $"UPDATE {tableName} " +
            $"SET FirstName = N'{author.FirstName}', " +
            $"LastName = N'{author.LastName}', " +
            $"#UpdateToken# " +
            $"WHERE AuthorId = {author.AuthorId}";
    }

    protected async Task<bool> SaveAsync(string query)
        => await new SqlCommand(query, DB.Connection).ExecuteNonQueryAsync() > 0;
}

public class HourlyPaidAuthorMapper : AuthorMapper
{
    public HourlyPaidAuthorMapper() : base("hourlyPaidAuthor") { }

    public async Task<bool> UpdateAsync(HourlyPaidAuthor obj)
        => await SaveAsync(base.GetUpdateStatementById(obj)
            .Replace("#UpdateToken#", $"HoursWorked = {obj.HoursWorked}"));
}

For the sake of simplicity, the codes related to data retrieval have been removed. As can be seen in the preceding code, the UpdateAsync method in the HourlyPaidAuthorMapper class performs editing operations with the help of the GetUpdateStatementById and SaveAsync methods defined in the parent mapper. The insertion mechanism is also similar to this mechanism. The deletion process is also like this mechanism, which can be seen in the following code:

public abstract class AuthorMapper
{
    private readonly string tableName;
    public AuthorMapper(string tableName) => this.tableName = tableName;

    public string GetDeleteStatementById(int authorId)
        => $"DELETE FROM {tableName} WHERE AuthorId = {authorId}";

    protected async Task<bool> SaveAsync(string query)
        => await new SqlCommand(query, DB.Connection).ExecuteNonQueryAsync() > 0;

    public virtual async Task<bool> DeleteAsync(int authorId)
        => await SaveAsync(GetDeleteStatementById(authorId));
}

public class HourlyPaidAuthorMapper : AuthorMapper
{
    public HourlyPaidAuthorMapper() : base("hourlyPaidAuthor") { }
}

The codes related to editing and data retrieval have been removed for simplicity. As seen in the preceding code, the AuthorMapper class has implemented data deletion logic, and child mappers can use these methods.

Notes:

This design pattern is usually used alongside single, class, or concrete table inheritance design patterns.
Consequences: Advantages

It removes duplicate codes and thus increases the maintainability of the code.
Consequences: Disadvantages

For small scenarios, it can increase the complexity.
Applicability:

This design pattern can be useful when a database mapping based on inheritance exists.
Related patterns:

Some of the following design patterns are not related to inheritance mappers' design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Single table inheritance
Class table inheritance
Concrete table inheritance
Conclusion
In this chapter, you got acquainted with various methods for data mapping and mapping the structure of classes and tables in databases. Also, in this chapter, you learned how to store and retrieve Value Objects with a proper table structure.

In the next chapter, you will learn about Object-relational metadata mapping patterns.

1 Newtonsoft Json.NET is a popular high-performance JSON framework for .NET


Chapter 8
Object-Relational Behaviors Design Patterns
Introduction
To organize the object-relational behaviors, the design patterns of this category can be divided into the following three main sections:

Identity map: Tries to provide a method to fetch each record from the database only once.
Lazy load: Tries to load an object's data when needed.
Unit of work: Tries to send business transactions to the database as a single transaction.
The choice between these three design patterns depends on the level of logical complexity that we want to implement.

Structure
In this chapter, we will cover the following topics:

Object-relational behaviors design patterns
Unit of work
Identity map
Lazy load
Objectives
In this chapter, you will be introduced to object-relational behaviors design patterns and learn how to manage business transactions properly. You will also learn how to improve performance and efficiency by reducing references to the data source or when necessary.

Object-relational behaviors design patterns
Chapter 7, Data Source Architectural Patterns, explained how different objects can be linked to tables in the database. In this chapter, we face another challenge: paying attention to behaviors. Behaviors mean how the data should be fetched from the database or how it should be stored in it. For example, suppose a lot of data is fetched from the database and some of it has changed. It will be very important to answer the question of which data has changed and how to store the changes back in the database without disturbing data consistency.

Ensuring that the data has not been read and changed by someone else is a matter for concurrency-management design patterns. Managing and applying changes is something the unit of work design pattern can help with. When using the unit of work, to improve efficiency, we need to ensure that data that has already been read is not re-read. This problem can be solved with the identity map design pattern.
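As a first intuition for the identity map, the idea is a lookup keyed by record identity. The following is only a sketch; the class and method names are illustrative:

```csharp
// A minimal identity map sketch: each Publisher is fetched from the
// database at most once and the same object is reused afterwards.
public class PublisherIdentityMap
{
    private readonly Dictionary<int, Publisher> loaded = new();

    public Publisher Find(int publisherId)
    {
        if (loaded.TryGetValue(publisherId, out var cached))
            return cached;                  // already read: no database call
        var publisher = LoadFromDatabase(publisherId);
        loaded[publisherId] = publisher;
        return publisher;
    }

    // Placeholder for the actual SELECT against the Publisher table
    private Publisher LoadFromDatabase(int publisherId) => new Publisher();
}
```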

When the domain model is used, most models connect with other models. Reading a model will lead to retrieving all its relationships, again jeopardizing efficiency. To solve this problem, you can use the lazy load design pattern.

Unit of work
Name:

Unit of work

Classification:

Object-relational behaviors design patterns

Also known as:

---

Intent:

This design pattern tries to send business transactions to the database as a single transaction.

Motivation, Structure, Implementation, and Sample code:

Suppose we are implementing an infrastructure in which we want to manage the projects and the people involved in an organization's projects. The project and team member information of each project will be stored in separate tables in the database. There are different ways to implement this scenario. A simple way is to send each business request directly to the database and manage the transaction by defining it at the database level. This method has a problem: it requires sending a series of small requests to the database. In a system with a high volume of requests and users, these small requests will become the root of a big problem.

Another way is to manage these business requests at the application level and, after completing the business transaction process, send the entire business transaction to the database in the form of a single transaction. The unit of work design pattern can be useful in achieving this design.

Suppose we are dealing with only one model, named Project. The events that happen to this model are the creation of a new project, the deletion or editing of an existing project, and the reading of a project. Under this assumption, to track changes at the application level, you can simply define several collections or arrays to record the changes and send them to the database at the right time.

There are different ways to implement the unit of work pattern. One of the most common methods is to combine the implementation of this design pattern with the repository design pattern. For example, suppose we have two repositories named UserRepository and BankAccountRepository. These two repositories have their own DbContext object and are connected to a common database. Figure 8.1 shows this relationship:

Figure 8.1: User and Bank account repositories with separated DbContexts

In this case, because UserRepository and BankAccountRepository have different DbContext objects, their work is sent to the database as two transactions (two units of work). To solve this problem, the two repositories must use a shared DbContext object. Figure 8.2 shows this arrangement:

Figure 8.2: User and Bank account repositories with shared DbContext

As shown in Figure 8.2, the two repositories UserRepository and BankAccountRepository use a shared DbContext object, so the work of both repositories can be sent to the database as one transaction (one unit of work). As stated in the repository design pattern, the generic repository can also be used to implement this design pattern. Next, we will implement the unit of work design pattern with a generic repository.

According to the preceding explanations, the class diagram of this model in the presence of the generic repository model can be considered as follows:

Figure 8.3: UnitofWork with generic repository UML diagram

As can be seen in the Figure 8.3 class diagram, the GenericRepository class is defined generically and implements the IRepository interface. The UserRepository class can also inherit from GenericRepository and override some of its implementations (we have not implemented this part and assume that the behavior of UserRepository matches GenericRepository). The UnitOfWork class implements the IUnitOfWork interface and uses the IRepository interface internally, because all access to repositories happens through the UnitOfWork object. Refer to the following code for this structure:

public interface IUnitOfWork
{
    void Commit();
}

public class UnitOfWork : IUnitOfWork
{
    private SampleDbContext _context = new();
    private IRepository<User, int> _userRepository;
    private IRepository<BankAccount, int> _bankAccountRepository;
    public IRepository<User, int> UserRepository
    {
        get
        {
            if (_userRepository == null)
                _userRepository = new GenericRepository<User, int>(_context);
            return _userRepository;
        }
    }
    public IRepository<BankAccount, int> BankAccountRepository
    {
        get
        {
            if (_bankAccountRepository == null)
                _bankAccountRepository =
                    new GenericRepository<BankAccount, int>(_context);
            return _bankAccountRepository;
        }
    }
    public void Commit() => _context.SaveChanges();
}

As you can see in the preceding code, when a repository object is created, a DbContext object is passed to it, which makes both repositories share the same DbContext object. Also, for the preceding code, the generic repository design pattern is implemented as follows:

public interface IRepository<TEntity, TKey> where TEntity : class
{
    TEntity Find(TKey id);
    List<TEntity> GetAll();
    void Add(TEntity entity);
    void Update(TEntity entity);
    void Delete(TKey id);
}

public class GenericRepository<TEntity, TKey> : IRepository<TEntity, TKey>
    where TEntity : class
{
    internal SampleDbContext _context;
    internal DbSet<TEntity> _dbSet;
    public GenericRepository(SampleDbContext context)
    {
        _context = context;
        _dbSet = context.Set<TEntity>();
    }
    public virtual List<TEntity> GetAll() => _dbSet.ToList();
    public virtual TEntity Find(TKey id) => _dbSet.Find(id);
    public virtual void Add(TEntity entity) => _dbSet.Add(entity);
    public virtual void Delete(TKey id) => _dbSet.Remove(_dbSet.Find(id));
    public virtual void Update(TEntity entityToUpdate)
    {
        _context.Entry(entityToUpdate).State = EntityState.Modified;
    }
}

Remember that this implementation of the repository and unit of work patterns can be improved; it is presented in its simplest form only to convey the idea. Now, to use this structure, you can proceed as follows:

IUnitOfWork unitOfWork = new UnitOfWork();
unitOfWork.UserRepository.Add(new User { Id = 1, Name = "Ahmad" });
unitOfWork.BankAccountRepository.Add(
    new BankAccount { Id = 101, AccountNumber = 12345 });
unitOfWork.Commit();

In the preceding code, a UnitOfWork object is created. A new user is added through the UserRepository, and a bank account is defined through the BankAccountRepository. Finally, the changes are sent to the database as one transaction.
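Outside Entity Framework, the core idea can be shown with a small, database-free sketch (the class and names below are hypothetical, not part of the book's sample): pending changes are buffered in memory and reach the backing store only when Commit is called.

```csharp
using System;
using System.Collections.Generic;

// A minimal, database-free sketch of the unit of work idea:
// changes are queued, and nothing reaches the "store" until Commit.
var store = new List<string>();                 // stands in for the database
var uow = new InMemoryUnitOfWork(store);
uow.Register("INSERT User(1, Ahmad)");
uow.Register("INSERT BankAccount(101, 12345)");
Console.WriteLine(store.Count);                 // 0 - nothing applied yet
uow.Commit();
Console.WriteLine(store.Count);                 // 2 - both applied as one unit

public class InMemoryUnitOfWork
{
    private readonly List<string> _pending = new();
    private readonly List<string> _store;
    public InMemoryUnitOfWork(List<string> store) => _store = store;
    // Caller registration: the caller explicitly registers each change.
    public void Register(string change) => _pending.Add(change);
    public void Commit()
    {
        // All pending changes are applied together; on failure, none would be.
        _store.AddRange(_pending);
        _pending.Clear();
    }
}
```

A real implementation would wrap Commit in a database transaction, but the buffering behavior is the same.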

Notes:

To track changes using this design pattern, it will be necessary to record the changes somewhere. There are two ways to do this:
Caller registration: The user must register the changes in the unit of work object for them to be applied to the database. The advantage of this method is its flexibility in choosing which changes are sent to the database. Its problem is that the user may forget to register a change for any reason.
Object registration: In this method, registration is done through methods on the target object. Usually, when a retrieval operation is performed, the object is registered as a clean version; as soon as a change occurs on the object, a dirty version is formed. Despite the overhead, keeping two versions of the object significantly helps in detecting changes.
This design pattern is usually used alongside the repository design pattern.
The significant point is that Entity Framework has already implemented this change-tracking process and the unit of work design pattern in the SaveChanges method. The question then arises whether, given this feature, the pattern still needs to be implemented in our systems. As with the repository design pattern, the answer is debatable, but implementing a unit of work can still be very useful.
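The object registration approach described above can be sketched as follows (hypothetical names; a real implementation would also track new and deleted objects): the property setter marks the object dirty in a change tracker.

```csharp
using System;
using System.Collections.Generic;

var tracker = new ChangeTracker();
var product = new TrackedProduct(tracker);
tracker.RegisterClean(product);          // registered clean after retrieval
Console.WriteLine(tracker.DirtyCount);   // 0
product.Name = "New name";               // the setter marks the object dirty
Console.WriteLine(tracker.DirtyCount);   // 1

public class ChangeTracker
{
    private readonly HashSet<object> _clean = new();
    private readonly HashSet<object> _dirty = new();
    public void RegisterClean(object o) => _clean.Add(o);
    public void RegisterDirty(object o) { _clean.Remove(o); _dirty.Add(o); }
    public int DirtyCount => _dirty.Count;
}

public class TrackedProduct
{
    private readonly ChangeTracker _tracker;
    private string _name = "";
    public TrackedProduct(ChangeTracker tracker) => _tracker = tracker;
    public string Name
    {
        get => _name;
        // Object registration: the object reports its own change.
        set { _name = value; _tracker.RegisterDirty(this); }
    }
}
```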
Consequences: Advantages

Provides better control for transaction management.
Due to the reduction in the number of visits to the database, it provides better efficiency for batch operations.
The flexibility and testability of the code will be improved, and unit tests can be written and executed easily using mocking mechanisms.
With a unit of work, there will be no need to use classes like DbContext directly in the business layers. It enables loose coupling between the business layer and the framework used in the data access layer, so changes can be made in the data access layer without changing the business layer (for example, Entity Framework can be replaced with another ORM).
Consequences: Disadvantages

For simple business logic and data access, using this design pattern often adds complexity, because most ORMs today already offer the features this pattern provides. Using this design pattern is worthwhile when a feature beyond what the ORM provides must be added.
Applicability:

This design pattern can be useful when there is a need to separate the communication of the business layer from the data layer and optimize the number of communication times with the database.
Using this design pattern in Domain-Driven Design (DDD) is very practical.
This design pattern will be useful when we face a series of requests and want to process them in one transaction.
Related patterns:

Some of the following design patterns are not related to the unit of work design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Repository
Identity map
Identity map
Name:

Identity map

Classification:

Object-relational behaviors design patterns

Also known as:

---

Intent:

This design pattern tries to provide a method to fetch each record from the database only once. For this purpose, the records fetched for the first time are kept in a mapping set to be retrieved from this set whenever necessary.

Motivation, Structure, Implementation, and Sample code:

For example, consider the following code:

User user1 = userService.GetByName("Vahid");
User user2 = userService.GetByName("Vahid");

In the preceding code, the data is read twice from the database and placed in two objects. In this situation, changes made to user1 have nothing to do with user2. If we then save both objects to the database, one object will overwrite the other. This carries a risk of concurrency problems. The identity map design pattern tries to solve this problem.

The identity map design pattern stores records that have been read once from the database in a set, so that when a record is needed later, it is returned from the set forming the identity map instead of being retrieved from the database again. In this way, the overall efficiency of the application is improved. To implement an identity map, you can have one mapping set for each table in the database. Also, when implementing an identity map, concurrency issues should be considered.

Among the applications of this design pattern, we can mention the ability to cache data. The following class diagram shows the identity map design pattern:

Figure 8.4: Identity Map design pattern UML diagram

The Figure 8.4 class diagram shows that the UserMap class has a mapping set named _mappings, with methods to add, delete, and read records from it. Suppose the UserMap class is consulted, for example from the repository, before a read request is sent to the database. In that case, the record is first searched for with the Get method in the UserMap mapping set. If the record is not found there, the request is sent to the database; the received response is first recorded in the mapping set using the Add method and then returned to the requester. With these explanations, the Figure 8.4 class diagram can be implemented as follows:

public class UserDbSet
{
    public static List<User> Users = new()
    {
        new User() { Id = 1, Username = "Vahid", Email = "vahid@gmail.com" },
        new User() { Id = 2, Username = "Ali", Email = "ali@yahoo.com" },
        new User() { Id = 3, Username = "Reza", Email = "reza@gmail.com" },
        new User() { Id = 4, Username = "Maral", Email = "maral@gmail.com" },
        new User() { Id = 5, Username = "Hassan", Email = "hassan@yahoo.com" }
    };
}

public class User
{
    public int Id { get; set; }
    public string Username { get; set; }
    public string Email { get; set; }
}

public class UserMap
{
    private readonly Dictionary<int, User> _mappings = new();
    public void Add(User user)
    {
        if (!_mappings.ContainsKey(user.Id))
            _mappings.Add(user.Id, user);
    }
    public User? Get(int id)
    {
        if (_mappings.ContainsKey(id))
            return _mappings[id];
        return null;
    }
    public void Remove(int key)
    {
        if (_mappings.ContainsKey(key))
            _mappings.Remove(key);
    }
}

public interface IUserRepository
{
    User Get(int id);
}

public class UserRepository : IUserRepository
{
    private readonly UserMap _usermap;
    public UserRepository() => _usermap = new UserMap();
    public User Get(int id)
    {
        var cachedUser = _usermap.Get(id);
        if (cachedUser == null)
        {
            var user = UserDbSet.Users.FirstOrDefault(x => x.Id == id);
            _usermap.Add(user);
            return user;
        }
        return cachedUser;
    }
}

As seen in the preceding code, when the Get method is executed in the UserRepository class, the mapping set in the UserMap is first checked through the Get method. If this record already exists in this mapping set, then the record is returned without referring to the database; otherwise, the record is read from the database and registered in the mapping set, and returned.

Notes:

Retrieving the same data from the database multiple times and placing it in several objects can damage its consistency, and the repeated retrieval from an external source can hurt performance.
There will be one mapping for each database table in a similar structure (database structure and model structure are the same).
Consider the following code:
var query1 = ctx.Users.Where(x => x.Name == "Vahid");
var user1 = query1.FirstOrDefault();
var user2 = query1.FirstOrDefault();
var query2 = ctx.Users.Where(x => x.Id == 1);
var user3 = query2.FirstOrDefault();
Console.WriteLine(user1.GetHashCode());
Console.WriteLine(user2.GetHashCode());
Console.WriteLine(user3.GetHashCode());

By executing the preceding code, we realize that user1, user2, and user3 are all the same object. This means that Entity Framework does the same thing as the identity map behind the scenes and does not load an object more than once.
A key will be needed to design the mapping set, so the record can be found later based on that key. The best option for the key is to choose the table’s primary key, although other combinations can also be used as keys.
This design pattern can also be implemented generically. In this case, the whole process can be implemented using one class. Key selection is important in the generic implementation, because all mapping sets must then derive the key by a fixed formula.
How and where to store the mapping set is also important in this design pattern. This collection should be stored in a way that is different for each session. For some read-only data that never change, the storage method and location will not matter, and those data can be shared between sessions.
Since concurrency management is important when using this design pattern, the optimistic offline lock and pessimistic offline lock design patterns are widely used alongside it.
Usually, using this design pattern inside the unit of work gives a better design, because the unit of work is the entry and exit point for data. If there is no unit of work, using this pattern alongside the registry design pattern can be useful.
There is no need to use this design pattern for immutable objects. The reason for this is also clear. When an object is immutable, its value will not change; when the value does not change, there is no need to worry about the anomalies caused by the change. Among the most important immutable objects are value objects.
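As mentioned in the notes above, the pattern can also be written generically. Below is a minimal assumption-based sketch in which the caller supplies the key explicitly; a real implementation might derive the key from the entity by a fixed formula.

```csharp
using System;
using System.Collections.Generic;

var map = new IdentityMap<int, string>();
map.Add(1, "Vahid");
Console.WriteLine(map.Get(1));            // Vahid
Console.WriteLine(map.Get(2) == null);    // True - not cached yet

// One class serves every table: TKey is the primary-key type and
// TEntity the mapped record type.
public class IdentityMap<TKey, TEntity> where TEntity : class
{
    private readonly Dictionary<TKey, TEntity> _mappings = new();
    public void Add(TKey key, TEntity entity)
    {
        // First load wins; later loads of the same record are ignored.
        if (!_mappings.ContainsKey(key))
            _mappings.Add(key, entity);
    }
    public TEntity? Get(TKey key)
        => _mappings.TryGetValue(key, out var entity) ? entity : null;
    public void Remove(TKey key) => _mappings.Remove(key);
}
```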
Consequences: Advantages

This design pattern can be used to implement the cache mechanism and thus reduce the number of references to the database.
Consequences: Disadvantages

This design pattern can manage collisions within one session but cannot do anything about collisions between several sessions. To solve that problem, you must use the optimistic offline lock and pessimistic offline lock patterns.
Applicability:

This design pattern can be useful when data needs to be read only once from the source.
Related patterns:

Some of the following design patterns are not related to the identity map design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Unit of work
Optimistic offline lock
Pessimistic offline lock
Registry
Lazy load
Name:

Lazy load

Classification:

Object-relational behaviors design patterns

Also known as:

---

Intent:

This design pattern tries to load an object's data only when it is needed. No data is loaded during object initialization, which can positively affect performance.

Motivation, Structure, Implementation, and Sample code:

Suppose there is a requirement whereby the customer's information is returned along with their orders. But in some places in the program, only the customer information is needed, while in others, the orders are needed too. In this case, fetching the customer's orders for every request is wasteful and can harm efficiency. An alternative is to fetch the customer's information from the database on every request but not the orders; wherever the orders are needed, they are fetched at that point. After the orders have been retrieved once, subsequent requests can be served from the same list.

The preceding method implements lazy load using the lazy initialization method. There are other methods to implement this design pattern, such as virtual proxy, value holder, and ghost, which we will learn about later.

Lazy initialization method
In Figure 8.5, you can see the class diagram of the Lazy Initialization method for the lazy load design pattern:

Figure 8.5: Lazy Initialization UML diagram

As shown in the Figure 8.5 diagram, CustomerService receives order information from ExternalSource only once and only when needed. According to the Figure 8.5 class diagram, the following code can be considered:

public class Order
{
    public int Id { get; set; }
    public double Price { get; set; }
    public int CustId { get; set; }
}

public class Customer
{
    private List<Order> _orders;
    public int Id { get; set; }
    public string Name { get; set; }
    public List<Order> Orders
    {
        get
        {
            if (_orders == null)
            {
                Console.WriteLine($"Loading orders for customer: {this.Name}");
                _orders = OrderDbSet.Orders
                    .Where(x => x.CustId == this.Id).ToList();
            }
            return _orders;
        }
    }
}

As is clear in the preceding code, the Orders property checks whether the list of orders has already been loaded; if not, it loads it. If you refer to Orders again, the list is not loaded from scratch, and the previously loaded list is returned. It can also be seen that Orders has no value when the Customer information is fetched, and its values are filled in only when needed. To use this code, you can proceed as follows:

1. Customer customer = CustomerDbSet.Customers.FirstOrDefault(x => x.Id == 1);
2. List<Order> orders = customer.Orders;
3. List<Order> orders2 = customer.Orders;

In line 1, the customer information is fetched. The orders were not fetched in this line because they were not yet needed.
In line 2, Orders is needed for the first time, so the customer's order list is fetched from the database in this line.
In line 3, where Orders is needed again, no fetching happens, and the previously loaded list is returned.
NOTE: .NET has a class called Lazy<T>, with which you can easily delay object initialization until first use. For example, consider the following code:

Lazy<Customer> lazyCustomer = new Lazy<Customer>();

According to the preceding code, no instance of Customer is created yet. To create an instance, it should be accessed through lazyCustomer.Value. Referencing lazyCustomer.Value again will return the same object as before. The Lazy<T> class also allows you to provide the initialization process as a lambda expression to its constructor:

Lazy<Customer> lazyCustomer = new Lazy<Customer>(() =>
{
    Customer obj = new Customer();
    //do something more with obj
    return obj;
});

Now, using the Lazy<T> class, you can rewrite the Customer class as follows:

public class Customer
{
    private Lazy<List<Order>> _orders;
    public int Id { get; set; }
    public string Name { get; set; }
    public Customer()
    {
        _orders = new Lazy<List<Order>>(() =>
        {
            return OrderDbSet.Orders.Where(x => x.CustId == this.Id).ToList();
        });
    }
    public List<Order> Orders => _orders.Value;
}

Virtual proxy method

In this method, the virtual proxy object has the same structure as the main object. The main object is created the first time the virtual proxy is used; thereafter, whenever the proxy is referenced, the originally created object is returned. This implementation combines the proxy design pattern with the lazy initialization method:

public interface IService
{
    public List<Order> Orders { get; }
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Customer : IService
{
    private List<Order> _orders;
    public int Id { get; set; }
    public string Name { get; set; }
    public Customer() => _orders = OrderDbSet.Orders
        .Where(x => x.CustId == this.Id).ToList();
    public List<Order> Orders => _orders;
}

public class CustomerProxy : IService
{
    private IService _service;
    private void InitIfNeeded()
    {
        if (_service == null)
            _service = new Customer();
    }
    public List<Order> Orders
    {
        get
        {
            InitIfNeeded();
            return _service.Orders;
        }
    }
    public int Id { get; set; }
    public string Name { get; set; }
}
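The deferral a virtual proxy provides can be demonstrated with a small, self-contained experiment (hypothetical classes, not from the chapter's sample): a static counter shows that the expensive object is constructed only on first access and only once.

```csharp
using System;

var proxy = new ReportProxy();
Console.WriteLine(ExpensiveReport.BuildCount);  // 0 - nothing built yet
var first = proxy.Content;                      // first access builds it
var second = proxy.Content;                     // second access reuses it
Console.WriteLine(ExpensiveReport.BuildCount);  // 1

public class ExpensiveReport
{
    public static int BuildCount;               // counts real constructions
    public string Content { get; } = "report body";
    public ExpensiveReport() => BuildCount++;
}

public class ReportProxy
{
    private ExpensiveReport _real;
    public string Content
    {
        get
        {
            _real ??= new ExpensiveReport();    // create only when needed
            return _real.Content;
        }
    }
}
```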

Ghost method
In this method, the object is loaded incompletely: only the primary key or essential information is loaded. The remaining information is loaded on the first reference to it, and that first reference causes all the remaining data to be loaded.

public class Customer
{
    private string _name;
    private List<Order> _orders;
    private bool _isOrdersLoaded;
    private bool _isloaded;
    public int Id { get; set; }
    public string Name
    {
        get
        {
            if (!_isloaded)
                Load();
            return _name;
        }
    }
    public List<Order> Orders
    {
        get
        {
            if (!_isOrdersLoaded)
                LoadOrders();
            return _orders;
        }
    }
    private void Load()
    {
        var customer = CustomerDbSet.Customers
            .FirstOrDefault(x => x.Id == this.Id);
        this._name = customer.Name;
        _isloaded = true;
    }
    private void LoadOrders()
    {
        _orders = OrderDbSet.Orders.Where(x => x.CustId == this.Id).ToList();
        _isOrdersLoaded = true;
    }
}

Value holder method
In this method, another object, called the value holder, is responsible for managing lazy loading, and this object is used inside the main object. The drawback of this method is that the user must be aware of the value holder's presence:

public class Customer
{
    private ValueHolder _valueHolder;
    public int Id { get; set; }
    public string Name { get; set; }
    public Customer() => _valueHolder = new ValueHolder(this.Id);
    public List<Order> Orders => _valueHolder.GetOrders();
    class ValueHolder
    {
        private List<Order> _orders;
        private readonly int _id;
        public ValueHolder(int id) => _id = id;
        public List<Order> GetOrders()
        {
            if (_orders == null)
                _orders = OrderDbSet.Orders.Where(x => x.CustId == _id).ToList();
            return _orders;
        }
    }
}

Notes:

This design pattern is very important when building web applications or websites. For example, loading an image or iframe only when the page has been scrolled far enough can significantly improve a web page's initial loading speed.
The opposite of lazy loading is the eager loading method. The eager method will load the required resources once the code is executed.
Consequences: Advantages

Because it is not necessary to initialize all objects up front, efficiency increases.
Using this design pattern in web-based applications will reduce overall bandwidth consumption.
Consequences: Disadvantages

Using this design pattern can increase the complexity because we always need to check whether the desired object has been loaded or not, or basically whether the desired object needs to be loaded lazily or not.
Applicability:

This design pattern can be very effective in implementing the singleton design pattern.
IQueryable and IEnumerable types support a form of lazy load called deferred execution.
Lazy load can be used in Entity Framework when we need to get data from a join of several tables.
When the complete loading of the object is not needed or is expensive, using this design pattern can be useful.
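The deferred execution mentioned above is easy to observe with plain LINQ to Objects: defining the query executes nothing, and the query runs only when it is enumerated, so it even sees elements added after its definition.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var numbers = new List<int> { 1, 2, 3 };
// Defining the query runs nothing yet - execution is deferred.
IEnumerable<int> evens = numbers.Where(n => n % 2 == 0);
numbers.Add(4);                          // added after the query was defined
Console.WriteLine(evens.Count());        // 2 - enumeration sees 2 and 4
```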
Related patterns:

Some of the following design patterns are not related to the lazy load design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Eager loading
Singleton
Proxy
Conclusion
In this chapter, you learned how to properly design and manage business transactions using the unit of work design pattern. You also learned how to prevent redundant references to data sources using an identity map design pattern. Finally, you learned how to use a lazy load design pattern to retrieve only the required data from the data source. Now that you are familiar with these design patterns in this chapter, you will be familiarized with object-relational structures design patterns in the next chapter.



Chapter 7
Data Source Architecture Design Patterns
Introduction
To organize the data source architecture, the patterns of this category can be divided into the following four main sections:

Active record: An object has the task of a gateway to a record from the database, and along with this task, it also implements the business logic related to the domain of that record.
Data mapper: A layer of mappers is placed between the domain and the database, and in this way, they eliminate the dependency on the database and the domain.
Row data gateway: An object acts as a gateway to a record in the data source, and there is a different object for each record in the data source.
Table data gateway: An object acts as a gateway to a database table or view, and communication with all the records of that table or view happens through this object.
The choice between these four design patterns depends on the level of logical complexity that we want to implement.

Structure
In this chapter, we will discuss the following topics:

Data source architecture design patterns
Table data gateway
Row data gateway
Active record
Data mapper
Objectives
In this chapter, you will get to know data source architecture design patterns and learn how to model the data source structure in your programs using different design patterns. You will also learn how to facilitate the construction and management of various SQL queries, and how to link the data source structure to the domain model using mapping mechanisms.

Data source architecture design patterns
The most important task of the data access layer is facilitating communication with different data sources so the program can do its work. Most of today's data sources are relational databases, and with the help of SQL, you can easily communicate with them and exchange data. Along with all the advantages of SQL, some problems remain. One of them is programmers' occasional lack of SQL knowledge, which leads to wrong or inefficient queries.

With the preceding explanation, putting the SQL-related code in a layer separate from the domain seems useful. The best way to implement this type of data infrastructure is to follow the structure of the database tables: one class for each table in the database, and these classes form gateways.

There are also different ways to have a gateway. One of these ways is to have a gateway object for each table record. This method is what the row data gateway is trying to express. Another way is to have one gateway object for each table. This method is also what the table data gateway is trying to express. Usually, in the table data gateway method, the result of the query sent to the database is returned as a record set. The connection and coordination between the table data gateway and record set also help to use this design pattern in the table module.

When using the domain model, due to the existing complexities, we may want each domain model to have the task of loading and changing its related data. This is where active records can come in handy.

With increasing complexity within the domain, the domain logic is broken up and divided among several classes. This causes a mismatch between the domain model and the database, because the database has no feature like inheritance. On the other hand, with increasing complexity it becomes necessary to test the domain logic without connecting to the database. All these points push us toward an indirect communication method between the domain model and the data source. In this case, completely separating the domain model from the database is the better method, but an interface will be needed to map domain objects to database tables. The data mapper design pattern is useful here.

Choosing between a data mapper and an active record is a choice that depends on the level of complexity. For simpler scenarios, the active record will be useful, and as the complexity increases, using a data mapper will be a better option.

Table data gateway
Name:

Table data gateway

Classification:

Data source architecture design patterns

Also known as:

---

Intent:

Using this design pattern, an object plays the role of a gateway to a database table or view, and communication with all the records of that table or view happens through this object.

Motivation, Structure, Implementation, and Sample code:

Suppose a requirement is raised, and we need to connect to the database and perform Create, Read, Update, Delete (CRUD) operations on the user table. For this purpose, functions such as adding a new user, changing the password, deleting a user, finding a specific user, and receiving a list of users should be implemented.

There are different ways to implement this requirement. One way is to connect to the database using raw queries and do the desired work. With this method, the placement of the queries matters. If they are placed among the application's business logic code, they create a significant problem: programmers must write the queries themselves. Many programmers do not have enough knowledge to write good queries, and even when they do, database administrators will have difficulty finding these scattered queries and improving their performance.

A better option is placing the queries outside the business logic. The table data gateway design pattern can be important in achieving this goal: all the required queries are collected and defined in a section separate from the business logic:

public class UserTableDataGateway
{
    private SqlConnection connection;
    public UserTableDataGateway()
    {
        connection = new SqlConnection("...");
    }
    public async Task<SqlDataReader> GetAllAsync()
    {
        return await new SqlCommand("" +
            "SELECT * " +
            "FROM Users", connection).ExecuteReaderAsync();
    }
    public async Task<SqlDataReader> FindByUserNameAsync(string username)
    {
        return await new SqlCommand($"" +
            $"SELECT * " +
            $"FROM Users " +
            $"WHERE UserName = N'{username}'", connection).ExecuteReaderAsync();
    }
    public async Task<bool> ChangePasswordAsync(
        string username, string newPassword)
    {
        return (await new SqlCommand($"" +
            $"UPDATE Users " +
            $"SET [Password] = N'{newPassword}' " +
            $"WHERE UserName = N'{username}'"
            , connection).ExecuteNonQueryAsync()) > 0;
    }
}

As seen in the preceding code, the GetAllAsync method is used to get the list of all users, the FindByUserNameAsync method to find a specific user based on username, and the ChangePasswordAsync method to change the password. Other operations, such as inserting or deleting a user, can be defined similarly. Note that for the sake of simplicity, the query definitions pay no attention to issues such as SQL injection or opening and closing the database connection; in a real environment these must be handled. In the business logic layer, you can use this gateway whenever you need to work with the Users table, and database administrators only have to check the gateways to monitor and improve queries.

Notes:

By using this design pattern, database queries are encapsulated in the form of a series of methods.
To implement this design pattern, you can also benefit from a DataSet. With this method, most operations will be similar, and you can also benefit from a general-purpose gateway, passing things like the table name as input parameters.
In implementing this design pattern, there is no requirement to communicate with the database tables. According to the requirement, it may be necessary to communicate with the views. If a public gateway is used for communication, it should be noted that it is often impossible to change the table or sub-tables through views1. Therefore, defining a public gateway for tables and a public gateway for views will probably be necessary. Also, to hide the structure of the tables, it is possible to benefit from the stored procedures in the database and create a gateway to communicate with these stored procedures.
The important point in using this design pattern is how to return the results to the user. For this purpose, a Data Transfer Object (DTO) or methods such as mapping can be useful. Using the mapping method is often not a good option because problems will not surface at compile time, which can lead to bugs. When using the DTO method, note that a new object must be created, so this method is worthwhile only if the created DTO can be reused. If this design pattern is used along with the domain model pattern, the results can be returned in the domain object format.
This design pattern can be very useful alongside the table module and transaction script patterns. For example, it can return the record set required by the table module, which then performs its processing on the results.
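As one of the notes above suggests, the gateway can also sit on top of a DataSet/DataTable rather than raw commands. The sketch below is illustrative only — the InMemoryUserGateway class and its column layout are assumptions, not from this chapter — but it keeps the table data gateway shape (every "query" encapsulated behind a method) while using an in-memory DataTable so it runs without a database:

```csharp
using System;
using System.Data;

// Hypothetical gateway: plays the same role as UserTableDataGateway,
// but is backed by a DataTable so the example needs no SQL Server.
public class InMemoryUserGateway
{
    private readonly DataTable _users;

    public InMemoryUserGateway()
    {
        _users = new DataTable("Users");
        _users.Columns.Add("UserName", typeof(string));
        _users.Columns.Add("Password", typeof(string));
    }

    public void Insert(string userName, string password)
        => _users.Rows.Add(userName, password);

    // Counterpart of FindByUserNameAsync: the lookup logic lives here,
    // not in the caller.
    public DataRow FindByUserName(string userName)
    {
        foreach (DataRow row in _users.Rows)
            if ((string)row["UserName"] == userName)
                return row;
        return null;
    }

    // Counterpart of ChangePasswordAsync: true when a row was updated.
    public bool ChangePassword(string userName, string newPassword)
    {
        var row = FindByUserName(userName);
        if (row == null)
            return false;
        row["Password"] = newPassword;
        return true;
    }
}
```

Note that callers never compose queries themselves; whether the storage is a DataTable or a real table, the gateway is the only place that knows how the data is reached.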
Consequences: Advantages

All queries are placed in the database separately from the business logic, making it easier to make changes to the queries.
The implementation of this design pattern is very simple.
Mapping between the gateway and database tables or views will be simple. Each table or view in the database is mapped to a gateway.
Consequences: Disadvantages

Using this design pattern next to the domain model is not appropriate; it is usually better to pair the domain model design pattern with the data mapper pattern.
As the complexity in the domain increases, this design pattern will become more complex and does not have good scalability.
Applicability:

It can be useful for implementing small applications. If the business logic of the domain is simple, this design pattern can be very useful.
This design pattern can be used effectively together with the table module design pattern.
Related patterns:

Some of the following design patterns are not related to the table data gateway design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Table module
Transaction script
Domain model
Data mapper
Data transfer object
Row data gateway

Row data gateway

Name:

Row data gateway

Classification:

Data source architecture design patterns

Also known as:

---

Intent:

Using this design pattern, an object acts as a gateway to a record in the data source, and there is a different object for each record.

Motivation, Structure, Implementation, and Sample code:

Suppose we want to rewrite the example presented in the table data gateway design pattern with a row data gateway. In the table data gateway design pattern, we put all the queries related to the Users table in one class and connect to the database through that class. This section shows the difference between the row data gateway and table data gateway patterns: in the row data gateway model, instead of one class holding all the queries related to a table, we work with each database record through its own object.

It is obvious that to find a record in the database and assign it to an object, we will need a separate class. We name this class Finder. The finder is tasked with finding and returning a record from the table based on the provided condition. In other words, the finder returns a gateway for each record. The returned gateway has two main parts:

Properties: This section is the one-to-one mapping between the properties of the class and the columns of the table or data source. For example, ID, username, password, and so on.
Behaviors: This section is the queries that need to be executed on the data source. For example: insert, edit, delete, and so on.
In the following code, we have rewritten the scenario related to users with row data gateway:

public static class UserFinder
{
    public static async Task<UserRowDataGateway> FindAsync(string username)
    {
        var reader = await new SqlCommand($"" +
            $"SELECT * " +
            $"FROM Users " +
            $"WHERE UserName = N'{username}'", DB.Connection).ExecuteReaderAsync();
        reader.Read();
        return UserRowDataGateway.Load(reader);
    }

    public static async Task<List<UserRowDataGateway>> GetAllAsync()
    {
        var gatewayList = new List<UserRowDataGateway>();
        var reader = await new SqlCommand(
            "SELECT * FROM Users", DB.Connection).ExecuteReaderAsync();
        while (reader.Read())
        {
            gatewayList.Add(UserRowDataGateway.Load(reader));
        }
        return gatewayList;
    }
}

public class UserRowDataGateway
{
    public string UserName { get; set; }
    public string Password { get; set; }

    public static UserRowDataGateway Load(IDataReader reader)
    {
        return new UserRowDataGateway()
        {
            UserName = reader["Username"].ToString(),
            Password = reader["Password"].ToString()
        };
    }

    public async Task<bool> ChangePasswordAsync()
    {
        return (await new SqlCommand($"" +
            $"UPDATE Users " +
            $"SET [Password] = N'{this.Password}' " +
            $"WHERE UserName = N'{this.UserName}'",
            DB.Connection).ExecuteNonQueryAsync()) > 0;
    }
}

In the preceding code, issues such as SQL injection and opening and closing the connection are not addressed, but in a real project, you must pay attention to them.

UserFinder is responsible for mapping a record to a gateway in the preceding code. It first retrieves the record from the database and then assigns the result to a specific object through the Load method.

Notes:

In using this design pattern, a finder class is often considered for each data source (table, view, and so on).
This design pattern is very similar to the active record design pattern. If the row data gateway contains domain business logic, this design pattern essentially becomes an active record and contains the business logic of the database.
Like the table data gateway design pattern, this design pattern can also communicate with a table, view, or stored procedure.
Consequences: Advantages

When mapping the columns of a record to class properties, type conversion may be required. Using this design pattern, this type of conversion happens only in one place (Load method).
Mapping between database objects and classes is simple, as each table has a gateway class. Depending on the type of implementation, a finder class will probably be needed for each table.
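The "conversion in one place" advantage can be exercised without a real database: DataTable.CreateDataReader() yields an IDataReader, so the same kind of Load method runs against in-memory rows. A minimal sketch — the UserRow/UserRowDemo names and the FailedLoginAttempt column are assumptions for illustration:

```csharp
using System;
using System.Data;

public class UserRow
{
    public string UserName { get; set; }
    public int FailedLoginAttempt { get; set; }

    // Every conversion from reader columns to typed properties happens
    // here and nowhere else -- the point made in the advantage above.
    public static UserRow Load(IDataReader reader) => new UserRow
    {
        UserName = reader["UserName"].ToString(),
        FailedLoginAttempt = Convert.ToInt32(reader["FailedLoginAttempt"])
    };
}

public static class UserRowDemo
{
    // Builds an in-memory row and loads it through the same Load method
    // a database-backed finder would use.
    public static UserRow LoadFirst()
    {
        var table = new DataTable();
        table.Columns.Add("UserName", typeof(string));
        table.Columns.Add("FailedLoginAttempt", typeof(int));
        table.Rows.Add("vahid", 2);

        using IDataReader reader = table.CreateDataReader();
        reader.Read();
        return UserRow.Load(reader);
    }
}
```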
Consequences: Disadvantages

Using this design pattern, the maintenance cost sometimes increases because a significant amount of repetitive boilerplate code must be written.
Dependence on the database in using this design pattern is high because the object's properties are directly dependent on the table's columns.
The compatibility of this design pattern with the domain model design pattern is low: using them together forces us to deal with three different views of the data — the domain model, the gateway, and the database — when only two views are actually needed. Therefore, using this design pattern alongside the domain model is not recommended; it is better to use other design patterns, such as active record.
Applicability:

This design pattern combines well with the transaction script design pattern.
This design pattern can be useful when the business logic of the domain is simple and the probability of changes to the data source is low. It is also a good fit when type safety is important during design and coding.
Related patterns:

Some of the following design patterns are not related to the row data gateway design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Table data gateway
Active record
Transaction scripts
Domain model
Active record

Name:

Active record

Classification:

Data source architecture design patterns

Also known as:

---

Intent:

In this design pattern, an object has the task of a gateway to a database record. In addition to this task, it also implements the business logic related to the domain of that record.

Motivation, Structure, Implementation, and Sample code:

An object has a series of data and behaviors, and data often needs to be persistent, stored, and retrieved in a database. This operation is presented and implemented in the form of data access logic. Using an active record design pattern, object-related behaviors, which are the business logic of the domain, are placed next to the data access business logic.

Suppose we want to rewrite the example presented in the row data gateway design pattern section with an active record design pattern. As seen in the row data gateway section, we used the finder class to find the desired record and shape the desired object, and through the gateway class, we implemented requirements such as changing the password. Now, if we add business logic related to the domain to that structure, the designed structure will become an active record:

public class UserActiveRecord
{
    public string UserName { get; set; }
    public string Password { get; set; }

    public static UserActiveRecord Load(IDataReader reader)
    {
        return new UserActiveRecord()
        {
            UserName = reader["Username"].ToString(),
            Password = reader["Password"].ToString()
        };
    }

    public bool IsPasswordValid()
        => Password.Length > 6 &&
           (Password.Contains('@') || Password.Contains('#'));

    public async Task<bool> ChangePasswordAsync()
    {
        if (IsPasswordValid())
        {
            return (await new SqlCommand($"" +
                $"UPDATE Users " +
                $"SET [Password] = N'{this.Password}' " +
                $"WHERE UserName = N'{this.UserName}'",
                DB.Connection).ExecuteNonQueryAsync()) > 0;
        }
        else
            throw new Exception("The password is not strong enough.");
    }

    public async Task<bool> IncreaseFailedLoginAttemptAsync()
    {
        return (await new SqlCommand($"" +
            $"UPDATE Users " +
            $"SET [FailedLoginAttempt] = [FailedLoginAttempt] + 1 " +
            $"WHERE UserName = N'{this.UserName}'",
            DB.Connection).ExecuteNonQueryAsync()) > 0;
    }

    public async Task<bool> ResetFailedLoginAttemptAsync()
    {
        return (await new SqlCommand($"" +
            $"UPDATE Users " +
            $"SET [FailedLoginAttempt] = 0 " +
            $"WHERE UserName = N'{this.UserName}'",
            DB.Connection).ExecuteNonQueryAsync()) > 0;
    }

    public async Task<int> GetFailedLoginAttemptAsync()
    {
        return (int)await new SqlCommand($"" +
            $"SELECT [FailedLoginAttempt] FROM Users " +
            $"WHERE UserName = N'{this.UserName}'",
            DB.Connection).ExecuteScalarAsync();
    }

    public async Task<bool> IsUserExistsAsync()
    {
        return ((int)await new SqlCommand($"" +
            $"SELECT COUNT(*) FROM Users " +
            $"WHERE [Password] = N'{this.Password}' AND " +
            $"UserName = N'{this.UserName}'",
            DB.Connection).ExecuteScalarAsync()) > 0;
    }

    public async Task<bool> LoginAsync()
    {
        var loginResult = await IsUserExistsAsync();
        if (loginResult == false)
        {
            await IncreaseFailedLoginAttemptAsync();
            if (await GetFailedLoginAttemptAsync() > 3)
                throw new Exception("Your account has been locked.");
        }
        else
            await ResetFailedLoginAttemptAsync();
        return loginResult;
    }
}

In the preceding implementation, for the sake of simplicity, things like SQL injection and opening and closing the connection to the database have not been considered in the definition of the queries. However, in real scenarios and production environments, it will be necessary to consider such things.

In the preceding code, as you can see, in addition to data access logic such as the ChangePasswordAsync, IncreaseFailedLoginAttemptAsync, ResetFailedLoginAttemptAsync, GetFailedLoginAttemptAsync, and IsUserExistsAsync methods, domain business logic such as IsPasswordValid and LoginAsync is also present. The design is completely similar to that of the row data gateway design pattern, with the difference that the business logic of the domain is also included.

Another point in this design is that the finder class is not included, and the design of this class will be the same as the design done in the row data gateway design pattern.

Notes:

Part of the domain logic may be written using the transaction script pattern, while the data access logic and data-related code are implemented with the active record.
The data structure presented in the active record must be the same as the data structure of the database.
In using this design pattern, it is possible to communicate with the view or stored procedures instead of communicating with the table.
This design pattern is very similar to the row data gateway.
Consequences: Advantages

Implementation and understanding of active records are very simple.
It is very compatible with transaction script design patterns.
Since the domain business logic related to a record is next to the data access logic, the design is more coherent.
Consequences: Disadvantages

Using this design pattern is useful only when the active record object mirrors the structure of the tables in the database. If the business logic becomes more complex and there is a need for capabilities such as inheritance in the code, this design pattern will not be useful.
Due to the tight coupling between the database structure and the object structure, refactoring at the database or object level is difficult, and scalability decreases.
Placing domain business logic next to data access logic increases the cohesion of a single class but reduces reusability. To mitigate this, data access logic can be moved into parent classes through inheritance.
Due to the high dependence of this design pattern on the database, it is difficult to write unit tests.
Due to data access logic next to domain logic, the Single Responsibility Principle (SRP) and Separation of Concern (SoC) are violated. For this reason, this design pattern is suitable for simple applications.
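The inheritance-based mitigation mentioned in the disadvantages can be sketched as follows. This is a hypothetical illustration — the ActiveRecordBase and UserRecord classes are assumptions, and the shared "database" is simulated with an in-memory log so the example runs anywhere:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical parent class: shared persistence plumbing lives here,
// so each active record contributes only its mapping and domain rules.
public abstract class ActiveRecordBase
{
    // Stand-in for shared connection/command handling; a real version
    // would wrap SqlCommand creation, connection opening, logging, etc.
    private static readonly List<string> ExecutedSql = new List<string>();

    protected bool Execute(string sql)
    {
        ExecutedSql.Add(sql); // real code: ExecuteNonQuery against the DB
        return true;
    }

    public static IReadOnlyList<string> Log => ExecutedSql;
}

public class UserRecord : ActiveRecordBase
{
    public string UserName { get; set; }
    public string Password { get; set; }

    // The domain rule stays in the record...
    public bool IsPasswordValid()
        => Password.Length > 6 &&
           (Password.Contains('@') || Password.Contains('#'));

    // ...while the persistence call is inherited plumbing.
    public bool ChangePassword()
        => IsPasswordValid()
           && Execute("UPDATE Users SET [Password] = @p WHERE UserName = @u");
}
```

Each additional record class (orders, products, and so on) reuses Execute instead of repeating the same data access code, which recovers some of the reusability the pattern otherwise loses.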
Applicability:

This design pattern can be useful for implementing simple domain business logic such as CRUD operations or almost complex logic in the case that the possibility of changing the data source is low.
Related patterns:

Some of the following design patterns are not related to active record design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Transaction script
Row data gateway
Data mapper

Data mapper

Name:

Data mapper

Classification:

Data source architecture design patterns

Also known as:

---

Intent:

Using this design pattern, a layer of mappers is placed between the domain and the database, and in this way, the dependency on the database and the domain is eliminated. The task of these mappers is to transfer data from the domain to the database and vice versa.

Motivation, Structure, Implementation, and Sample code:

Suppose we want to rewrite the users example from the active record design pattern. As we saw there, the SRP and SoC principles were violated due to the high coupling between the database and the domain layer. Using this design pattern is very simple: the domain object is not involved in database or data source concerns, and the data source is not involved in the business logic of the domain, so the two areas do not affect each other. For example, a domain object may have collections, inheritance, and so on.

These features of the domain object will help you later in implementing the complex business logic of the domain. Since such capabilities do not exist in the database, this difference produces two completely different data structures: one conforming to the domain implementation and the other to the database design. The part in charge of harmonizing and translating between these two structures is the mapper. Consider the following code:

public class UserModel
{
    public string UserName { get; set; }
    public string Password { get; set; }
}

public class UserMapper
{
    public static async Task<bool> Create(UserModel newUser)
    {
        return (await new SqlCommand($"" +
            $"INSERT INTO Users " +
            $"(Username, [Password]) VALUES " +
            $"(N'{newUser.UserName}', N'{newUser.Password}')",
            DB.Connection).ExecuteNonQueryAsync()) > 0;
    }
}

public class UserDomain
{
    public static bool IsPasswordValid(string password)
        => password.Length > 6 &&
           (password.Contains('@') || password.Contains('#'));

    public static async Task<bool> Register(string username, string password)
    {
        if (IsPasswordValid(password))
        {
            return await UserMapper.Create(
                new UserModel { UserName = username, Password = password });
        }
        else
        {
            throw new Exception("The password is not strong enough.");
        }
    }
}

In the preceding implementation, for the sake of simplicity, things like SQL injection and opening and closing the connection to the database have not been considered in the definition of the queries. Still, it will be necessary to consider such things in the real environment.

As it is clear in the preceding code, the UserDomain class is responsible for implementing the business logic of the domain. This class has no information about the structure of the data source and the database. Even this class has no information about the existence of data sources. The UserDomain class prepares a UserModel object according to its needs and delivers it to the UserMapper. UserMapper is responsible for communicating between the domain and the data source. Therefore, upon receiving a UserModel object, it prepares and sends it to the database.
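Because the domain never touches the data source, the mapper can be swapped out entirely — which is also what makes the domain testable in isolation. The sketch below is hypothetical (the dictionary-backed InMemoryUserMapper and the UserDomainLogic name are not from the chapter); it runs the same Register flow with an in-memory "Users table":

```csharp
using System;
using System.Collections.Generic;

public class UserModel
{
    public string UserName { get; set; }
    public string Password { get; set; }
}

// Hypothetical stand-in for UserMapper: only this class knows how a
// domain object becomes a stored row (here, a dictionary entry).
public static class InMemoryUserMapper
{
    public static readonly Dictionary<string, string> Users =
        new Dictionary<string, string>();

    public static bool Create(UserModel newUser)
    {
        if (Users.ContainsKey(newUser.UserName))
            return false; // username already taken
        Users[newUser.UserName] = newUser.Password;
        return true;
    }
}

public static class UserDomainLogic
{
    public static bool IsPasswordValid(string password)
        => password.Length > 6 &&
           (password.Contains('@') || password.Contains('#'));

    // Pure domain logic: identical whether the mapper talks to SQL
    // Server or to a dictionary -- which is the point of the pattern.
    public static bool Register(string username, string password)
    {
        if (!IsPasswordValid(password))
            throw new ArgumentException("The password is not strong enough.");
        return InMemoryUserMapper.Create(
            new UserModel { UserName = username, Password = password });
    }
}
```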

Notes:

With this design pattern, the relationship between the database structure and the domain model is no longer necessarily one-to-one.
The preceding code example is a simple data mapper design pattern implementation. In more complex design and implementation, to perform insertion, editing, or even deletion operations, the mapper will need to know the previous and new state of the object so that it can choose the appropriate operation based on that. Also, in order to be able to make these changes in the form of a transaction, the UnitOfWork design pattern can be used.
Considering the internal connections of the domain models with each other, it is necessary to know to what level the data should be fetched and placed in the model. If we do not have this information, we may have to retrieve the entire data, which of course, will not be the right thing to do. You can use the lazy load design pattern to solve this problem and improve the amount of fetched data.
There are two ways to create domain model objects inside the mapper. The first is to use a parameterized constructor for the domain model: to create an object, the required information must be passed to the class constructor. The second is to use a parameterless constructor: first the object is created, and then its various properties are set. The first way is better in terms of guaranteeing object validity. Its drawback is the possibility of a circular dependency, where object A needs object B to be created, and object B needs object A.
Suppose the domain model is simple, and the task of designing and developing the database is in the hands of the domain developer team. In that case, the development process can be improved with direct access to the database. For this purpose, an active record design pattern can be used instead of this design pattern.
In the first step of software development, there is no need to produce a complete mapper with all possible features. Sometimes using a pre-produced and ready-made mapper is better than implementing a new mapper.
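The two object-creation routes described in the notes can be contrasted directly. In this illustrative sketch (the Product classes are assumptions, not from the chapter), the parameterized constructor guarantees validity at creation time, while the parameterless route lets a half-initialized object exist:

```csharp
using System;

// Way 1: parameterized constructor -- the object is valid from birth.
public class ProductA
{
    public string Name { get; }
    public decimal UnitPrice { get; }

    public ProductA(string name, decimal unitPrice)
    {
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Name is required.");
        if (unitPrice < 0)
            throw new ArgumentException("Price cannot be negative.");
        Name = name;
        UnitPrice = unitPrice;
    }
}

// Way 2: parameterless constructor -- create first, populate later.
// Nothing prevents an invalid or half-initialized object from escaping.
public class ProductB
{
    public string Name { get; set; }
    public decimal UnitPrice { get; set; }
}
```

Way 2 avoids the circular dependency problem mentioned above (objects can be created empty and wired up afterwards) at the cost of the validity guarantee.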
Consequences: Advantages

When developing the domain, the data source or database can be completely ignored during design, development, and testing.
There is a loose coupling between the data source and the domain.
It has very high compatibility with the domain model design pattern, and along with this design pattern, it helps the reusability of the code.
Consequences: Disadvantages

For simple business logic, using this design pattern will not be cost-effective because it increases the complexity by adding a new layer (mapper).
Applicability:

Using this design pattern can be useful if we want to design and develop the data source model independently of the domain model.
This design pattern is often used together with the domain model design pattern.
This design pattern can be useful when the business logic of the domain is complex.
When the probability of change in the data source or domain layer is high, using this design pattern will be useful.
Related patterns:

Some of the following design patterns are not related to the data mapper design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Unit of work
Domain model
Active record
Lazy load
Conclusion
In this chapter, you got acquainted with different design patterns related to data source architecture. You also learned how to model the data source structure and map it to different objects.

In the next chapter, you will learn about object-relational behavior design patterns.

1 If the view contains a series of conditions or a series of triggers are used, data can also be changed through the view.



Chapter 6
Domain Logic Design Patterns
Introduction
To organize the domain logic, the patterns can be divided into the following four main sections:

Transaction script: This is the simplest way to store domain logic. It is a procedure that receives input from the display layer, processes it, and returns the response to the display layer. Processing in this model can include validation and calculations, storing a series of data in a database, and so on.
Domain model: We create a domain model and put the logic related to the domain in these models. The difference from the transaction script design pattern is that there, a single procedure is responsible for handling an entire action, while in the domain model, each model processes only the part of the request that is related to it.
Table module: This is very similar to the domain model; the main difference is that in the domain model each object represents a record in the database, but in the table module there is a single object that handles the work for all records. This design pattern works with the record set pattern. The table module can also be considered an intermediate point between the transaction script and the domain model.
Service layer: Tries to determine the boundaries of the system and the capabilities available to the user layer. In this way, business logic, transaction management, and so on are hidden from the user's view.
The choice between these four design patterns depends on the level of logical complexity that we want to implement.
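The table module idea — one object operating on all rows of a record set at once — can be sketched briefly. The ProductTableModule class and its columns below are illustrative assumptions; the record set is an ADO.NET DataTable, as the table module pattern typically expects:

```csharp
using System;
using System.Data;

// One instance serves ALL products by working over the record set it
// wraps -- unlike a domain model, where each object is a single row.
public class ProductTableModule
{
    private readonly DataTable _products;

    public ProductTableModule(DataTable products) => _products = products;

    // A computation across every row of the table.
    public decimal TotalStockValue()
    {
        decimal total = 0;
        foreach (DataRow row in _products.Rows)
            total += (int)row["Stock"] * (decimal)row["UnitPrice"];
        return total;
    }

    // A bulk update across every row of the table.
    public void ApplyDiscount(decimal percent)
    {
        foreach (DataRow row in _products.Rows)
            row["UnitPrice"] = (decimal)row["UnitPrice"] * (1 - percent / 100m);
    }
}
```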

Structure
In this chapter, we will cover the following topics:

Domain logic design patterns
Transaction script
Domain model
Table module
Service layer
Objectives
In this chapter, you will learn domain logic design patterns and how to improve code maintainability while increasing complexity using different design patterns. These design patterns will teach you to manage complex business logic appropriately and optimally. In this chapter, you will get to know the domain and learn how to manage business logic in different domains.

Domain logic design patterns
The domain model design pattern can be very useful for complex logic, so it is important to understand when logic is complex and when it is not. Making this distinction is challenging, but consulting domain experts or more experienced people can yield a better approximation.

Thanks to the various features available in .NET, using the table module design pattern can be useful. However, maintaining this design pattern becomes difficult with increasing complexity, and the codes will become very complex over time.

There are very few use cases where the transaction script design pattern can be argued to be the best choice. Still, it is possible to start the implementation with this design pattern and, over time, as the team acquires the appropriate knowledge about the domain, migrate the design to the domain model:

Figure 6.1: The Relation between Effort to Enhance and Complexity of Domain Logic

According to Figure 6.1, as complexity increases, using the table module and transaction script design patterns greatly increases maintenance effort and makes enhancement harder. As is clear in Figure 6.1, the x and y axes of this diagram are both qualitative attributes. As long as these qualitative attributes are not converted into quantitative measures, it is difficult to draw a precise boundary between these design patterns.

Now that we have a general understanding of the preceding three design patterns, we need to know the fourth design pattern of this category. When the domain model or table module design pattern is used, the handling of a request is distributed among several classes. In this case, a single interface is needed so that the presentation layer can deliver its request to that interface. That interface is responsible for sending the request to different classes, and this interface is the service layer.

Using the service layer as a covering layer on the transaction script will not be very useful because, in the transaction script, we will not face the complexity of communication in the domain model or table module. The important point in using the service layer is understanding how much behavior can be included.

The simplest mode of this pattern is to use it as a forwarder that takes the request and, without doing anything, refers it to the lower layers. The most complicated case is when the business logic inside the transaction scripts is placed inside the service layer. As explained earlier, placing business logic inside the service layer is not advisable and, over time, will complicate maintenance and improvement.
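The "forwarder" mode can be sketched in a few lines. The OrderService and Cart classes below are illustrative assumptions: the domain object holds the logic, and the service layer is merely the single interface the presentation layer sees:

```csharp
using System;
using System.Collections.Generic;

// The domain object holds the actual logic...
public class Cart
{
    private readonly List<(string Name, decimal Price)> _items =
        new List<(string Name, decimal Price)>();

    public void Add(string name, decimal price) => _items.Add((name, price));

    public decimal Total()
    {
        decimal total = 0;
        foreach (var item in _items)
            total += item.Price;
        return total;
    }
}

// ...while the service layer is the boundary the presentation layer
// talks to. Here it only forwards; richer variants would also own
// transaction management and coordination across domain objects.
public class OrderService
{
    private readonly Cart _cart = new Cart();

    public void AddItem(string name, decimal price) => _cart.Add(name, price);

    public decimal Checkout() => _cart.Total();
}
```

The presentation layer never sees Cart directly, so the domain can be reorganized behind OrderService without touching the callers.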

Transaction script

Name:

Transaction script

Classification:

Domain logic design patterns

Also known as:

---

Intent:

This design pattern tries to organize the business logic with the help of a series of procedures. Each procedure handles a presentation layer request in this design pattern.

Motivation, Structure, Implementation, and Sample code:

Suppose there is a need to implement the purchase process. To implement the purchase process, we have reached the following model during the analysis:

Figure 6.2: Sample shipment process

According to Figure 6.2, the entire purchase process is considered a transaction. Based on the transaction script design pattern, this whole transaction can be implemented as a procedure in which the presentation layer delivers the information related to the purchase to this procedure. Then this procedure processes the transaction and returns the result to the presentation layer. According to these explanations, Figure 6.3 class diagram can be imagined for this design pattern:

Figure 6.3: Transaction Script design pattern UML diagram

As shown in the Figure 6.3 class diagram, ESaleTS handles purchase requests. The Sale method in this class will implement the desired procedure for purchase. To understand this pattern more precisely, consider the following pseudo code:

public bool Sale(int productId, int productCount, int userId)
{
    /* Inventory check. */
    var product = tblProduct.Find(productId);
    if (product.Stock < productCount)
    {
        throw new Exception("The product inventory is not enough.");
    }
    /* Find the user's wallet and calculate the total price. */
    var userWallet = tblWallet.Find(x => x.UserId == userId);
    var totalPrice = productCount * product.UnitPrice;
    if (userWallet.Balance < totalPrice)
    {
        throw new Exception("The balance of the wallet is not enough.");
    }
    /* Deduct the purchase amount from the wallet balance. */
    userWallet.Balance = userWallet.Balance - totalPrice;
    /* Subtract the number of requested goods from the total number of available goods. */
    product.Stock = product.Stock - productCount;
    /* Registration of shipment request. */
    DeliveryRequest request = new DeliveryRequest(
        item: product, count: productCount, user: userId);
    tblDeliveryRequest.Add(request);
    Save();
    return true;
}

As can be seen in the preceding pseudocode, the entire purchase transaction is implemented as one procedure in which the business rules are checked first. Then the wallet balance and the goods inventory are updated, and finally a shipment request is registered.

Notes:

In implementing this design pattern, procedures are directly connected to the database. If a wrapper is needed between a procedure and the database, the wrapper should be brief and free of complexity.
We may want to implement each procedure in the form of a series of sub-procedures.
It is better not to reference the presentation layer when implementing the procedures. In this case, changes can be made easily.
To implement procedures, several procedures can be placed inside one class, or each can be placed in its own class using the command design pattern.
In more complex businesses, the domain model design pattern will usually be more appropriate than the transaction script design pattern.

Consequences: Advantages

Since each request is executed separately, different procedures do not need to know how other procedures work.

Consequences: Disadvantages

As the business logic becomes more complex, maintenance will become very difficult, and we may face a significant amount of duplicate code.
As business logic becomes more complex, writing unit tests will become difficult.
Applicability:

This design pattern can be useful for implementing simple applications. When the team's general knowledge of object-oriented concepts is limited, this design pattern can reduce production time in small applications.

Related patterns:

Some of the following design patterns are not related to the transaction script design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Command
Domain model

Domain model

Name:

Domain model

Classification:

Domain logic design patterns

Also known as:

---

Intent:

This design pattern tries to model the elements involved in the domain in the form of a series of objects that include data and behavior.

Motivation, Structure, Implementation, and Sample code:

As seen in the transaction script design pattern, when the complexity of the business increases, the transaction script pattern will not be suitable, and its use will threaten maintainability and development. When faced with complex business logic, we can divide the business logic into different domains and give each domain the task of defining and managing its own responsibilities. In effect, we are doing domain modeling. The result of this modeling will be classes with a series of properties that can include data and a series of behaviors related to the domain. Since each domain has its own data and data structure, it is easy to map each domain to a table in the database. With the preceding explanations, the domain model design pattern tries to create a connected network of objects where each object manages a part of the business logic. A noteworthy point in this model is that the structure of the domain model will usually be very similar to the structure of the database model.
However, it will also differ from the database model; among the most important differences are more complex relationships, multivalued properties, and the use of inheritance. To design the domain model, the first step is to know the domain. The domain is the scope of the business whose problems we are trying to solve. Suppose we face an internet sales system; we usually face domains such as customers, carts, or products. As mentioned, each of these domains has behaviors specific to that domain, and along with these behaviors, they also have a series of data structures similar to the database model. To clarify the issue, consider the following code:

public class Customer
{
    public int CustomerId { get; private set; }
    public string Name { get; private set; }
    public string MobileNumber { get; private set; }

    public Customer(int customerId, string name)
    {
        if (customerId <= 0)
            throw new ArgumentException("Customer ID is invalid.");
        CustomerId = customerId;

        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Customer name is required.");
        Name = name;
    }

    public Customer(int customerId, string name, string mobileNumber)
        : this(customerId, name)
    {
        if (string.IsNullOrWhiteSpace(mobileNumber))
            throw new ArgumentException("Mobile phone number is required.");
        if (mobileNumber.Length < 10 || mobileNumber.Length > 11)
            throw new ArgumentException("Mobile number must be 10 or 11 digits.");
        if (!long.TryParse(mobileNumber, out _))
            throw new ArgumentException("Mobile number must be numeric.");
        MobileNumber = mobileNumber;
    }

    public string GetMobileNumber()
    {
        if (string.IsNullOrWhiteSpace(MobileNumber))
            return string.Empty;

        string maskedMobileNumber = MobileNumber.Length == 10
            ? "0" + MobileNumber
            : MobileNumber;

        maskedMobileNumber = string.Concat(
            maskedMobileNumber.AsSpan()[..4],
            "***",
            maskedMobileNumber.AsSpan(7, 4));
        return maskedMobileNumber;
    }
}

The preceding code shows a very
simple structure for the customer domain. According to the definition of this domain, the customer has a customer ID, a name, and a mobile number. As you can see, a private setter is used for these properties because their values must not be changed from outside. Next, with the help of the two defined constructors, new objects of the Customer type can be created. The advantage of using these constructors is that we are always sure the created objects are valid: every object must have a customer ID and a name, or a customer ID, a name, and a mobile number. Assigning invalid values to the properties is also prevented by the validation built into the constructors. The Customer class also has a behavior called GetMobileNumber. In this method, before returning the value of the mobile number, the three middle digits are replaced with asterisks (*). Many different things can be done in the created domains; the preceding code is just a simple example of a domain.

Notes:

It should always be noted that domain models store business logic. Therefore, it should be possible to test them easily, without depending on other layers.
Sometimes the domain class becomes big, and we face behaviors specific to particular scenarios. In this case, two decisions can be made: you can separate the application-specific behaviors from the main class and form a new domain model, or you can keep these behaviors in the main domain model class. By separating application-specific behavior, the size of the domain model class is reduced; on the other hand, we may face code duplication and maintenance problems. In that case, keeping the application-specific code within the same domain model is suggested. Of course, this problem can also be managed with the help of other concepts, such as aggregate roots, or by using other design patterns, such as strategy.
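As a quick illustration of the Customer domain model, the following sketch shows how a valid object is constructed and how GetMobileNumber masks the middle digits. The class below is a trimmed copy of the one shown above, kept only so the snippet is self-contained:

```csharp
using System;

// Trimmed copy of the Customer domain model shown above, for self-containment.
public class Customer
{
    public int CustomerId { get; private set; }
    public string Name { get; private set; }
    public string MobileNumber { get; private set; }

    public Customer(int customerId, string name, string mobileNumber)
    {
        if (customerId <= 0)
            throw new ArgumentException("Customer ID is invalid.");
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Customer name is required.");
        if (string.IsNullOrWhiteSpace(mobileNumber))
            throw new ArgumentException("Mobile phone number is required.");
        CustomerId = customerId;
        Name = name;
        MobileNumber = mobileNumber;
    }

    public string GetMobileNumber()
    {
        // Normalize 10-digit numbers to 11 digits, then mask the middle digits.
        string masked = MobileNumber.Length == 10 ? "0" + MobileNumber : MobileNumber;
        return string.Concat(masked.AsSpan()[..4], "***", masked.AsSpan(7, 4));
    }
}
```

For example, `new Customer(1, "Sara", "09121234567").GetMobileNumber()` returns `"0912***4567"`; the object can never exist in an invalid state because the constructor rejects bad input.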
Domains can be defined as simple or complex. For example, in a simple scenario, we may define a domain model for each table in the database; alternatively, domains can be defined and connected through a series of complex relationships with the help of different design patterns and object-oriented capabilities such as inheritance. Usually, for complex businesses, it is more useful to use complex domain models. Still, it should be kept in mind that, in this case, mapping between the domain model and the database model will be more challenging than with simple domain models. Usually, to establish independence between the domain model and the database and prevent dependency, simple domain models are paired with the active record pattern, and complex domain models are paired with the data mapper design pattern.

Consequences: Advantages

All the logic related to a certain area of business is placed in one place, and in this way, repetition is prevented.

Consequences: Disadvantages

If we are faced with a team that does not have a proper attitude towards object-oriented or domain-oriented design, using this design pattern in such a team will usually face difficulties.

Applicability:

This design pattern can be useful in implementing large and complex businesses. If we deal with a system whose business rules undergo fundamental changes, this design pattern can be useful. Still, if we are dealing with a small business with limited business rule changes, the transaction script design pattern will be more useful.
Related patterns:

Some of the following design patterns are not related to the domain model design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Strategy
Active record
Data mapper
Transaction script

Table module
Name:

Table module

Classification:

Domain logic design patterns

Also known as:

---

Intent:

This design pattern uses a single object to implement the business logic associated with all the records in a database table or view.

Motivation, Structure, Implementation, and Sample code:

Suppose there is a need to read various employee information from the database and provide it to the user. To implement this requirement, you can benefit from the domain model. The point of this method, however, is that there will be one object for each employee. This is normal and consistent with object-oriented design: each object has an identity, and the data and related behaviors sit next to each other. Given this design, a question arises: if we are not going to use the set-based power and facilities of the database, what is the need for a relational database or such a complex system in the first place? Wouldn't it be better and more appropriate to make good use of the facilities of the different parts in order to implement our requirements? There are no precise and clear answers to these questions; they are simply questions that should be answered when designing and choosing among a group of methods. Another way to implement the preceding requirement is to use one object to manage the business logic of all the records of a table or view in the database. In the domain model method, one object is considered for each record, but in this design pattern, one object is considered for all records.
According to the preceding description, the following code can be considered to implement the requirements raised using the table module design pattern:

public class DbManager
{
    protected DataTable dt;
    protected DbManager(DataSet ds, string tableName) => dt = ds.Tables[tableName];
}

public class RollCall : DbManager
{
    public RollCall(DataSet ds) : base(ds, "rollcalls") { }

    public double GetWorkingHoursSummary(int employeeId)
    {
        if (employeeId == 1) return 100;
        else throw new ArgumentException("Employee not found.");
    }
}

public class Employee : DbManager
{
    public Employee(DataSet ds) : base(ds, "employees") { }

    public DataRow this[int employeeId] => dt.Select($"Id = {employeeId}")[0];

    public double CalculateWorkingHours(int employeeId)
    {
        var employee = this[employeeId];
        var workingHours = new RollCall(dt.DataSet)
            .GetWorkingHoursSummary(employeeId);
        if ((string)employee["Position"] == "CEO")
            workingHours *= 1.2;
        return workingHours;
    }
}

To implement this design pattern, a DataSet has been used. Since finding a table through a DataSet is done the same way for every table (dataset.Tables["abc"]), the DbManager class is considered to implement this behavior. Other classes inherit from this class and prepare their own DataTable by providing the table name.

The Employee class is responsible for implementing the business logic related to all employee records. In this implementation, a C# indexer is used to find a specific employee, although this behavior can be implemented in other ways as well. The CalculateWorkingHours method calculates an employee's working hours: by providing the employeeId, the working hours can be calculated. The important point in implementing this method is its connection to the RollCall class. To create an object of the RollCall class, it is only necessary to pass the DataSet to its constructor, and in this way its data table is prepared.
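The classes above can be exercised with an in-memory DataSet. The following sketch builds the two tables the modules expect and asks the single Employee object (which serves all employee records) for the working hours of one employee; the table and column names mirror the example above, and the demo data is fabricated for illustration:

```csharp
using System;
using System.Data;

// Trimmed copies of the table module classes shown above, for self-containment.
public class DbManager
{
    protected DataTable dt;
    protected DbManager(DataSet ds, string tableName) => dt = ds.Tables[tableName];
}

public class RollCall : DbManager
{
    public RollCall(DataSet ds) : base(ds, "rollcalls") { }
    public double GetWorkingHoursSummary(int employeeId)
        => employeeId == 1 ? 100 : throw new ArgumentException("Employee not found.");
}

public class Employee : DbManager
{
    public Employee(DataSet ds) : base(ds, "employees") { }
    public DataRow this[int employeeId] => dt.Select($"Id = {employeeId}")[0];

    public double CalculateWorkingHours(int employeeId)
    {
        var employee = this[employeeId];
        var workingHours = new RollCall(dt.DataSet).GetWorkingHoursSummary(employeeId);
        if ((string)employee["Position"] == "CEO")
            workingHours *= 1.2;
        return workingHours;
    }
}

public static class TableModuleDemo
{
    // Builds an in-memory DataSet with the two tables the modules expect.
    public static double Run()
    {
        var ds = new DataSet();

        var employees = new DataTable("employees");
        employees.Columns.Add("Id", typeof(int));
        employees.Columns.Add("Position", typeof(string));
        employees.Rows.Add(1, "Developer");
        ds.Tables.Add(employees);

        ds.Tables.Add(new DataTable("rollcalls"));

        // One Employee object serves all employee records in the table.
        return new Employee(ds).CalculateWorkingHours(1);
    }
}
```

`TableModuleDemo.Run()` returns 100 here: the employee is a Developer, so the CEO multiplier does not apply.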

Notes:

The object produced using this design pattern is very similar to conventional objects, with the difference that the object produced using this method has no identity. This means that if we want to get more details of a database record, we will need to search for that record from the mass of records using a parameter. Often, the primary key in the database can be a suitable parameter to find a record among the mass of records. For example, the following code shows how we can fetch employee detail by using employeeId:
Employee GetDetail(int employeeId)

To choose between the power of the domain model in implementing complex business logic and the ease of integration of the table module with the tabular structure, a cost-benefit analysis will be required.
There is often the need to communicate with several table modules to implement business logic.
A table module can be implemented with static members or as instances. In this case, you should consider the differences between static and instance class definitions; one of the most important differences between these two methods is whether inheritance can be used.
In implementing table module queries, you can use the factory method design pattern or a table data gateway. The advantage of using a table data gateway is that one table module can connect to several data sources through the corresponding gateway. Each table module does its work on a record set; to access the record set, it is first necessary to obtain the set of records with the help of a table data gateway or a factory method.
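The last note can be sketched as follows. The EmployeeGateway class and its FindAll method are hypothetical names: the gateway's only job is to hand the record set to the table module, so the same module can serve record sets coming from different data sources:

```csharp
using System;
using System.Data;

// Hypothetical table data gateway: its only job is to hand over the record set.
public class EmployeeGateway
{
    public DataTable FindAll()
    {
        // A real gateway would query a data source; here we fabricate rows.
        var dt = new DataTable("employees");
        dt.Columns.Add("Id", typeof(int));
        dt.Columns.Add("Name", typeof(string));
        dt.Rows.Add(1, "Sara");
        return dt;
    }
}

// The table module works on whatever record set the gateway supplies.
public class EmployeeModule
{
    private readonly DataTable _dt;
    public EmployeeModule(DataTable dt) => _dt = dt;

    public string GetName(int id)
        => (string)_dt.Select($"Id = {id}")[0]["Name"];
}
```

Usage: `new EmployeeModule(new EmployeeGateway().FindAll()).GetName(1)` returns `"Sara"`; swapping in a gateway for another data source requires no change to the module.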
Consequences: Advantages

By using this design pattern, while collecting data and related behaviors in a class, you can benefit from the capabilities and advantages of the database.
Consequences: Disadvantages

If we face more complex business logic, since the relationships between objects are not represented in this design pattern and polymorphism cannot be used, the domain model design pattern will be a better choice.
Applicability:

This design pattern can be useful when faced with a tabular data structure and looking for a convenient way to access the data.
Related patterns:

Some of the following design patterns are not related to the table module design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Domain model
Factory method
Table data gateway
Record set
Service layer
Name:

Service layer

Classification:

Domain logic design patterns

Also known as:

---

Intent:

This design pattern tries to determine the boundaries of the system and the capabilities available to the client layer. In this way, business logic, transaction management, and so on are hidden from the client's view.

Motivation, Structure, Implementation, and Sample code:

Suppose that several repositories are prepared using the repository design pattern. The client, as a controller, needs to communicate with one of these repositories and insert a new user. Before adding a new user, a series of business validations need to happen. Suppose that, as part of this validation, we want to check that the username is not duplicated. There are different ways to implement this validation.

One way is to do this validation inside the controller. The problem with this approach is that the controller will depend on the data access layer, because to perform the validation, the controller would need an object of the repository type.

The second way is to perform the validation inside the repository. This method also has an important drawback: the separation of concerns (SoC) will be violated, because we have moved business logic into the data access layer.

The better way is to implement the desired business logic as a middle layer. This middle layer takes the request from the controller, communicates with the repository, and performs the validation. This middle layer is the service layer:

Figure 6.4: Service Layer design pattern UML diagram

As the diagram in Figure 6.4 shows, UserService implements the IUserService interface, and the client submits a new user insertion request. The business logic is then checked in the UserService, and either a new user is inserted or an appropriate error is returned to the client. The following code can be considered for this class diagram:

public interface IRepository
{
    void Insert(User user);
    User FindByUserName(string username);
}

public class UserRepository : IRepository
{
    public User FindByUserName(string username)
        => UserDbSet.Users.FirstOrDefault(x => x.Username == username);

    public void Insert(User user) => UserDbSet.Users.Add(user);
}

public interface IUserService
{
    void Add(User user);
}

public class UserService : IUserService
{
    private readonly IRepository _repository;

    public UserService() => _repository = new UserRepository();

    protected bool IsValid(User user)
        => _repository.FindByUserName(user.Username) == null;

    public void Add(User user)
    {
        if (IsValid(user))
            _repository.Insert(user);
        else
            throw new Exception("User exists!");
    }
}

The preceding code shows that the business logic required to insert a new user happens inside the UserService, before the insert operation is performed. In this way, the client does not need to communicate with the repository, and the UserService provides a new interface through which the client sends requests. Through this interface, the client submits the insertion request; the UserService performs the necessary checks and, if the request is valid and can be performed, handles it. To use the preceding code, you can do the following:

IUserService userService = new UserService();

userService.Add(new User { Username = "user5", Password = "123456" });

In the preceding implementation, one important point is that the client is accessing an object of type User. Usually, the repository's model is different from the model the client works with. This is because the repository usually works with a model that matches the data source, but the client works with a model that matches its needs. Therefore, it may be necessary to map between the model sent by the client and the model delivered to the repository in the implementation of the service layer.
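The mapping described above can be sketched as follows. The UserDto and UserEntity classes and the manual UserMapper are hypothetical names used only for illustration (a mapping library could equally be used); the point is that the service layer translates the client's model into the model the repository expects:

```csharp
// Hypothetical client-facing model (DTO); the repository keeps its own model.
public class UserDto
{
    public string Username { get; set; }
    public string Password { get; set; }
}

public class UserEntity
{
    public string Username { get; set; }
    public string PasswordHash { get; set; }
}

public static class UserMapper
{
    // Manual mapping performed inside the service layer before the
    // request is handed to the repository.
    public static UserEntity ToEntity(UserDto dto) => new UserEntity
    {
        Username = dto.Username,
        // Illustrative placeholder only; a real service would hash properly.
        PasswordHash = "hashed:" + dto.Password
    };
}
```

With this in place, the service layer accepts a UserDto from the client, maps it with `UserMapper.ToEntity`, and passes the resulting entity to the repository, so neither side depends on the other's model.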

Notes:

In examining the service layer design pattern, it is very important to pay attention to different types of business logic. In general, there are two types of business logic:
Business logic related to the domain, called domain logic, in which we implement a solution for a problem within the domain; for example, how funds are withdrawn or how the balance is increased.
The business logic of the application, called application logic, in which the overall withdrawal process is implemented. For example, to withdraw money, first the user's request is received, then the balance is checked, then the current balance is updated, and finally a response is returned to the user. How the account balance is checked or how the current balance is updated is addressed in the domain logic.
One of the most important applications of the service layer is domain-driven design. There is a need to establish a connection between several domains, and a series of checks take place. This way of implementing a service layer is also called a domain facade.
What methods or behaviors should be presented to the user through the service layer? The answer to this question is simple: the service layer should provide the capabilities that the user needs. In other words, the capabilities provided by the service layer are based on user requirements. The user and their needs may relate directly to the User Interface (UI), because the UI is usually designed and implemented based on the user's needs. It should also be noted that most requirements in implementing an application are limited to Create, Read, Update, and Delete (CRUD) operations. Implementing each part of CRUD usually includes a series of different tasks. For example, creation often requires a series of validations followed by an insertion. After the creation completes, it is usually necessary to inform other parts of the program about the result of this operation so that those parts can do other necessary work. This part of the work, which includes informing other parts and coordinating with them, is one of the duties of the service layer.
To implement the service layer, the domain facade, and the operation script methods can be used:
In the domain facade implementation, the service layer is defined as a series of thin classes or interfaces whose implementations contain no business logic (hence the word thin); business logic is implemented only through the domain model. These interfaces define the boundary and the range within which the user layer can communicate.
In the operation script implementation, unlike the domain facade method, the service layer is defined as a series of fatter classes that contain business logic (hence the word fatter). In this type of implementation, each class that implements business logic is usually called an application service.
To implement the business logic of the domain, both the domain model and the transaction script design pattern can be used. It is recommended to use a domain model because the business logic is not repeated in different departments, and we will not face duplication.
To implement the service layer, it is better to implement it locally first, and then according to the need, if necessary, add the possibility of remote to it. To add remote functionality, you may need to use a remote facade.
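The split between domain logic and application logic described in these notes can be sketched as follows. The Account and AccountService class names are hypothetical: the domain model knows how to withdraw, while the service orchestrates the use case and coordinates follow-up work:

```csharp
using System;

// Domain logic: the Account domain model knows HOW a withdrawal works.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal balance) => Balance = balance;

    public void Withdraw(decimal amount)
    {
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient balance.");
        Balance -= amount;
    }
}

// Application logic: the service layer receives the request, delegates
// the domain rule to the model, and coordinates follow-up work such as
// informing other parts of the program.
public class AccountService
{
    public string Withdraw(Account account, decimal amount)
    {
        account.Withdraw(amount);    // the domain rule lives in the model
        // ... notify other components here (messaging, email, logging) ...
        return $"New balance: {account.Balance}";
    }
}
```

For example, `new AccountService().Withdraw(new Account(100m), 30m)` returns `"New balance: 70"`; the balance check itself never leaks into the service layer.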
Consequences: Advantages

Using the service layer, the domain-related business logic can be kept inside the domain, while the application logic is implemented in another part. This separation places the different kinds of business logic in different parts depending on their type (domain logic or application logic).
Consequences: Disadvantages

If we face a single user and do not need to communicate with different transaction resources to meet their needs, this design pattern will increase the complexity.
One of the points mentioned in implementing this pattern is its remote implementation. If there is no good cost-benefit analysis between the remote and local implementation of this design pattern, the implementation will increase complexity without achieving a specific advantage.
Applicability:

When faced with several different users and the need to communicate with multiple transaction sources for each user, then using this design pattern can be useful.
Related patterns:

Some of the following design patterns are not related to the service layer design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Repository
Remote facade
Domain model
Transaction script
Conclusion
In this chapter, you got acquainted with the design patterns related to domain logic and learned how to manage different businesses suitably using different design patterns. Always try to strike a good balance between the complexity of the business and the upcoming requirements, and choose the more appropriate design pattern. In other words, you cannot blindly apply one fixed design pattern to every business without careful examination; doing so will reduce the quality of the code, and over time, code maintenance will become a very serious challenge. In the next chapter, you will learn about design patterns related to data source architecture.

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors:

https://discord.bpbonline.com